iis – How to use Application Request Routing and URL Rewrite to redirect an HTTP request to another server transparently?

I have a DNS entry that points to two servers.
When an HTTP request arrives at either of those servers, I need to forward it to another server (and port).

I need something like this:

http://www.contoso.com/something arrives on server1 or server2
server1 sends the request to http://server3:8888/something and returns the response transparently to the requester, as if the response had been produced by server1 itself.

I’ve already installed URL Rewrite and Application Request Routing on the two front servers.
But I don’t know how to write the rules.
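
For context, something along these lines is roughly what I imagine the rule should look like (untested sketch; it assumes ARR's proxy mode is already enabled at the server level, and the rule name is a placeholder):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward every incoming request to server3:8888, keeping the original path -->
      <rule name="ReverseProxyToServer3" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://server3:8888/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>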

Any help is appreciated.

http – Is there a reliable way to get the fingerprint of a file hosted online, without fully downloading it?

Background

As background to this question, I have been building my own imageboard that prevents (for example) duplicate images from being downloaded again and again by the client. The way I do this is to keep all files in a database keyed by a hash of the file. The client sees the hash and first checks its own database to see whether the file has already been downloaded before actually making a request. Similarly on the server side: I also prevent duplicate uploads by having the client send me the hash first.

I am now expanding this into a more general-purpose networking library for downloading files from the web, and to my dismay I discovered that not all servers will supply me with any sort of hash.

Question

In an effort to de-duplicate downloads, and to resume partial downloads whose URL has changed, is there a way to reliably fingerprint a file from its headers and URL?

Taking an example of a plain HEAD request:

QVariant reply->header( QNetworkRequest::ContentLengthHeader )
int
44374

QUrl url
scheme()   : https
userName() : NULL
password() : NULL
host()     : i.imgur.com
port()     : -1
path()     : /oEdf6Rl.png
fragment() : NULL
query()    : NULL
QNetworkReply* reply
Connection: keep-alive
Content-Length: 44374
Last-Modified: Sun, 21 Feb 2021 15:14:36 GMT
ETag: "83c16cca4ee371145485130383104315"
Content-Type: image/png
cache-control: public, max-age=31536000
Accept-Ranges: bytes
Date: Fri, 26 Feb 2021 04:14:22 GMT
Age: 392375
X-Served-By: cache-bwi5134-BWI, cache-yul12821-YUL
X-Cache: HIT, HIT
X-Cache-Hits: 1, 2
X-Timer: S1614312862.217094,VS0,VE0
Strict-Transport-Security: max-age=300
Access-Control-Allow-Methods: GET, OPTIONS
Access-Control-Allow-Origin: *
Server: cat factory 1.0
X-Content-Type-Options: nosniff

The only things that seem stable here are the MIME type and the file size. One thing I would be willing to do is an Accept-Ranges download of certain byte ranges, since I have found most servers do support range requests, and from there create a hash of the resulting byte array and fingerprint the file that way.
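
A rough sketch of what I mean, in Qt (untested; the 64 KiB range and SHA-256 are arbitrary choices on my part, and it assumes a running event loop and that the manager outlives the request):

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QCryptographicHash>
#include <QDebug>

// Request only the first 64 KiB of the file and hash it as a cheap fingerprint.
QNetworkAccessManager manager;
QNetworkRequest request(QUrl("https://i.imgur.com/oEdf6Rl.png"));
request.setRawHeader("Range", "bytes=0-65535"); // relies on Accept-Ranges: bytes

QNetworkReply *reply = manager.get(request);
QObject::connect(reply, &QNetworkReply::finished, [reply]() {
    // Expect HTTP 206 Partial Content if the server honoured the Range header.
    QByteArray partial = reply->readAll();
    QByteArray digest  = QCryptographicHash::hash(partial, QCryptographicHash::Sha256);
    // Combine the digest with Content-Length (and ETag when present) as the fingerprint.
    qDebug() << digest.toHex();
    reply->deleteLater();
});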

However, I am skeptical about whether this would work reliably. I am especially concerned about something like two image frames that are nearly identical but are, in fact, not the same file.

Am I pursuing a lost cause here? Or is there a reasonable way to fingerprint a file hosted on the web, without having to fully download it?

I’d like to do this for any file larger than 1 MB, because I have an exceptionally slow connection at times. Thanks.

google pagespeed – Avoid multiple page redirects, from HTTP to HTTPS and from non-www HTTPS to www HTTPS

Humbly, in the Google PageSpeed Insights test I got a score of 95 for mobile and 95 for desktop, with pretty much just one error:

Avoid multiple page redirects

The first entry of the error is about the HTTP to HTTPS redirect, and the second is about the non-www HTTPS to www HTTPS redirect.

The first entry is odd, because it's Google themselves who passionately promote moving from HTTP to HTTPS, so why would they flag me for doing so?
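
For illustration, my understanding is that the audit wants each variant to 301 straight to the canonical https://www origin in a single hop, something like this nginx sketch (example.com stands in for my domain; nginx is shown only as an illustration):

# Illustration only: every non-canonical variant 301s directly to https://www in one hop.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity
    return 301 https://www.example.com$request_uri;
}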

8 – How to automatically add http prefix to links in a form?

I am trying to come up with a solution to fix the D8 Link module to allow URLs like example.com (the most common use case, imho). I was thinking it would be simple enough to modify the submitted value during a custom validate function, but regardless of what I set the value to, it still validates against the original value.

In a form alter I have done this:

$form['field_linktest']['widget'][0]['#element_validate'][] = '_fix_link_field_value';

and then in that validate function I set the URL's uri value:

function _fix_link_field_value(&$element, FormStateInterface $form_state, &$complete_form) {
  $url = $form_state->getValue('field_linktest');
  $url[0]['uri'] = 'http://example.com';
  $form_state->setValue('field_linktest', $url);
}

But I still get the error: Manually entered paths should start with one of the following characters: / ? #

Drupal 8, http client manager, json with nested data

I'm currently using Drupal's HTTP Client Manager module to pull data from a JSON endpoint. I am having trouble getting this to work with a nested structure like this:

{
  "count": 2431,
  "page_info": [],
  "content": [
    { "id": 1, "title": "Ipsum lorem" },
    { "id": 2, "title": "dolor set" }
  ]
}

The service description looks like this:

operations:
  GetEvents:
    httpMethod: "GET"
    uri: "8892e8de"
    summary: "Gets the available Posts."
    responseClass: "PostsList"

models:
  Posts:
    type: "object"
    location: "json"
    properties:
      content:
        location: "json"
        type: "array"

  PostsList:
    type: "array"
    location: "json"
    items:
      "$ref": "Posts"

This is returning an empty object of arrays and not picking up any data. What is the best way to access the nested data?
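
For illustration, one shape the description could take is to model the response as an object and declare the nested content array explicitly; I'm not certain the keys below are exactly right, so treat this as a sketch:

models:
  Post:
    type: "object"
    location: "json"

  PostsList:
    type: "object"
    location: "json"
    properties:
      count:
        location: "json"
        type: "integer"
      content:
        location: "json"
        type: "array"
        items:
          "$ref": "Post"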

http – Nginx showing 504 Gateway timed out when uploading large files

I have a simple PHP script that uploads a file. The setup consists of one nginx reverse proxy and another nginx server (the upstream). The problem is that when I try to upload very large files (~2 GB) I get an error:

504 Gateway Time-out

Here is my reverse proxy configuration:

server {
    listen 80;
    
    server_name upload.mycloudhost.com;

    proxy_buffer_size 1024k;
    proxy_buffers 4 1024k;
    proxy_busy_buffers_size 1024k;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
    client_body_timeout         600;
    client_header_timeout       600;
    keepalive_timeout           600;
    uwsgi_read_timeout          600;

    location / {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://localhost:13670;
    }
}

And here is the other nginx server (upstream):

server {
    listen 80;
    
    server_name upload.mycloudhost.com;

    client_max_body_size 80G;
    proxy_buffer_size 1024k;
    proxy_buffers 4 1024k;
    proxy_busy_buffers_size 1024k;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
    client_body_timeout         600;
    client_header_timeout       600;
    keepalive_timeout           600;
    uwsgi_read_timeout          600;

    root /var/www/mycloudhost;
    index index.php index.html index.htm index.nginx-debian.html;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}
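
For reference, this is a sketch of additional directives that are commonly suggested for large uploads; I have not verified which of them, if any, actually fixes this:

# On the reverse proxy: allow large request bodies and stream them to the upstream
# instead of buffering the whole upload first.
client_max_body_size    80G;
proxy_request_buffering off;

# On the upstream: the 504 could also come from PHP-FPM, so raise the FastCGI timeouts.
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;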

http – Timeless timing attacks and response jitter

I’ve been researching timeless timing attacks, i.e. timing attacks that use concurrency rather than round-trip time. Here is an article by PortSwigger with links to the original paper by Van Goethem. Basically, it says that if you pack two requests into a single TCP packet for HTTP/2 or Tor, the network jitter on the way in is cancelled out. I understand how this works when going from client => server, but when the server responds, the HTTP responses are in different packets again, so wouldn't they be affected by jitter on the return trip? I haven't seen any mention of network jitter on the return trip in any of the articles I've read.

web browser – What happens if multiple Strict-Transport-Security headers are set in the HTTP response?

If multiple Strict-Transport-Security headers are set with different settings (e.g. different max-age values), how will the browser behave? Does the browser just follow one of them, or does it simply error out and discard them all? Is this behaviour different across browsers?
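
For concreteness, I mean a response along these lines (the values are made up):

HTTP/1.1 200 OK
Content-Type: text/html
Strict-Transport-Security: max-age=31536000; includeSubDomains
Strict-Transport-Security: max-age=300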

seo – Splitting an existing site into sub-domains and using a landing page with an http 301 – is this just wrong?

I currently manage a website for a large(ish) primary school. I’m in the process of rebuilding their website, and have had a thought today about how to potentially better organise the content.

We currently have a Joomla site running at myprimary.com, and most of the content and menu items are specifically related to the primary school. However, there are two menu items (“Nursery” and “After School Club”) that I feel would do better with their own Joomla sites as they are technically separate organisations that are just closely related to the school (and also share the premises).

What I’m thinking of doing involves using 3 subdomains:

  • school.myprimary.com
  • nursery.myprimary.com
  • club.myprimary.com

I would then use myprimary.com as a landing page. In theory this page could display a menu for each of the subdomains, but ideally I'd like it to just redirect to school.myprimary.com. This would allow people to carry on using the domain in the same way as they have in the past, and I would expect that the majority of visitors are aiming for the actual school information. Each subdomain would contain links to the other two sites in a "Visit our Other Sites" fashion, and they would be visibly similar to each other whilst retaining elements of individuality (colour schemes, etc.).
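
To illustrate, the landing-page redirect I have in mind would be something like this (Apache .htaccess assumed, since the sites run Joomla; untested sketch):

# .htaccess at myprimary.com: send every request to the school subdomain with a 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?myprimary\.com$ [NC]
RewriteRule ^(.*)$ https://school.myprimary.com/$1 [R=301,L]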

Another benefit of doing this is that if the school is closed due to snow, I can switch to serving a completely static HTML page from myprimary.com explaining the situation, alleviating some of the strain caused by thousands of visits within a very short timeframe on days when the weather is questionable.

A separate point to note is that we currently use a CNAME record to handle the www subdomain. Whilst I don’t see this causing a problem with the landing page, we may have to train people to not try and use www.nursery.myprimary.com.

Is this approach something that makes sense, both in terms of SEO and usability? Or should I just ditch this idea now before I get stuck into it?

PHP SoapClient() call to an HTTP URL, made from a domain with an SSL/HTTPS certificate – Error 500

Good morning!

I have an internal project at my company in which I query a SOAP web service whose URL is plain http.

I have a Linux Ubuntu 18 VPS, and an SSL certificate was applied via "certbot" to the domain linked to this VPS's IP.

When I try to perform the query from a PHP script, error 500 is returned: https://i.stack.imgur.com/c7HBL.png

When I comment out the part of the code responsible for the SoapClient() query, the script runs normally.

Is there any configuration that should be applied to this setup to avoid this incompatibility?

Sorry if I'm asking something very simple; I'm a beginner and have a lot of doubts.

Thanks for any help.

$soapcliente = new SoapClient('http://wsgsxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?WSDL');
$param = array (
            'EmpCliente' => $EmpClient,
            'Login' => $Login,
            'Senha' => $Senha,
            'ObterLocalizacao' => 'false'
);

$response = $soapcliente -> Lista_UltimasPosicoes($param);
$array = json_decode(json_encode($response), true);

$retorno = $array['Lista_UltimasPosicoesResult']['Posicao'];
$count_retorno = count($retorno);