php – Nginx isn’t passing on the last forward slash of an arg to a script

I have an issue where nginx isn’t passing on the last forward slash of an argument to a script.

Example
https://xxxx.com/test_t/company/Default/icon_category/Hotel.png

Rewrite:
location / {
    rewrite "^/([a-zA-Z0-9]+.*)/([a-zA-Z0-9]+.*)$"
        /test2.php?t=$1&file=$2 last;
}

test2.php simply gets and echoes $t and $file.

Actual outcome: test_t/company/Default/icon_categoryHotel.png

Expected outcome: test_t/company/Default/icon_category/Hotel.png
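For reference, the slash between the two capture groups is matched by the pattern itself but captured by neither group, which is consistent with the concatenated output above. A hedged sketch (not from the original post): keeping that slash inside the first capture makes $1 followed by $2 reproduce the full path.

location / {
    # Sketch: the trailing "/" stays inside $1, so echoing $t then $file
    # yields test_t/company/Default/icon_category/ + Hotel.png
    rewrite "^/([a-zA-Z0-9]+.*/)([a-zA-Z0-9]+.*)$" /test2.php?t=$1&file=$2 last;
}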

linux – nginx reverse proxy traversals attacks

I am using nginx as a reverse proxy with the following config:

location / {
    resolver 1.1.1.1;
    proxy_http_version 1.1;
    proxy_pass http://webserver.example/$http_host$request_uri;
}

I have only static content hosted on the webserver, which is available over

http://webserver.internal/website1.example
and
http://webserver.internal/website2.example

The reverse proxy handles HTTPS and scaling in front of this.
The goal is to serve these websites via https://website1.example and https://website2.example, which works fine with the above config.

My question is whether this approach is safe against path traversal attacks, such as requesting website1.example/../website2.example.

The example attack does not currently work, but I am not sure whether that means I am protected against this kind of attack.
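One relevant detail: nginx merges dot-segments when it builds $uri, while $request_uri is the raw request line, so the raw form is what gets replayed to the upstream in the config above. A hedged variant (hostnames as in the question; note that $uri is also percent-decoded, which is a separate trade-off) that forwards the normalized path instead:

location / {
    resolver 1.1.1.1;
    proxy_http_version 1.1;
    # $uri already has "../" and "//" merged away by nginx,
    # unlike the raw $request_uri.
    proxy_pass http://webserver.example/$http_host$uri$is_args$args;
}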

ssl – Nginx fails to proxy to Node

I have a running Node app that is live and works fine on http and https.

I set up Nginx and it is also running fine; tested through an SSH tunnel, it returns static files correctly (such as MyPath/index.html).

However, I am trying to get Nginx to work as a reverse proxy for Node, because I want to run another app on the same machine and have Nginx route the incoming requests to the right app.

But there seems to be an issue with Nginx I cannot figure out. I suspect it is a config problem. When I try to reach my Node app, I always get an error page from my browser, saying that there is an SSL issue.

Nginx config

server {
        listen [::]:4444 default_server;
        server_name localhost mysite.com www.mysite.com;    

        access_log /home/mysite/access-log;    

        location / {
            proxy_pass http://127.0.0.1:5555;
        }
}    

I tried changing http://127.0.0.1:5555 to https://127.0.0.1:6666 but that didn’t change anything.

Node app

const port = 5555;
const secureport = 6666;    

const privateKey = fs.readFileSync('PATHTOKEY');
const certificate = fs.readFileSync('PATHTOCERT');
const credentials = {key: privateKey, cert: certificate};    


I use an Express app instance here and have also configured CSP with Helmet. But I don’t think that’s the problem, because I disabled Helmet and that did not change anything.

const httpServer = http.createServer(app);
const httpsServer = https.createServer(credentials, app);    

httpServer.listen(port);
httpsServer.listen(secureport);
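If the browser is reaching this server block over HTTPS, the SSL error may simply be that the listener does not terminate TLS anywhere. A minimal sketch, assuming nginx should own the certificate and speak plain HTTP to Node on 5555 (the paths below are placeholders, not from the question):

server {
    listen 4444 ssl;
    listen [::]:4444 ssl;
    server_name mysite.com www.mysite.com;

    # Placeholder paths -- could be the same cert/key the Node app loads.
    ssl_certificate     /path/to/certificate.pem;
    ssl_certificate_key /path/to/private-key.pem;

    location / {
        # TLS ends here; Node only needs its plain HTTP listener.
        proxy_pass http://127.0.0.1:5555;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}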

nginx – How do I troubleshoot 403 access forbidden by rule error?

In the first 40-50 minutes after I restart my server (Apache with Engintron) I get a 403 nginx error, and the logs say “access forbidden by rule”.
I have all my traffic redirected to HTTPS.
This is my /etc/nginx/common_https.conf:

# Common definitions for HTTPS content

# TLS/SSL common
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# Diffie-Hellman parameter for DHE ciphersuites (2048 bits)
ssl_dhparam /etc/ssl/certs/dhparam.pem;

# --- Protocols & Ciphers (start) ---

# Maximum client support (enabled by default)
# Supports Firefox 1, Android 2.3, Chrome 1, Edge 12, IE8 on Windows XP, Java 6, OpenSSL 0.9.8, Opera 5 & Safari 1
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers / removed /;
ssl_prefer_server_ciphers on;

# Intermediate client support (disabled by default)
# Supports Firefox 27, Android 4.4.2, Chrome 31, Edge, IE 11 on Windows 7, Java 8u31, OpenSSL 1.0.1, Opera 20 & Safari 9
#ssl_protocols TLSv1.2 TLSv1.3;
#ssl_ciphers /removed/;
#ssl_prefer_server_ciphers off;

# Modern client support (disabled by default)
# Supports Firefox 63, Android 10.0, Chrome 70, Edge 75, Java 11, OpenSSL 1.1.1, Opera 57 & Safari 12.1
#ssl_protocols TLSv1.3;
#ssl_prefer_server_ciphers off;

# --- Protocols & Ciphers (finish) ---

# Set the port for HTTPS proxying
set $PROXY_TO_PORT 443;

# Include common definitions and rules with HTTP
include common_http.conf;

Could this rule be the problem: ssl_session_cache shared:SSL:50m; ?

And this is my /etc/nginx/common_http.conf:

# Common definitions for HTTP content

# Initialize important variables
set $CACHE_BYPASS_FOR_DYNAMIC 0;
set $CACHE_BYPASS_FOR_STATIC 0;
set $PROXY_DOMAIN_OR_IP $host;
set $PROXY_FORWARDED_HOST $host;
set $PROXY_SCHEME $scheme;
set $SITE_URI "$host$request_uri";

# Generic query string to request a page bypassing Nginx's caching entirely for both dynamic & static content
if ($query_string ~* "nocache") {
    set $CACHE_BYPASS_FOR_DYNAMIC 1;
    set $CACHE_BYPASS_FOR_STATIC 1;
}

# Proxy requests to "localhost"
if ($host ~* "localhost") {
    set $PROXY_DOMAIN_OR_IP "127.0.0.1";
}

# Disable caching for cPanel specific subdomains
if ($host ~* "^(webmail|cpanel|whm|webdisk|cpcalendars|cpcontacts).") {
    set $CACHE_BYPASS_FOR_DYNAMIC 1;
    set $CACHE_BYPASS_FOR_STATIC 1;
}

# Fix Horde webmail forwarding
if ($host ~* "^webmail.") {
    set $PROXY_FORWARDED_HOST '';
}

# Set custom rules like domain/IP exclusions or redirects here
include custom_rules;

location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    include proxy_params_common;
    # === MICRO CACHING ===
    # Comment the following line to disable 1 second micro-caching for dynamic HTML content
    include proxy_params_dynamic;
}

# Enable browser cache for static content files (TTL is 1 hour)
location ~* \.(?:json|xml|rss|atom)$ {
    include proxy_params_common;
    include proxy_params_static;
    expires 1h;
}

# Enable browser cache for CSS / JS (TTL is 30 days)
location ~* \.(?:css|js)$ {
    include proxy_params_common;
    include proxy_params_static;
    expires 30d;
}

# Enable browser cache for images (TTL is 60 days)
location ~* \.(?:ico|jpg|jpeg|gif|png|webp)$ {
    include proxy_params_common;
    include proxy_params_static;
    expires 60d;
}

# Enable browser cache for archives, documents & media files (TTL is 60 days)
location ~* \.(?:3gp|7z|avi|bmp|bz2|csv|divx|doc|docx|eot|exe|flac|flv|gz|less|mid|midi|mka|mkv|mov|mp3|mp4|mpeg|mpg|odp|ods|odt|ogg|ogm|ogv|opus|pdf|ppt|pptx|rar|rtf|swf|tar|tbz|tgz|tiff|txz|wav|webm|wma|wmv|xls|xlsx|xz|zip)$ {
    set $CACHE_BYPASS_FOR_STATIC 1;
    include proxy_params_common;
    include proxy_params_static;
    expires 60d;
}

# Enable browser cache for fonts & fix @font-face cross-domain restriction (TTL is 60 days)
location ~ \.(eot|ttf|otf|woff|woff2|svg|svgz)$ {
    include proxy_params_common;
    include proxy_params_static;
    expires 60d;
}

# Prevent logging of favicon and robot request errors
location = /favicon.ico {
    include proxy_params_common;
    include proxy_params_static;
    expires 60d;
    log_not_found off;
}

location = /robots.txt  {
    include proxy_params_common;
    include proxy_params_static;
    expires 1d;
    log_not_found off;
}

# Deny access to files like .htaccess or .htpasswd
location ~ /\.ht {
    deny all;
}

How do I troubleshoot further to find exactly which rule causes the issue?
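One hedged way to narrow it down: “access forbidden by rule” comes from allow/deny rules, and the error log line already contains the client IP and request URI, so matching that URI against the location blocks usually identifies the rule. To make the match explicit, a suspect location can be tagged with a response header; the header name X-Debug-Loc below is just an illustrative choice.

# Temporarily raise verbosity so the forbidden requests log full context.
error_log /var/log/nginx/error.log info;

# Tag a suspect block; "always" adds the header even on 403 responses.
location ~ /\.ht {
    add_header X-Debug-Loc "deny-dotfiles" always;
    deny all;
}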

http – Nginx showing 504 Gateway timed out when uploading large files

I have a simple PHP file upload script that uploads a file. The server consists of one nginx reverse proxy and another nginx server (upstream). The problem is that when I try to upload very large files (~2 GB) I get this error:

504 Gateway Time-out

Here is my reverse proxy configuration:

server {
    listen 80;
    
    server_name upload.mycloudhost.com;

    proxy_buffer_size 1024k;
    proxy_buffers 4 1024k;
    proxy_busy_buffers_size 1024k;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
    client_body_timeout         600;
    client_header_timeout       600;
    keepalive_timeout           600;
    uwsgi_read_timeout          600;

    location / {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://localhost:13670;
    }
}

And here is the other nginx server (upstream):

server {
    listen 80;
    
    server_name upload.mycloudhost.com;

    client_max_body_size 80G;
    proxy_buffer_size 1024k;
    proxy_buffers 4 1024k;
    proxy_busy_buffers_size 1024k;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
    client_body_timeout         600;
    client_header_timeout       600;
    keepalive_timeout           600;
    uwsgi_read_timeout          600;

    root /var/www/mycloudhost;
    index index.php index.html index.htm index.nginx-debian.html;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}
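Two things stand out as possible causes, sketched below with hedging: the reverse proxy block shown sets no client_max_body_size (unless it is set elsewhere, the 1 MB default rejects large bodies), and the PHP location sets no fastcgi_* timeouts, so the upstream nginx falls back to the 60-second fastcgi_read_timeout while uwsgi_read_timeout does not apply to a fastcgi_pass backend. A sketch of the additions:

# On the reverse proxy server block:
client_max_body_size 80G;

# On the upstream, inside the PHP location (existing fastcgi_* lines kept):
location ~ \.php$ {
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout    600;
    fastcgi_read_timeout    600;
    # ... existing try_files / fastcgi_pass / fastcgi_param directives ...
}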

nginx alias with location regex gets wrong file name

My location config:

location ~ ^/maat/(js|css|images)/ {
    alias /usr/local/services/gdt-frontend-test-1.0/maat/$1/;
    expires 1y;
}

When I request http://xxxx/maat/js/entry_xxx.js, I get a 301 with Location: http://xxxx/maat/js/entry_xxx.js/.

The debug log is this:

2021/02/22 10:57:14 (debug) 3423#0: *19 http header: "Host: audit2.test.qq.com"
2021/02/22 10:57:14 (debug) 3423#0: *19 http header: "Connection: keep-alive"
2021/02/22 10:57:14 (debug) 3423#0: *19 http header: "Accept-Encoding: gzip"
2021/02/22 10:57:14 (debug) 3423#0: *19 http header done
2021/02/22 10:57:14 (debug) 3423#0: *19 event timer del: 531: 28164571712
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 __mydebug_menshen. ngx_http_menshen_handler is 
called r: 00000000029F9630 nginx_version: 1016001
2021/02/22 10:57:14 (debug) 3423#0: *19 <1>status: MENSHEN_STATUS_CTX_CREATE_INIT. uri: 
/maat/js/entry_e731dc7.js. args:  r: 00000000029F9630 r->main: 00000000029F9630 r->count: 1
2021/02/22 10:57:14 (debug) 3423#0: *19 http cleanup add: 00000000029FA4E0
2021/02/22 10:57:14 (debug) 3423#0: *19 server menshen_module: -1 0000000002A56378
2021/02/22 10:57:14 (debug) 3423#0: *19 host and cgi not match ,ptr_conf_rule null
2021/02/22 10:57:14 (debug) 3423#0: *19 rewrite phase: 2
2021/02/22 10:57:14 (debug) 3423#0: *19 rewrite phase: 3
2021/02/22 10:57:14 (debug) 3423#0: *19 test location: "/maat/"
2021/02/22 10:57:14 (debug) 3423#0: *19 test location: "mo/"
2021/02/22 10:57:14 (debug) 3423#0: *19 test location: "logOut"
2021/02/22 10:57:14 (debug) 3423#0: *19 test location: "api/"
2021/02/22 10:57:14 (debug) 3423#0: *19 test location: ~ "^/maat/(js|css|images)/"
2021/02/22 10:57:14 (debug) 3423#0: *19 using configuration "^/maat/(js|css|images)/"
2021/02/22 10:57:14 (debug) 3423#0: *19 http cl:-1 max:8388608
2021/02/22 10:57:14 (debug) 3423#0: *19 rewrite phase: 5
2021/02/22 10:57:14 (debug) 3423#0: *19 rewrite phase: 6
2021/02/22 10:57:14 (debug) 3423#0: *19 post rewrite phase: 7
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 8
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 9
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 10
2021/02/22 10:57:14 (debug) 3423#0: *19 access phase: 11
2021/02/22 10:57:14 (debug) 3423#0: *19 access phase: 12
2021/02/22 10:57:14 (debug) 3423#0: *19 access phase: 13
2021/02/22 10:57:14 (debug) 3423#0: *19 post access phase: 14
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 15
2021/02/22 10:57:14 (debug) 3423#0: *19 generic phase: 16
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 17
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 18
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 19
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 20
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 21
2021/02/22 10:57:14 (debug) 3423#0: *19 content phase: 22
2021/02/22 10:57:14 (debug) 3423#0: *19 http script copy: "/usr/local/services/gdt-frontend-test-1.0/maat/"
2021/02/22 10:57:14 (debug) 3423#0: *19 http script capture: "js"
2021/02/22 10:57:14 (debug) 3423#0: *19 http script copy: "/"
2021/02/22 10:57:14 (debug) 3423#0: *19 http filename: "/usr/local/services/gdt-frontend-test-1.0/maat/js/"
2021/02/22 10:57:14 (debug) 3423#0: *19 add cleanup: 00000000029FA550
2021/02/22 10:57:14 (debug) 3423#0: *19 http static fd: -1
2021/02/22 10:57:14 (debug) 3423#0: *19 http dir
2021/02/22 10:57:14 (debug) 3423#0: *19 http finalize request: 301, "/maat/js/entry_e731dc7.js?" a:1, c:1
2021/02/22 10:57:14 (debug) 3423#0: *19 http special response: 301, "/maat/js/entry_e731dc7.js?"
2021/02/22 10:57:14 (debug) 3423#0: *19 http set discard body
2021/02/22 10:57:14 (debug) 3423#0: *19 charset: "" > "utf-8"
2021/02/22 10:57:14 (debug) 3423#0: *19 HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Mon, 22 Feb 2021 02:57:14 GMT
Content-Type: text/html
Content-Length: 162
Location: http://audit2.test.qq.com/maat/js/entry_e731dc7.js/
Connection: close
Expires: Tue, 22 Feb 2022 02:57:14 GMT
Cache-Control: max-age=31536000

2021/02/22 10:57:14 (debug) 3423#0: *19 write new buf t:1 f:0 0000000002B4F7D0, pos     0000000002B4F7D0, size: 287 file: 0, size: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 http write filter: l:0 f:0 s:287
2021/02/22 10:57:14 (debug) 3423#0: *19 http output filter "/maat/js/entry_e731dc7.js?"
2021/02/22 10:57:14 (debug) 3423#0: *19 http copy filter: "/maat/js/entry_e731dc7.js?"
2021/02/22 10:57:14 (debug) 3423#0: *19 http postpone filter "/maat/js/entry_e731dc7.js?" 0000000002B4FA10
2021/02/22 10:57:14 (debug) 3423#0: *19 write old buf t:1 f:0 0000000002B4F7D0, pos     0000000002B4F7D0, size: 287 file: 0, size: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 write new buf t:0 f:0 0000000000000000, pos 0000000000B97440, size: 116 file: 0, size: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 write new buf t:0 f:0 0000000000000000, pos 0000000000B97200, size: 46 file: 0, size: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 http write filter: l:1 f:0 s:449
2021/02/22 10:57:14 (debug) 3423#0: *19 http write filter limit 0
2021/02/22 10:57:14 (debug) 3423#0: *19 writev: 449 of 449
2021/02/22 10:57:14 (debug) 3423#0: *19 http write filter 0000000000000000
2021/02/22 10:57:14 (debug) 3423#0: *19 http copy filter: 0 "/maat/js/entry_e731dc7.js?"
2021/02/22 10:57:14 (debug) 3423#0: *19 http finalize request: 0, "/maat/js/entry_e731dc7.js?" a:1, c:1
2021/02/22 10:57:14 (debug) 3423#0: *19 http request count:1 blk:0
2021/02/22 10:57:14 (debug) 3423#0: *19 http close request
2021/02/22 10:57:14 (debug) 3423#0: *19 __mydebug. menshen cleanup r: 00000000029F9630
2021/02/22 10:57:14 (debug) 3423#0: *19 http log handler
2021/02/22 10:57:14 (debug) 3423#0: *19 http monitor handler
2021/02/22 10:57:14 (debug) 3423#0: *19 free: 00000000029F95E0, unused: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 free: 0000000002B4F380, unused: 2096
2021/02/22 10:57:14 (debug) 3423#0: *19 close http connection: 531
2021/02/22 10:57:14 (debug) 3423#0: *19 reusable connection: 0
2021/02/22 10:57:14 (debug) 3423#0: *19 free: 0000000002B6F400
2021/02/22 10:57:14 (debug) 3423#0: *19 free: 0000000002C032C0, unused: 136
2021/02/22 10:57:14 (debug) 3429#0: *20 http header: "Host: audit2.test.qq.com"
2021/02/22 10:57:14 (debug) 3429#0: *20 http header: "Connection: keep-alive"
2021/02/22 10:57:14 (debug) 3429#0: *20 http header: "Accept-Encoding: gzip"
2021/02/22 10:57:14 (debug) 3429#0: *20 http header done
2021/02/22 10:57:14 (debug) 3429#0: *20 event timer del: 22: 28164571836
2021/02/22 10:57:14 (debug) 3429#0: *20 generic phase: 0
2021/02/22 10:57:14 (debug) 3429#0: *20 __mydebug_menshen. ngx_http_menshen_handler is called r: 00000000029F9630 nginx_version: 1016001
2021/02/22 10:57:14 (debug) 3429#0: *20 <1>status: MENSHEN_STATUS_CTX_CREATE_INIT. uri: /maat/js/entry_e731dc7.js/. args:  r: 00000000029F9630 r->main: 00000000029F9630 r->count: 1
2021/02/22 10:57:14 (debug) 3429#0: *20 http cleanup add: 00000000029FA4E0
2021/02/22 10:57:14 (debug) 3429#0: *20 server menshen_module: -1 0000000002A56378

Why is the http filename the directory?
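For context, when alias is used inside a regex location, nginx takes the alias value (with captures substituted) as the complete file path; the remainder of the URI is not appended, which is why the filename resolves to the directory .../maat/js/ and the directory-redirect 301 is returned. A hedged sketch that captures the rest of the URI as well:

# Capture everything after /maat/ so alias resolves to the file itself.
location ~ ^/maat/((?:js|css|images)/.+)$ {
    alias /usr/local/services/gdt-frontend-test-1.0/maat/$1;
    expires 1y;
}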

nginx – Only certain countries can access particular route

I am using nginx/1.18.0 (Ubuntu) on an Ubuntu 20.04.1 LTS machine.

I have a laravel project and phpmyadmin running.

My /etc/nginx/sites-enabled/example-application file looks like the following:

server {
    listen 80;
    server_name http://78.46.214.238/;
    root /var/www/demo_laravel_nlg-generation/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    # phpMyAdmin:
    location /phpmyadmin {
        root /usr/share;
        index index.php;
    }
    # PHP files for phpMyAdmin:
    location ~ ^/phpmyadmin(.+\.php)$ {
        root /usr/share;
        index index.php;
        #fastcgi_read_timeout 300;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

}

To secure my phpMyAdmin web interface I was thinking of blocking certain countries or even regions, or allowing only specific IPs to access my phpmyadmin route; however, my web applications should still be accessible to everyone.

Any suggestions on how to do this in my /etc/nginx/sites-enabled/example-application file?

I really appreciate your replies!
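A hedged sketch of the IP-allowlist variant (the allowed network is a placeholder; blocking whole countries would additionally need the geoip/geoip2 module and its databases). Using ^~ keeps the generic \.php$ handler from matching phpMyAdmin requests, so the PHP handler is nested inside:

# Restrict only phpMyAdmin; the rest of the site stays open to everyone.
location ^~ /phpmyadmin {
    allow 203.0.113.0/24;   # placeholder: your own IPs / ranges
    deny  all;

    root /usr/share;
    index index.php;

    location ~ ^/phpmyadmin(.+\.php)$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }
}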

php – Configure MAMP PRO to run Magento 2 on Nginx

I’m trying to run Magento 2 with Nginx on MAMP, but I’m always getting this error in the logs:

2021/02/19 12:38:13 [error] 10836#0: *211 upstream sent too big header while reading response header from upstream, client: 127.0.0.1, server: magento.loc, request: “GET /en_gb HTTP/1.1”, upstream: “fastcgi://unix:/Applications/MAMP/Library/logs/fastcgi/nginxFastCGI_php7.3.24.sock:”, host: “magento.loc”

I tried adding some parameters (shown in a screenshot that is not reproduced here) and reloaded nginx, but that didn’t help; the error is the same.

What can I do to resolve it?
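“upstream sent too big header” from a fastcgi:// upstream generally means the response headers (Magento can emit large ones) do not fit in the default FastCGI buffer. A hedged sketch of larger buffers, assuming MAMP PRO lets you add custom directives to the host’s PHP location block; the values are illustrative:

# Larger buffers for FastCGI response headers.
fastcgi_buffer_size        32k;
fastcgi_buffers            16 32k;
fastcgi_busy_buffers_size  64k;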

linux – nginx proxy_pass works via curl but does not work with my browser

I’m trying to set up a reverse proxy with nginx and added the proxy_pass directive; for testing purposes I forward it to google.com. My nginx.conf looks like this:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
#include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}


http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    server {
        listen  80 default_server;
        listen [::]:80;
        server_name localhost;
        location / {
                proxy_pass http://www.google.com;
        }
    }


    ##
    # Virtual Host Configs
    ##

    #include /etc/nginx/conf.d/*.conf;
    #include /etc/nginx/sites-enabled/*;
}

However, when testing it with curl http://161.35.216.150 it works just fine, but when I enter the IP in my browser it does not work. The error.log does not show any entry and nginx -t reports no problems.

my system is:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

There are some questions around with similar problems, but the solutions provided don’t work for me (e.g. they are SUSE-specific or relate to includes that override the conf file).
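One hedged way to see what the browser is doing differently from curl is to log the upstream status and any redirect the proxy sends back; the log format name upstream_debug below is just an illustrative choice, and the snippet belongs in the http block:

# Hypothetical debug log: request, returned status, upstream status,
# any Location header, and the user agent, to compare curl vs. browser.
log_format upstream_debug '$remote_addr "$request" -> $status '
                          'upstream=$upstream_status '
                          'location="$sent_http_location" ua="$http_user_agent"';
access_log /var/log/nginx/proxy_debug.log upstream_debug;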

Thank you!

wordpress – How to stop domains from pointing to my server IP address and duplicating my site using nginx

To improve security, prevent host header attacks, and preserve your search rankings, here is what I recommend:

No default site

Simply drop all traffic that does not match your genuine website. Before using the config below, run the following example command on your server to generate a self-signed “dummy” certificate, which is necessary for responding to HTTPS requests.

mkdir /etc/ssl/dummy && openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/ssl/dummy/dummy.key -out /etc/ssl/dummy/dummy.crt

Now use the following two server blocks for your default site configuration.

server {
    listen [::]:80 default_server;
    listen      80 default_server;
    return 444;
}

server {
    listen     [::]:443 ssl http2 default_server;
    listen          443 ssl http2 default_server;
    ssl_certificate           /etc/ssl/dummy/dummy.crt;
    ssl_certificate_key       /etc/ssl/dummy/dummy.key;
    return 444;
}

Reload Nginx and it will drop all the copycat site connections.

Prevent framing

Somewhere in your genuine site’s server block, add the following header to prevent someone embedding your site as a frame / iframe at their domain name.

add_header X-Frame-Options "SAMEORIGIN";

Canonical URLs

In the <head> section of every page, add a canonical URL link element. If every page has something like <link rel="canonical" href="https://www.your-site.com/your-page/"> then even if someone copies your site at their domain name, search engines recognise your site as the original.
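Where editing the HTML of every page is not practical, the same canonical hint can also be sent as an HTTP header from nginx; a sketch (the hostname is a placeholder):

# Advertise the canonical URL via a Link header as well -- useful for
# responses such as PDFs that have no HTML <head> to edit.
add_header Link '<https://www.your-site.com$request_uri>; rel="canonical"';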