How to resolve single-name (no dot) domain names with upstream DNS server on Linux workstations

We are using dnsmasq on our firewall machine and have populated that machine’s hosts file with all our printers and other shared resource machines. This should effectively give us a distributed hosts file, since dnsmasq answers queries for any name it finds in the local machine’s hosts file.

This is working well from Windows machines. A NAS device, “tusker”, for example, is set up as 192.168.42.4, and I can “ping tusker” from any Windows machine and it correctly resolves to 192.168.42.4. We also have some Linux workstations, however, and none of them will resolve any single-name (single-label) domain name. They are a mix of mostly Debian-based distros (Debian, Ubuntu, Mint) plus Arch, and on every one of them “ping tusker” returns “temporary failure in name resolution”. They don’t seem to be passing single-name queries on to the DNS server at all: the resolver sees that there is no entry in the local hosts file and stops there without sending the query upstream.

I’ve tried “options ndots:0” in resolv.conf to no effect. Is there a way to tell the Linux resolver to always send names up to resolve regardless of how many levels are in the host name?
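
For what it’s worth, here is the usual workaround I’ve seen suggested: give the resolver a search domain and have dnsmasq expand its hosts-file entries to match. This is only a sketch; the domain name lan and the nameserver address are placeholders, not values from our network.

    # /etc/resolv.conf on a Linux workstation (sketch).
    # "search lan" is appended to single-label names such as "tusker";
    # the nameserver address is a placeholder for the dnsmasq box.
    nameserver 192.168.42.1
    search lan
    options ndots:0

    # /etc/dnsmasq.conf on the firewall (sketch), so that both "tusker"
    # and "tusker.lan" are answered from the hosts file
    domain=lan
    expand-hosts

On distros where resolv.conf is generated (systemd-resolved, NetworkManager), the search domain would have to be set through that tool rather than by editing the file directly.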

nginx – How to use proxy_pass with multiple upstream servers each with unique URLs

I need to configure a legacy nginx so that it load-balances/reverse-proxies to multiple upstream servers, each of which has a unique URL. Would the following configuration be valid? I want a unique URL for each server in the upstream block, and I want to append the captured suffix of the request to the URL in the proxy_pass directive:

    upstream channels {
             server c4af3793be76b33c.mediapackage.us-west-2.blarg.com/out/v1/bf0fa40fc3b048520e36c24e01704551;
             server 8343f7015c0ea438.mediapackage.us-west-2.blarg.com/out/v1/ac1fec897dad48d4945437dc207f9291;
             server 3ae97e9582b0d011.mediapackage.us-west-2.blarg.com/out/v1/12da38b3d5144cf18004dc7fc5d75ec1;
    }

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  live-cdn-vpn-gf-pdx.corp.blarg.com;

        location ~ ^/us-west-2/mp-8343f7014c0ea438/(.*)$ {
            proxy_cache            cache;
            proxy_pass https://channels/$1$is_args$args;

            proxy_http_version     1.1;
            proxy_set_header       Connection "";
            add_header             X-Nginx-Live-serviced-by $hostname;
            add_header             X-Nginx-Live-proxy-host $proxy_host;
            add_header             X-Nginx-Live-http-origin $http_origin;
            add_header             X-Nginx-Live-upstream-addr $upstream_addr;
            add_header             X-Nginx-Live-upstream-cache-status $upstream_cache_status;
            add_header             X-Nginx-Live-upstream-connect-time $upstream_connect_time;
            add_header             X-Nginx-Live-upstream-response-time $upstream_response_time;
            add_header             X-Nginx-Live-upstream-status $upstream_status;
        }
    }
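
In case it turns out that server entries inside an upstream block cannot carry a URI path (as far as I can tell they only take an address and an optional port), the fallback I have in mind is one location per channel, with the full backend URL in proxy_pass. Using a prefix location instead of a regex lets nginx replace the matched prefix with the proxy_pass URI and append the rest of the request URI automatically. The hostname and path below are copied from the upstream block above; the cache zone is assumed to be defined elsewhere:

        location /us-west-2/mp-8343f7015c0ea438/ {
            proxy_cache            cache;
            proxy_http_version     1.1;
            proxy_set_header       Connection "";
            # Whatever follows the location prefix is appended to this URI,
            # and the query string is preserved automatically:
            proxy_pass https://8343f7015c0ea438.mediapackage.us-west-2.blarg.com/out/v1/ac1fec897dad48d4945437dc207f9291/;
        }

The obvious downside is that load balancing across the three channels is lost, since each location pins exactly one backend.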

Disable upstream response buffering nginx

Nginx keeps logging the message below to my error log:

[warn] 16387#16387: *1117 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/1/32/0000000321 while reading upstream, client: 173.245.54.175, server:

This fills up my log files, and I want to disable buffering completely.

I’ve tried setting proxy_buffering off; but the logs keep showing that nginx/FastCGI is buffering responses.

How do I turn off buffering altogether?
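
Since the warning mentions fastcgi_temp, it looks like the fastcgi_* family of directives (rather than proxy_*) is the one that applies here. A sketch of what that would look like in the PHP location; the PHP-FPM socket path is a placeholder:

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_pass  unix:/run/php/php-fpm.sock;   # placeholder socket path
        # Either switch response buffering off entirely (nginx 1.5.6+)...
        fastcgi_buffering off;
        # ...or keep the in-memory buffers but never spill to temporary files:
        # fastcgi_max_temp_file_size 0;
    }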

nginx – 502 Error and *2 connect() to unix:/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream

I’ve encountered this issue on a couple of servers after routine maintenance, and I don’t know what caused them to fail. I am running Nginx, Gunicorn, and Django.

The commands I ran:

> alias please="sudo"
> please apt update
> please apt upgrade
> more ~/.config/fish/functions/update-pip.fish 
function update-pip --description "Update pip, packages, and requirements.txt"
  if test -e requirements.txt
    pip install -r requirements.txt -U pip

    if test $status -ne 0
      return 1
    end

    pip freeze | sed 's/==/>=/' > requirements.txt
  end

  return 0
end
> update-pip
> please reboot

After that, I saw a 502 error (Bad Gateway).

Nginx config:

server {
    server_name example.com www.example.com;

    if ($host = www.example.com) {
        return 301 https://example.com$request_uri;
    } # managed by Certbot

    location / { 
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }   

    location = /favicon.ico {
        root /home/matt/Project-Dir/staticfiles/home/img;
        #access_log off;
        #log_not_found off;
    }   
    
    location /static {
        alias /home/matt/Project-Dir/staticfiles;
    }   

    location /media {
        root /home/matt/Project-Dir/media;
    }   

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot 
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot 
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://example.com$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80; 
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}

Nginx error log:

2020/09/25 18:33:37 [crit] 417#417: *2 connect() to unix:/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: <MY_WAN_IP>, server: example.com, request: "GET / HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/", host: "example.com"
2020/09/25 18:33:37 [error] 417#417: *2 open() "/home/matt/Project-Dir/staticfiles/home/img/favicon.ico" failed (2: No such file or directory), client: <MY_WAN_IP>, server: example.com, request: "GET /favicon.ico HTTP/1.1", host: "example.com", referrer: "https://example.com/"
2020/09/25 18:34:11 [crit] 417#417: *2 connect() to unix:/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: <MY_WAN_IP>, server: example.com, request: "GET / HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/", host: "example.com"

How do I resolve this?
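
For what it’s worth, these are the checks that would apply if Gunicorn is run through the common systemd socket-activation setup; the unit names gunicorn.socket and gunicorn.service are assumptions, since they are not shown above:

    # Is the socket unit running, and does /run/gunicorn.sock actually exist?
    sudo systemctl status gunicorn.socket gunicorn.service
    file /run/gunicorn.sock

    # /run is a tmpfs, so the socket vanishes on reboot unless the socket unit
    # is enabled to start at boot:
    sudo systemctl enable --now gunicorn.socket

    # Watch Gunicorn's own logs while nginx retries the request:
    sudo journalctl -u gunicorn.service -u gunicorn.socket -f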

Accessing OPNsense firewall from an upstream LAN

I’ve installed an OPNsense firewall downstream of my router and can’t access what’s behind it from the LAN.

The complete network looks like:

Internet (public ip) - Router (192.168.0.1) - (192.168.0.144) OPNsense (192.168.1.1) - host 1 (192.168.1.101)
                                            - host 2 (192.168.0.102)

Accessing host 1 from host 2 does not work; it is being blocked by the firewall.

If I host a webserver on host 1 on port 8001 and do the applicable port forwarding on my router and on OPNsense, that webserver is accessible from the internet. But if I try to access it from another PC on my LAN, it is unreachable.

Since host 2 is on the WAN side of OPNsense but has a LAN-style IP address, I have disabled the setting “Block private networks” in:
Interfaces – WAN – Block private networks (disabled)
This has not solved the problem.

Fault finding so far:

From host 2 I can access the web server from publicIP:8001

From host 2 I cannot access the web server on host 1 from 192.168.0.144:8001 (the part I’m trying to fix)

From host 2 I cannot ping OPNsense on 192.168.0.144 (presumably a feature?)

From OPNsense I can ping host 2 on 192.168.0.102

I thought “disable blocking private networks” would be the magic bullet here, but it seems not. Any ideas?

nginx – Passing custom header from upstream is not working

I am trying to log a custom header using $upstream_http_<header name> like this:

$upstream_http_api_error_message

as:

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for" '
                          '"$request_time" $upstream_http_api_error_message $http_api_key "$upstream_response_time"';

        access_log /var/log/nginx/access.log main;

However, the response header is neither returned to the client nor logged in access.log. Any idea how to tackle this?
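
For reference, this is how I would expect it to fit together, assuming the backend sends a response header literally named Api-Error-Message (dashes become underscores in the variable name). The upstream name and location here are placeholders:

    location /api/ {
        proxy_pass http://backend;    # placeholder upstream
        # Copy the upstream header onto the response sent to the client.
        # "always" is needed for it to be added on 4xx/5xx responses as well,
        # which is usually when an error-message header appears.
        add_header X-Api-Error-Message $upstream_http_api_error_message always;
    }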

Nginx (Ubuntu 18.04) FastCGI sent in stderr: “Primary script unknown” while reading response header from upstream

Well… I know there are many similar questions already asked, but in order to make this post useful for the community once it is solved, I would like to end up listing a working set of Nginx + PHP-FPM config files for WordPress. As of now it doesn’t work, he he.

As this is only my second time dealing with Nginx configuration (the first setup was close to default settings), I’m afraid I cannot handle the troubleshooting without help.
What I’m trying to do is set up Nginx with a FastCGI cache in order to kick WordPress’ butt and make it run faster.
Right after the install, nginx was able to show the default greeting HTML page when addressing http://vps_ip_address, so I guess networking and the basic setup are fine.
Then I installed php7.4-fpm and tuned the nginx configuration a bit to enable the FastCGI cache for the upcoming WordPress install. For testing purposes I put an info.php file containing <?php phpinfo(); ?> into the site root dir /var/www/html/mysitename/info.php.
Now I’m getting FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream in the nginx error log. I have already read that this means php-fpm cannot locate the script, most likely due to an error in the nginx configuration. Unfortunately, not being experienced with nginx, I cannot locate that error myself.

As a reference I used an nginx configuration found on the internet (yeah… I know, the problem with stuff found on the internet is that it never works, he he). If, with someone’s help, I get it working, this post will end up listing a complete Nginx + PHP-FPM setup with FastCGI cache, which I guess is in some demand. Any advice on how to optimize the nginx + php-fpm configuration for WordPress will be much appreciated.

~~~~~~~Configuration listings~~~~~~~

/etc/nginx/nginx.conf

user www-data;
worker_processes 2;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        multi_accept on;
}

http {

        #FastCGI cache settings
        fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=WORDPRESS:100m max_size=4g inactive=60m use_temp_path=off;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        #
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 15;
        types_hash_max_size 2048;
        server_tokens off;
        client_max_body_size 64m;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 2;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        server {
                listen 80 default_server;
                listen [::]:80 default_server;
                server_name _;
                return 444;
               }

}

/etc/nginx/fastcgi.conf

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REQUEST_SCHEME     $scheme;
fastcgi_param  HTTPS              $https if_not_empty;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param  REDIRECT_STATUS    200;

/etc/nginx/sites-enabled/mysitename.conf

server {
        # As DNS records are not properly set up yet, I'm using the IP address. To be replaced with the domain name
        server_name xxx.xxx.xxx.xxx;

        access_log   /var/log/nginx/mysitename.access.log;
        error_log    /var/log/nginx/mysitename.error.log;

        root /var/www/mysitename;
        index index.php;
#
        set $skip_cache 0;
#

        # POST requests and urls with a query string should always go to PHP
        if ($request_method = POST) {
                set $skip_cache 1;
        }
        if ($query_string != "") {
                set $skip_cache 1;
        }

        # Don't cache uris containing the following segments
        if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
                set $skip_cache 1;
        }

        # Don't use the cache for logged in users or recent commenters
        if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
                set $skip_cache 1;
        }

        # Don't cache for store, cart, my account, checkout pages
        if ($request_uri ~* "/store.*|/cart.*|/my-account.*|/checkout.*|/addons.*") {
         set $skip_cache 1;
        }

        #Skip cache for WooCommerce query string
        if ( $arg_add-to-cart != "" ) {
          set $skip_cache 1;
        }


        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {

                          fastcgi_split_path_info ^(.+\.php)(/.+)$;
                          fastcgi_pass 127.0.0.1:9000;
                          fastcgi_index index.php;
                          include fastcgi_params;

                          set $rt_session "";

        if ($http_cookie ~* "wc_session_cookie_[^=]*=([^%]+)%7C") {
                set $rt_session wc_session_cookie_$1;
        }

        if ($skip_cache = 0 ) {
            more_clear_headers "Set-Cookie*";
            set $rt_session "";
            }

            fastcgi_cache_key "$scheme$request_method$host$request_uri$rt_session";

            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid 200 301 302 60m;
            fastcgi_cache_use_stale error timeout updating invalid_header http_500 http_503;
            fastcgi_cache_min_uses 1;
            fastcgi_cache_lock on;
            add_header X-FastCGI-Cache $upstream_cache_status;
            fastcgi_cache_bypass $http_cookie $cookie_nocache $skip_cache;
            fastcgi_no_cache $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" $skip_cache;

            fastcgi_cache_background_update on;

        }

        location ~ /purge(/.*) {
            fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
        }

        location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off; log_not_found off; expires max;
        }

        location = /robots.txt { access_log off; log_not_found off; }
        location ~ /\. { deny all; access_log off; log_not_found off; }
}

/etc/php/7.4/fpm/pool.d/www.conf
Some highlights from that file (it is pretty long…)

user = www-data
group = www-data
listen = 127.0.0.1:9000
listen.owner = www-data
listen.group = www-data
listen.mode = 0660

Could you please help me in troubleshooting?
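
As a sanity check on the paths involved (my understanding, not a verified fix): "Primary script unknown" normally means the SCRIPT_FILENAME that nginx passes does not point at a file PHP-FPM can open. Note that the PHP location above includes fastcgi_params while SCRIPT_FILENAME is defined in the fastcgi.conf listing, and that the prose mentions /var/www/html/mysitename while the root directive says /var/www/mysitename. A stripped-down PHP location that makes the parameter explicit:

    location ~ \.php$ {
        include        fastcgi_params;
        # Make SCRIPT_FILENAME explicit instead of relying on which include sets it:
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   127.0.0.1:9000;
    }
    # With "root /var/www/mysitename;", a request for /info.php resolves to
    # /var/www/mysitename/info.php -- that exact file must exist and be readable
    # by the www-data user that PHP-FPM runs as.

PHP-FPM can also be queried directly, bypassing nginx entirely (cgi-fcgi comes with the libfcgi packages; the script path is the one implied by the root directive):

    SCRIPT_FILENAME=/var/www/mysitename/info.php REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000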

NGINX: proxy_pass not passing cookies to upstream

I have two servers running on my machine: one serves API requests and the other serves Next.js. Therefore, I have two different upstreams: web and api. There are two routes (pdf, attachment), however, that I want to be handled by the api upstream instead of the web upstream. The proxy_pass works, but for some reason cookies are not being passed. If I make every proxy_pass inside the server context point to the same upstream, the cookie problem goes away.

worker_processes auto;

events {
    worker_connections  1024;
}

http {
    server_tokens off;
    include       mime.types;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                  'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';


    upstream api {
        server localhost:3000;
        keepalive 32;
    }

    upstream web {
        server localhost:3030;
        keepalive 32;
    }

    server {
        listen       80;
        server_name  dev.openreview.net;
        return 301   https://$server_name$request_uri;
    }

    server {
        listen       443 ssl http2;
        server_name  dev.domain.net;
        ssl_certificate /etc/letsencrypt/live/dev.domain.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/dev.domain.net/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;


       location = / {
           proxy_pass http://web;
       }

       location ~ ^/(pdf|attachment) {
           proxy_pass http://api;
       }

       location ~ ^/(logs/process|login|assignments) {
           proxy_pass http://web;
       }


    }

    server {
        listen       80;
        server_name  devapi.domain.net;
        return 301   https://$server_name$request_uri;
    }

    server {
        listen       443 ssl http2;
        server_name  devapi.openreview.net;
        ssl_certificate /etc/letsencrypt/live/devapi.domain.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/devapi.domain.net/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;


       location = / {
           proxy_pass http://api;
       }

       location ~ ^/(logs/process|login|assignments|pdf|attachment) {
           proxy_pass http://api;
       }
    }
}
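
One diagnostic sketch that might narrow this down: log the Cookie request header together with the upstream that handled each request, to see whether the browser is sending cookies to this host at all or whether they get lost between nginx and the api upstream. The log file name and format name are arbitrary:

    # in the http {} context:
    log_format cookie_debug '$remote_addr "$request" upstream=$upstream_addr '
                            'cookie="$http_cookie" set_cookie="$upstream_http_set_cookie"';
    # in the server {} block for dev.domain.net:
    access_log /var/log/nginx/cookie_debug.log cookie_debug;

If $http_cookie is already empty when the request arrives, the problem is on the browser side (for example, cookies scoped to devapi.domain.net that do not cover dev.domain.net) rather than in proxy_pass, which forwards the Cookie request header unchanged by default.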

webserver – Upstream Nginx server is appending port when responding back to load balancer Nginx

We are using an Nginx load balancer that balances load across our two upstream Nginx web servers. TCP load balancing is done using a server block inside the stream block, as below.

stream {
 upstream stream_backend {
        least_conn;
        server 192.168.200.x ;
        server 192.168.200.y;
    }
  server {
    listen 443 ;
    proxy_pass backend:8440;
    proxy_protocol on; 
   } 
}

The upstream Nginx web servers host multiple websites, each with its own server block and server_name under http{}, configured as below. We have made these websites listen on a specific port, 8440, with ssl and proxy_protocol as options to the listen directive.

server {
    listen 8440 ssl proxy_protocol;
    server_name domain1.example.com;
    root /path/to/folder;

rewrite ^/foo/a.json /bar/b.json permanent; 
    
location / {
        try_files $uri /index.php?$query_string; 
    }
} 

Issue: When I try to access http://domain1.example.com/foo/a.json, it is supposed to be rewritten and served as http://domain1.example.com/bar/b.json. However, it is not working as expected. Instead, port 8440 is appended in the browser, as http://domain1.example.com:8440/bar/b.json. How can I get rid of this port that gets appended by the rewrite rule?
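
For context on where the port might come from: "rewrite ... permanent" makes nginx itself issue the 301, and by default nginx builds that Location header using the port the server block listens on (8440 here). The two directives below are, as far as I can tell, the ones that control this behaviour; the sketch is the backend server block from above with them added:

    server {
        listen      8440 ssl proxy_protocol;
        server_name domain1.example.com;
        root        /path/to/folder;

        port_in_redirect  off;    # leave the listen port (:8440) out of absolute redirects
        # absolute_redirect off;  # alternative (nginx 1.11.8+): emit a relative Location header

        rewrite ^/foo/a.json /bar/b.json permanent;

        location / {
            try_files $uri /index.php?$query_string;
        }
    }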

kubernetes – ingress nginx upstream sent no valid HTTP/1.0 header while reading response header from upstream

I’m trying to set up an nginx ingress controller for services in my namespace.
One of the backend services accepts HTTP traffic on port 80; the other accepts only HTTPS traffic on port 443. See the descriptions of both services below:

$ kubectl describe svc service-1 -n monit
Name:              service-1
Namespace:         monit
Labels:            app=service-1
Annotations:       <none>
Selector:          app=service-1
Type:              ClusterIP
IP:                10.104.185.173
Port:              https  443/TCP
TargetPort:        8443/TCP
Endpoints:         10.1.0.95:8443
Session Affinity:  None
Events:            <none>

$ kubectl describe svc service-2 -n monit
Name:              service-2
Namespace:         monit
Labels:            app=service-2
Annotations:       <none>
Selector:          app=service-2
Type:              ClusterIP
IP:                10.110.93.64
Port:              service  80/TCP
TargetPort:        3000/TCP
Endpoints:         10.1.0.87:3000
Session Affinity:  None
Events:            <none>

Here is my ingress configuration

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-monit
spec:
  rules:
  - host: service-2.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: service-2
          servicePort: 80
  - host: service-1.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: service-1
          servicePort: 443

When I look at the Ingress, things look OK:

$ kubectl describe ingress ingress-monit -n monit                  
Name:             ingress-monit
Namespace:        monit
Address:          localhost
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                            Path  Backends
  ----                            ----  --------
  service-2.localhost               
                                  /   service-2:80 (10.1.0.87:3000)
  service-1.localhost  
                                  /   service-1:443 (10.1.0.95:8443)
Annotations:
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  31m   nginx-ingress-controller  Ingress monit/ingress-monit
  Normal  UPDATE  30m   nginx-ingress-controller  Ingress monit/ingress-monit

Now the problem is that I can access service-2 properly, with http://service-2.localhost/, but I cannot access service-1. Visiting http://service-1.localhost/ in Chrome gives me:

This site can’t be reached. The webpage at https://service-1.localhost/ might be temporarily down or it may have moved permanently to a new web address.
ERR_INVALID_RESPONSE

When I look into Nginx logs, I see:

$ kubectl logs -n monit ingress-nginx-controller-bbdc786b4-8crdm -f
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       0.32.0
  Build:         git-446845114
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.17.10

-------------------------------------------------------------------------------
. . .
2020/06/02 22:56:47 [error] 2363#2363: *64928 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 192.168.65.3, server: service-1.localhost, request: "GET / HTTP/1.1", upstream: "http://10.1.0.95:8443/", host: "service-1.localhost"
192.168.65.3 - - [02/Jun/2020:22:58:13 +0000] "GET / HTTP/1.1" 200 7817 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" 594 0.005 [monit-service-2-80] [] 10.1.0.87:3000 30520 0.005 200 2baefff713047b14a81643650cb50c4c

The error seems to be related to service-1 returning a bad response: upstream sent no valid HTTP/1.0 header while reading response header from upstream. The thing is, if I use kubectl proxy I can access that service just fine!

Any ideas how I could figure out what the real issue is?
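
If the root cause really is that the ingress controller speaks plain HTTP to a backend that only accepts TLS on 8443, ingress-nginx has a backend-protocol annotation for exactly that. Since the annotation applies to a whole Ingress object, this sketch splits service-1 out into its own Ingress; apart from the annotation and the new object name, everything is taken from the manifests above:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-service-1
      namespace: monit
      annotations:
        # Tell ingress-nginx to talk HTTPS to the backend pods
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    spec:
      rules:
      - host: service-1.localhost
        http:
          paths:
          - path: /
            backend:
              serviceName: service-1
              servicePort: 443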