hosts.deny not working (restricting SSH access)

Hey,
I'm working on CentOS to deny SSH access by country and found a good script.

In /etc/hosts.deny, add the line:
sshd: ALL

… (post truncated; rest at https://www.webhostingtalk.com/showthread.php?t=1829670)
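For context, the usual TCP-wrappers pattern combines a blanket deny with an allow list, roughly like this (the addresses below are illustrative placeholders, not from the post):

```
# /etc/hosts.deny — refuse sshd to everyone not explicitly allowed
sshd: ALL

# /etc/hosts.allow — consulted first, and an allow match wins
sshd: 192.0.2.0/255.255.255.0
sshd: 203.0.113.25
```

One common reason hosts.deny "does not work" is that this mechanism depends on sshd being built against TCP wrappers (libwrap); on CentOS/RHEL 8 the OpenSSH packages no longer link libwrap, so hosts.deny is silently ignored and firewalld/ipset rules are the equivalent tool.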

ssh – Google Cloud Instance Crashed – Same Attached Disk Crashes another instance when attached as secondary (unmounted)

I have a micro GCP instance, and I suspect it ran out of memory. It crashed and does not let me SSH into it, even through the web console on GCP.

I created a new bootable instance and can enter it fine, but when I attach the “broken” disk to this instance as a secondary disk after the working boot disk, the new instance does not boot and gives the same error as the one that broke.

Any ideas how I can restore / recover the broken disk?
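One guess worth checking: if both instances were created from the same image, the boot disk and the “broken” disk can carry identical filesystem UUIDs/labels, and the rescue VM may end up mounting or booting the broken root instead of its own, which would reproduce the same failure. A rough recovery sketch with gcloud (instance, disk, and zone names below are placeholders):

```shell
# Attach the broken disk to the rescue VM explicitly as a non-boot device
gcloud compute instances attach-disk rescue-vm \
  --disk broken-disk --device-name broken --zone us-central1-a

# Inside the rescue VM: identify the disk, then check it WITHOUT mounting
lsblk -f                      # the secondary disk shows up as e.g. /dev/sdb
sudo fsck -f /dev/sdb1        # repair the filesystem if it is damaged

# Mount read-only first so data can be copied off safely
sudo mount -o ro /dev/sdb1 /mnt
```

If fsck reports an unrecoverable filesystem, copying the raw device off with dd before further attempts preserves whatever is left.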

Robust remote SSH access – Super User

I have two computers, a MacBook Pro (MBP) running Big Sur and a desktop running Ubuntu 20.04. I live in a building which provides its own wired Internet connection. I have a local WiFi setup which connects to the MBP and the Linux machine. I have routinely used ssh on the local network without problems.

However, in about a week, I have to move away and leave the desktop behind. I need access to my Linux machine via the Internet. I have tried to test if remote access works by connecting my MBP to the Internet via my phone. While the computer is able to browse the web, I am not able to access the Linux machine via ssh.

How can I accomplish this? I think my router is behind a NAT, as the router’s reported IP address is different from the one I find at https://www.whatismyip.com/.

For SSH, I have opened WAN port 60000, which forwards to LAN port 22 on my Linux machine. Some things to note: wherever I move, I won’t have admin access to the network. I can connect to it, but I am not sure what is blocked on that network. The IP address of the MBP may also change.

What is the most robust way to establish ssh (and VNC) connection to my Linux Machine when I’m away?
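Since the desktop sits behind NAT and there will be no admin access at the new location either, the most robust pattern is an outbound reverse tunnel from the desktop to a small always-on host you control. This is a sketch, assuming a rented VPS at the hypothetical address relay.example.com:

```shell
# On the Ubuntu desktop: keep a persistent outbound reverse tunnel alive.
# Port 2222 on the relay loops back to this machine's sshd on port 22.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 2222:localhost:22 tunnel@relay.example.com

# From the MacBook, anywhere on the Internet:
ssh -J tunnel@relay.example.com -p 2222 user@localhost

# VNC can ride the same path with an extra local forward:
ssh -J tunnel@relay.example.com -p 2222 -L 5901:localhost:5900 user@localhost
```

Because the tunnel is initiated outbound from the desktop, it survives NAT on both ends and needs no port forwarding; a mesh VPN such as WireGuard or Tailscale achieves the same with less manual plumbing.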

ssh works with port specified but not without

So if I try to ssh into our server without specifying the port, I get asked for a password, but if I specify -p 443 it correctly authenticates with the key from ssh-agent. Why does this happen? Is the stored key tied to the port used? Or is this somehow related to the way my devops team set up the infra security?
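The agent itself is not port-aware; a likelier explanation is that ports 22 and 443 are answered by different sshd instances (or different machines behind a gateway), and only the one on 443 has the public key authorized. Pinning the port in the client config makes the working path the default; the host alias and names below are placeholders:

```
# ~/.ssh/config
Host ourserver
    HostName server.example.com
    Port 443
    User deploy
    IdentityFile ~/.ssh/id_ed25519
```

With that stanza, plain `ssh ourserver` always uses port 443 and the right key.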

linux – How to forward my SSH agent to a remote docker daemon?

Accessing an SSH agent local to the docker daemon during build is a popular and well-documented use case. I need a remote docker daemon to access my local SSH agent. SSH agent forwarding generally provides remote access to a local agent.

But the remote builder insists that no SSH agent is running when I forward it and attempt to mount it in my Dockerfile:

$ cat ~/.ssh/config
Host <my-docker-daemon-host>
  ForwardAgent yes

$ ssh-add -l
256 SHA256:... ... (ED25519)

$ docker --version
Docker version 19.03.13, build 4484c46d9d

$ DOCKER_BUILDKIT=1 docker -H ssh://root@<my-docker-daemon-host> build --ssh default .
(...)
#63 (test-image 33/51) RUN --mount=type=ssh ssh-add -l
#63 0.427 Error connecting to agent: No such file or directory
#63 ERROR: executor failed running (/bin/sh -c ssh-add -l): runc did not terminate sucessfully

How can I direct the remote docker daemon to mount a forwarded SSH agent when executing RUN --mount=type=ssh in a Dockerfile?
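Two hedged observations: `--ssh default` is meant to forward the agent of the machine where the docker CLI runs over the BuildKit session itself, so the ForwardAgent line in ~/.ssh/config is not what carries it to the build; and early 19.03 releases had known problems combining `-H ssh://` with `--ssh`. Worth trying before anything else:

```shell
# Point --ssh at the socket explicitly instead of relying on "default"
DOCKER_BUILDKIT=1 docker -H ssh://root@<my-docker-daemon-host> \
  build --ssh default=$SSH_AUTH_SOCK .

# Sanity-check that the CLI process itself can see the agent
echo "$SSH_AUTH_SOCK" && ssh-add -l
```

If that still fails, upgrading both client and daemon (20.10+ ships a newer BuildKit) is the other lever, since the fix may simply not be in 19.03.13.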

centos8 – CentOS 8 VPS: use an OpenVPN client without losing the SSH connection?

I want to use an OpenVPN connection (client) on my CentOS 8 VPS server.

But when I connect with “redirect-gateway”, I lose the SSH connection.

When I try --pull-filter ignore redirect-gateway, I can connect, but then I cannot access the internet through the tun interface.

How can I fix this problem?
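A common fix for exactly this situation (redirect-gateway swallowing the SSH reply packets) is a policy-routing rule so that traffic originating from the VPS’s public address keeps leaving via the original uplink. The addresses and interface below are placeholders for your VPS IP and its default gateway:

```shell
# Replies sourced from the VPS's public IP bypass the VPN default route
ip rule add from 203.0.113.10 table 128
ip route add default via 203.0.113.1 dev eth0 table 128
```

With that in place you can keep redirect-gateway: the tun interface carries the VPS’s outbound internet traffic, while inbound SSH stays reachable over the original route.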

ssh – sshd never accepts public key offer

I have used public-key logins on a number of my servers for months without trouble. I generated the keys on my client machine and copied them to the server’s ~/.ssh/authorized_keys using ssh-copy-id. All was well and good until one machine stopped accepting key-based logins the other day. Obviously something has changed, but the sshd_config is the same as it was, and the same as on the other servers.

Running the connection verbosely offers the following:

debug1: Authentications that can continue: publickey,password
debug3: start over, passed a different list publickey,password
debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /home/kapn/.ssh/id_rsa RSA SHA256: <deleted for post>
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /home/kapn/.ssh/id_dsa 
<and so on until it asks for a password>

My sshd_config file

Port 2201
PermitRootLogin without-password
PubkeyAuthentication yes
ChallengeResponseAuthentication no
UsePAM yes
TCPKeepAlive yes
# All else is at default settings.
# With the exception of the Port, PubKeyAuthentication and PermitRootLogin settings, 
# I didn't intentionally change anything here.

Any thoughts on where to look for trouble? Is there data to be gathered other than via the -vv switch on ssh?
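The `type 51` reply shows the server rejecting the offered key, and the client-side -vv output cannot say why; the server’s own log can. The most common silent cause is StrictModes refusing a home directory or key file with loose permissions. A hedged checklist (log paths and the sshd binary path vary by distro):

```shell
# On the server: StrictModes rejects keys if these are group/world-writable
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

# Watch the server's explanation while reproducing the login attempt
sudo journalctl -u sshd -f          # or: sudo tail -f /var/log/auth.log

# Or run a one-shot debug daemon on a spare port and connect to it
sudo /usr/sbin/sshd -d -p 2202
```

The debug daemon prints, for each offered key, exactly why it was refused, which is far more informative than the client’s view.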

linux – How can I debug an intermittent server crash that rejects SSH connections, causes I/O errors, and renders existing sessions useless?

I’ve been running a Debian-based home server for ~10 years now and recently decided to replace it with an HP EliteDesk 705 65W G4 Desktop Mini PC, but the new machine keeps crashing.

The machine will run fine for a few hours, then suddenly begins:

  1. Rejecting SSH connections immediately
  2. Returning “Command not found” for any commands run in existing ssh connections (e.g., ls)
  3. Giving I/O errors in response to running processes like docker stats
  4. Not showing any display output to connected monitors

I typically run a few home services in docker containers and initially thought an oddity of my config might be causing the crashes, so I decided to select a random existing GitHub repo with a few containers and run it from scratch. I chose this HTPC download box repo, which uses a few linuxserver.io containers and should be a reasonable approximation of the lower bound of the workload my own services would put on the machine.

Steps I have followed to create the crash:

  1. Install headless debian (netinstall image); configure the OS by following the below steps:
  2. Set hostname: test
  3. Set domain: example.com
  4. Add a new user, add the user to sudoers, and set up SSH to allow key-based logins only, and only for nick (including adding my desktop’s public key to ~/.ssh/authorized_keys)
adduser nick
usermod -aG sudo nick
sudo nano /etc/ssh/sshd_config; the specific settings you want are:
PermitRootLogin no
Protocol 2
MaxAuthTries 3
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM no
AllowUsers nick
  5. Restart ssh: sudo service sshd restart
  6. Install necessary services: sudo apt-get install docker docker-compose git
  7. Add your user to the docker group: sudo usermod -aG docker nick
  8. Generate a new ssh key and add it to your GitHub account: ssh-keygen -t ed25519, then copy the public key to GH
  9. Set your global git vars: git config --global user.name 'Nick'; git config --global user.email nick@example.com

Wait 2 days, verify no crash occurs

Then run the following commands:
cd /opt
sudo mkdir htpc-download-box
sudo chown -R nick:nick htpc-download-box
git clone git@github.com:sebgl/htpc-download-box.git
cd htpc-download-box
docker-compose up -d

(Note: I do no configuration whatsoever of the containers in the docker-compose file, I just start them running and then confirm I can access them via browser. I use the exact .env.example as the .env for the project.)

Wait a few hours, observe that the server has crashed: I am unable to log in via SSH, with the other issues stated above. Interestingly, I can still view the web UI of some containers (e.g., sonarr), but when trying to browse the filesystem via that web UI, I am unable to see any folders, and manually typing the path indicates that the user has no permission to view that folder.

Since I observe crashes when running either my actual suite of services or the example repo detailed here, I must conclude it’s an issue with the machine itself. I have tested the nvme drive with smartmontools and both the short and long tests report no errors.

I am not familiar enough with Linux to know how to proceed from here (maybe give it another 10 years!) – what logs can I examine to determine what might cause the crash? Should I be setting up additional logging of some sort to try to ascertain the cause?

All of the issues are so general (I/O errors, SSH refusal, etc.) that Googling for the past week has not gotten me anywhere. I was sure the clean reinstall with a new repo would not crash, and that I could then incrementally add my actual docker containers until a crash occurred, finding the problematic container by trial and error; but I am now at a complete loss for how to proceed.
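The symptoms described above (existing shells returning “command not found”, I/O errors, web UIs losing filesystem access) are consistent with the root filesystem dropping out from under a still-running kernel, which SMART self-tests would not necessarily catch. Making the journal persistent and reading the previous boot’s kernel messages is the usual next step; package names below assume Debian:

```shell
# Persist logs across reboots so the crash evidence survives
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald

# After the next crash and reboot, read the boot that died:
journalctl -b -1 -p err       # errors from the previous boot
journalctl -b -1 -k           # kernel ring: OOM killer, nvme/ata resets

# Stress RAM and memory pressure independently of the docker workload
sudo apt-get install stress-ng
stress-ng --vm 2 --vm-bytes 75% --timeout 10m
```

If the previous-boot journal is empty right up to the crash, that in itself points at storage or power rather than software.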

networking – Are ssh connections initiated from my local machine to remote bidirectional?

I just recently set up a VPS server for myself, and I did take all the necessary precautions when I set it up. I intend to access a website served on the remote machine from my local machine using a command like ssh -L 8080:127.0.0.1:3000 remote-user@remote-ssh-server.

While I did take all the security precautions, I am wondering that if the remote server is somehow compromised without me knowing about it, would the intruder be able to access my local computer when I am connected to it?

Further, is this mechanism for locally browsing pages bound to the remote machine’s loopback secure?
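For what it’s worth, the direction of -L is one-way by design: it only lets the local machine open connections toward the remote; the remote end cannot initiate connections back through that tunnel unless you also pass -R. A slightly tightened version of the command from the question, as a sketch:

```shell
# Local 8080 -> remote loopback 3000, no shell allocated on the remote,
# and the local listener bound to 127.0.0.1 so LAN peers cannot use it.
ssh -N -o ExitOnForwardFailure=yes \
  -L 127.0.0.1:8080:127.0.0.1:3000 remote-user@remote-ssh-server
```

A compromised server could still serve malicious content through the forwarded port, or attempt to exploit the ssh client itself, but it gains no general access to the local machine through the forward alone.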