amazon web services – How to restrict user to access particular ec2 machines?

With this link, I can restrict the user to only powering a machine on and off; the terminate option is removed.

But I want to restrict the user to only the few machines assigned to them, and not show the other machines.

We can filter only by the Owner tag or another custom tag.

How to do that?
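One common approach (a sketch, not a tested policy) is tag-based access control: tag each instance with its owner, then write an IAM policy whose Condition matches that tag. The policy below assumes an `Owner` tag whose value equals the IAM user name. One caveat: `ec2:DescribeInstances` does not support resource-level restrictions, so the console will still list all instances; the user just cannot act on the ones that are not theirs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ActOnlyOnOwnedInstances",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Owner": "${aws:username}" }
      }
    },
    {
      "Sid": "ListingCannotBeTagRestricted",
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```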

amazon web services – AWS EC2 access attempts on blocked ports

I recently installed OSSEC on a RHEL 8 server hosted on AWS EC2. Since then I have been receiving brute-force attempts, as well as attempts on ports that are not open in my security group.

How are users able to get to my server at all when these ports are not open in the security group for the EC2 instance, and how do I stop them from reaching the server?

Example report:

OSSEC HIDS Notification. 2020 Oct 18 20:45:33

Received From: shared->/var/log/secure
Rule: 5712 fired (level 10) -> "SSHD brute force trying to get access to the system."
Src IP:
Portion of the log(s):

Oct 18 20:45:32 shared sshd[3097608]: Disconnected from invalid user pi port 49568 (preauth)
Oct 18 20:45:32 shared sshd[3097608]: Invalid user pi from port 49568
Oct 18 20:45:12 shared sshd[3097603]: Disconnected from invalid user admin port 58720 (preauth)
Oct 18 20:45:12 shared sshd[3097603]: Invalid user admin from port 58720
Oct 18 20:44:51 shared sshd[3097591]: Disconnected from invalid user admin port 39802 (preauth)
Oct 18 20:44:50 shared sshd[3097591]: Invalid user admin from port 39802
Oct 18 20:44:30 shared sshd[3097582]: Disconnected from invalid user admin port 49134 (preauth)
Oct 18 20:44:30 shared sshd[3097582]: Invalid user admin from port 49134
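Worth noting: the port numbers in these log lines (49568, 58720, ...) are the clients' ephemeral source ports, not destination ports; the connections arrive on SSH's port 22, which evidently is open in the security group. One common mitigation is therefore to stop exposing SSH to the whole internet and allow only a trusted range. A sketch with the AWS CLI, using a hypothetical security-group ID and CIDR (substitute your own):

```shell
# Hypothetical values: replace sg-0123456789abcdef0 and
# with your real security-group ID and trusted CIDR.

# Remove the world-open SSH rule...
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr

# ...and allow SSH only from the trusted range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr
```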


amazon web services – EC2 Instance cannot connect to ECS Cluster

I have an empty AWS ECS cluster, but I am unable to put instances into it.
I wanted to use a launch template and an Auto Scaling group, but I cannot assign the created EC2 instance.

The issue is shown in ecs-agent.log:

level=error time=2020-10-17T23:23:37Z msg="Unable to register as a container instance with ECS: RequestError: send request failed\ncaused by: Post "": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" module=client.go
level=error time=2020-10-17T23:23:37Z msg="Error registering: RequestError: send request failed\ncaused by: Post "": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" module=agent.go


  • Using AMI ami-0eff571a24849e852
  • Cluster name: debug
  • Region is eu-central-1
  • Instance has no public IP
  • Instance is in subnet and the VPN subnet is
  • Instance can reach the internet through NAT Instance:
(ec2-user@ip-10-10-100-14 ecs)$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=109 time=50.1 ms
64 bytes from ( icmp_seq=2 ttl=109 time=40.1 ms
  • DNS to outside is resolving fine
(ec2-user@ip-10-10-100-14 ecs)$ nslookup

Non-authoritative answer:
  • Just to be sure, I have created VPC endpoints to ECS from the VPC and subnet where the instance is
  • I have attached the security group with no restrictions for test
  • ecs.config:
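The ecs.config contents did not survive in the post. For reference, the agent registers with the `default` cluster unless `/etc/ecs/ecs.config` names one, so at minimum it should contain the cluster name (a sketch using the cluster name from the list above):

```ini
ECS_CLUSTER=debug
```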

Does anyone have any suggestions?

Selling – cPanel survey & Amazon EC2 For WHMCS v1.3.0 are here to steal your attention!

1. 60-Second Survey For cPanel Users

The recent decision of cPanel to introduce yet another price increase has evoked mixed feelings across the web hosting community, to say the least. Would you call it a change for the better?

Devote just 60 seconds to answering a few simple questions, and by doing so give us a better understanding of how we can help your business cope with the fallout of the controversial cPanel pricing policies. We will be happy to compensate you for your time with an exclusive 10% promo code for the entire range of ModulesGarden software. Sound appealing enough?

Share your cPanel experience with us!

2. Buzzworthy Reveal Into WHMCS V8.0

If you have been with us for a while, you already know that we like to spice things up. This time we have chosen to add some extra thrill to the ongoing WHMCS V8.0 vogue by sharing our very personal viewpoint on the novelties packed into this major update.

Curious to find out what we really think about its all-new features, and how much they will contribute to transforming our WHMCS offer?

Sneak a look at our latest Blog publication for some straight talk!

3. Proxmox Reselling Revolution – 50% OFF!

If you are toying with the idea of reselling your Proxmox servers directly through WHMCS or any other platform of your choice – there is no time like the present!

With the outstanding promotion we have just put in motion, you can join the powers of your Proxmox VPS For WHMCS with our newly developed Products Reseller For WHMCS module at a colossal 50% discount, equal to $100!

Worried that such a huge deal will slip you by because you are not into Proxmox offerings? Cheer up, Products Reseller For WHMCS has got you covered on all fronts, as it allows you to resell products and services of other types as well – and you can still indulge your business with an appealing cost-cutting offer!

Ready to brainstorm this concept further?

4. Amazon EC2 For WHMCS v1.3.0

We are pleased to announce that the recent efforts of our Product Development team have led to a quality update of Amazon EC2 For WHMCS. The module's 1.3.0 version brings several new features that make the provisioning of Amazon EC2 instances nothing short of a pleasure:

  • Clients can now add their SSH keys to already existing machines
  • If the user does not provide an SSH key during the ordering process, one will be auto-generated and made accessible in the client area
  • The subnet into which the instance will be launched can now be chosen in the product configuration

Be sure to have a taste of other improvements this noteworthy release has been powered with!

Learn more exciting details about Amazon EC2 For WHMCS v1.3.0!

Need Custom Software Development For Your Business?

Especially for you, we will adapt an application and its design to your own needs, create a new module, or even build a completely new system from scratch!

amazon ec2 – Frontend, Backend, NGINX and Containers in EC2 – Configuration

It's been 3 days now that I have been trying to get this setup to work in EC2, and it simply won't work. I am hopeful that the good people of Server Fault will give me a hand.

I have a VueJS front end sending API requests to a Flask (Python) backend via an NGINX reverse proxy, all in their respective Docker containers. Testing this whole setup on my local Windows machine, everything works fine, but when I bring it to EC2, it doesn't work.
By that I mean my API requests arrive at the backend, but I always get "unauthorized" (for a login call) even though I am 100% sure the login credentials are correct, as I literally created them seconds before. I checked my Mongo database as well, and the object is there.

So here are my details:

VueJS Dockerfile

FROM node:latest as build-stage
# build stage: the COPY --from below expects the app in /app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build

FROM nginx:latest as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx_vue.conf /etc/nginx/nginx.conf

.env.production in VueJS

My API calls in the frontend are using these two variables


Flask Dockerfile

FROM osgeo/gdal:ubuntu-small-latest

RUN apt update
RUN apt -y upgrade
RUN apt install -y python3-pip

# set the working directory in the container
# (reconstructed as /code to match PYTHONPATH below)
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

ENV PYTHONPATH "/code:/code/modules/"

# install dependencies
RUN pip3 install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY . .

# command to run on container start (exec form takes square brackets;
# the script name did not survive the post)
CMD ["python", "./"]

VueJS NGINX conf

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/;

events {
    worker_connections  1024;
}

http {
    include            /etc/nginx/mime.types;
    default_type       application/octet-stream;
    log_format  main   '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';
    access_log         /var/log/nginx/access.log  main;
    sendfile           on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;
        root   /app;

        location / {
            index  index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}

NGINX Reverse Proxy

user www-data;
worker_processes auto;
pid /run/;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main   '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';
    access_log         /var/log/nginx/access.log  main;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass          http://frontend:80;
            proxy_set_header    X-Forwarded-For $remote_addr;
        }

        location /api {
            proxy_pass          http://backend:5001/api;
            #proxy_set_header    X-Forwarded-For $remote_addr;
            # Only requests matching the whitelist expectations will
            # get sent to the application server
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }
    }
}


version: "3.8"

services:
  mongo:
    image: mongo
    restart: always
    container_name: mongo
    ports:
      - "27017:27017"
    volumes:
      - ../mongo/mongo-volume:/data/db
      - ../mongo/mongo-config:/data/configdb

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081

  backend:
    build:
      context: ../backend
      dockerfile: Dockerfile
    container_name: backend
    ports:
      - "5001:5001"
    volumes:
      - "../backend:/app"
    depends_on:
      - mongo
    environment:
      PORT: 5001
      ENV: development
      DB: mongodb://admin:secret@mongo:27017/poc?authSource=admin
    restart: on-failure

  reverse_proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy
    depends_on:
      - frontend
      - backend
    volumes:
      - ./nginx_reverse_proxy.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"

  frontend:
    build:
      context: ../frontend
      dockerfile: Dockerfile
    container_name: frontend
    ports:
      - "8080:80"
    depends_on:
      - backend
    environment:
      - VUE_APP_BACKEND_HOSTNAME=localhost
    restart: on-failure

So, with the VueJS app, I build it for production and host it with the internal Nginx server in that Docker container.
That app sends API calls to "localhost:80", which I think the reverse proxy receives and forwards to my backend.
I get a response back from the backend, but always "unauthorized". Funnily enough, there are never any logs in my backend Docker container, although logging is set to DEBUG, so I am very confused.

I am hosting both my VueJS Nginx and the Nginx reverse proxy on port 80. Does that cause any issue when running in EC2? I'd imagine it wouldn't, since it works like a charm on my Windows machine.

I appreciate any help, and I'm available to chat or even do a session through TeamViewer.

Thank you

amazon ec2 – Is it better to have a centralized [redis] caching instance or per instance?

I have a VPC in AWS that contains a public and private subnet. In the private subnet, I have two load balanced EC2 app servers, and an EC2 Database/Cache server.

The two app servers connect to the Database/Cache server for database queries, but there is also an instance of Redis running on the database server. Both of the app servers are configured to connect to this redis instance.

My question is – is this performant? Would it be better to have an instance of Redis installed on each of the App server nodes?

Or are we better off leaving redis on the database/cache server?
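One way to ground this decision is to measure the actual round-trip latency from each app server to the central Redis; within one private subnet it is typically well under a millisecond, which is usually negligible next to the cache-consistency headaches of running a separate Redis per app server. A sketch, assuming a hypothetical private IP for the database/cache server and the default Redis port:

```shell
# Run from each app server; 10.0.2.10 is a hypothetical private IP.
# --latency continuously samples PING round-trip time in milliseconds.
redis-cli -h 10.0.2.10 -p 6379 --latency
```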

Bitcoin private network on Amazon EC2

I am looking to set up a private network of Bitcoin nodes over multiple machines (e.g., on multiple VMs on Amazon EC2) for experimentation. I want to be able to control how the different nodes are interconnected in this network. Is there a way to do this with Bitcoind? The regtest mode that is commonly cited as a solution seems designed for running multiple nodes on a single machine. Can it also be used to interconnect nodes across different machines according to a specified topology?
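For context, regtest nodes are ordinary bitcoind instances, so they can connect across machines just like mainnet nodes; a topology can be pinned down by disabling automatic peer discovery and listing explicit peers. A sketch with hypothetical IPs (regtest's default P2P port is 18444):

```shell
# Node A (hypothetical IP accept inbound regtest connections,
# do not discover peers on its own.
bitcoind -regtest -daemon -listen=1 -discover=0 -dnsseed=0

# Node B: connect *only* to node A, creating one explicit edge
# in the topology; repeat with different -connect targets per node.
bitcoind -regtest -daemon -listen=1 -discover=0 -dnsseed=0 -connect=
```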

SQL Server HA Design (FCI) – Windows Storage Replication – Test-SRTopology – Physical Memory Error – AWS EC2

I'm trying to set up an AlwaysOn Failover Cluster Instance (not Availability Groups) on 2 VMs (each in a different AWS Availability Zone) on AWS EC2 instances.

To make this happen, I have to set up the Windows Server (2016+) stretch-cluster Storage Replica feature between the disks in the respective VMs.

When I run the "Test-SRTopology" PowerShell command, I get the physical memory error below. My newly spun-up VMs have nothing else running on them and have 32 GB of memory, almost all of it free.

Any idea why I'm getting this memory error? Is the Windows stretch-cluster Storage Replica feature designed to work only on physical machines and not VMs? If not, any idea what is going on?

Thanks for your guidance.

Physical Memory Requirement Test: SQL-01 does not have the required physical memory to deploy Storage Replica. The minimum physical memory requirement to deploy Storage Replica is 2GB. Actual physical memory available on TM-SQL-01 is 0GB.
Physical Memory Requirement Test: SQL-02 does not have the required physical memory to deploy Storage Replica. The minimum physical memory requirement to deploy Storage Replica is 2GB. Actual physical memory available on TM-SQL-02 is 0GB.
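For reference, a typical invocation looks like the following; the computer names, volume letters, and result path here are hypothetical placeholders, and the "0GB" figure in the errors above is what the cmdlet reported, not the VMs' actual RAM:

```powershell
# Hypothetical names and paths - adjust to your environment.
Test-SRTopology -SourceComputerName SQL-01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SQL-02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 5 -ResultPath C:\Temp
```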

hosting – Fatal error: Out of memory in WordPress running on AWS EC2?

I launched an EC2 instance, Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type,
with all options at their defaults (free tier), PHP 7.3.21 (cli) (built: Aug 21 2020 23:21:45) ( NTS ),
and mysql Ver 14.14 Distrib 5.7.30, for Linux (x86_64) using EditLine wrapper.

I installed WordPress 5.5.1 and ran it online for testing.

I configured it as below:
sudo vim /etc/php.ini (php.ini file)

memory_limit = 1024M

I added two lines in wp-config.php

define('WP_MEMORY_LIMIT', '1024M');

Using .htaccess

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /wordpress/index.php [L]
</IfModule>
# END WordPress

<IfModule mod_php7.c>
php_value memory_limit 1024M
</IfModule>

But sometimes I still get the error messages like:

Fatal error: Out of memory (allocated 28311552) (tried to allocate
65536 bytes) in Unknown on line 0


Fatal error: Out of memory (allocated 24117248) (tried to allocate
143360 bytes) in file…

Does anyone know how to fix this issue? Please help.

In the php.ini file, before the line `memory_limit = 1024M`, I see the comment:

; Maximum amount of memory a script may consume (128MB)

Does that mean that even though I set memory_limit = 1024M, the memory limit is still 128MB on EC2?
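One relevant fact: "Out of memory ... in Unknown on line 0" at allocation sizes far below memory_limit usually means the operating system itself ran out of RAM, not that PHP hit its limit (the stock php.ini comment only documents the 128M default; it does not override your setting). On a swap-less free-tier instance, a common workaround is to add a swap file. A sketch, assuming root access and 1 GB of spare disk:

```shell
# Create and enable a 1 GB swap file (size is illustrative).
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile     # swap files must not be world-readable
sudo mkswap /swapfile        # format it as swap
sudo swapon /swapfile        # enable it immediately
# Persist it across reboots:
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
```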

php – I can’t connect to websockets with Nginx + Ratchet using SSL from Let’s Encrypt in EC2

I'm trying to start my chat server, which uses the Ratchet (PHP) library, but I'm not succeeding; does everything indicate that I need a reverse proxy? The server starts perfectly from the PHP script in the console, but the browser (Chrome) console reports a connection timed out error, so what can that be?

I use EC2 to host my website, and I have a Let's Encrypt SSL certificate along with the Nginx web server. What can I do to start a websocket connection with HTTPS and Nginx?

Do I need to set up a reverse proxy? How can I do it in the Nginx configuration? My chat link is

Which port should I use in the PHP script and JavaScript code, 8080 or 8443?
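For context, the usual pattern with Ratchet behind Nginx is: the browser connects with wss:// on port 443, Nginx terminates TLS and proxies the Upgrade request to Ratchet listening on plain 8080 on localhost. So the PHP script keeps using 8080, while the JavaScript only ever sees wss:// on 443. A sketch of the proxy location (the /chat path is a hypothetical choice):

```nginx
# Inside the existing "listen 443 ssl" server block:
location /chat {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;                  # required for websockets
    proxy_set_header Upgrade $http_upgrade;  # forward the Upgrade handshake
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600;                 # keep idle sockets open
}
```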

Nginx Configuration:

    server {
            root /var/www/;
            index index.html index.php index.htm index.nginx-debian.html;

            location ~ \.php$ {
                    include snippets/fastcgi-php.conf;
                    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
            }

            location / {
                    proxy_set_header        Host $host;
                    proxy_set_header        X-Real-IP $remote_addr;
                    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header        X-Forwarded-Proto $scheme;
                    proxy_pass          http://localhost:8080;
                    proxy_read_timeout  90;
                    proxy_redirect      http://localhost:8080;
                    #try_files $uri $uri/ =404;
            }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

server {
    if ($host = ) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = ) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80;
        listen [::]:80;

   return 404;  # managed by Certbot
}

server {
        listen 80;
        listen [::]:80;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

        location / {