linux – How do I get Snort to detect traffic going to a Metasploitable Docker container?

I have set up Snort, and I have Metasploitable 2 running in a Docker container on a CentOS 7 host. I am trying to get Snort to detect traffic traveling to the Metasploitable 2 container. I've tried pinging the Metasploitable 2 container from my host, but Snort doesn't detect it. It also doesn't detect anything when I ping my host from the Metasploitable container.

However, Snort does detect my pings whenever I ping google.com or other websites from my host.

How do I get Snort to detect traffic traveling to a docker container?
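
By default Snort only sees the interface it listens on; on the default bridge network, host-to-container traffic crosses docker0 and never touches the external NIC. A minimal sketch of listening on the bridge instead (the config path, and a HOME_NET that covers the bridge subnet, are assumptions about your setup):

# listen on the Docker bridge rather than the external interface
snort -i docker0 -A console -q -c /etc/snort/snort.conf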

Docker host from container and vice versa via default bridge

Problem description

The Docker bridge is not working as expected.

The host is not reachable from the Docker container, and vice versa.

Steps to reproduce

docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
e83fc4ed421c        bridge              bridge              local
7a85d027a7f6        host                host                local
09b1dcfaa497        none                null                local

brctl show

bridge name     bridge id               STP enabled     interfaces
docker0         8000.000000000000       no

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno16777984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a8:b3:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.24.91.47/24 brd 172.24.91.255 scope global eno16777984
       valid_lft forever preferred_lft forever
83: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:ce:69:0a:60 brd ff:ff:ff:ff:ff:ff
    inet 172.26.0.1/16 brd 172.26.255.255 scope global docker0
       valid_lft forever preferred_lft forever

docker run -dt ubuntu sleep infinity

fef2c3aaf64ccacc21a16de6029d22e1ba7ff8de770c9c14532f8d659d0d694d
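
A quick way to confirm the address Docker actually assigned (a sketch; the short ID is the one returned by the run above):

# print the container's IP on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fef2c3aaf64c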

brctl show

bridge name     bridge id               STP enabled     interfaces
docker0         8000.000000000000       no              veth38adad5

docker network inspect bridge

"Containers": {
            "fef2c3aaf64ccacc21a16de6029d22e1ba7ff8de770c9c14532f8d659d0d694d": {
                "Name": "objective_aryabhata",
                "EndpointID": "84e7836885750933a76d1fe0ac4fc86d020eb453ca45f62c1e2fb3e27afd7e9c",
                "MacAddress": "02:42:ac:1a:00:02",
                "IPv4Address": "172.26.0.2/16",
                "IPv6Address": ""
            }
        }

Host to container: ping 172.26.0.2 (the container's address per docker network inspect)
Expected : ping should work
Actual : not working

Container to host: docker run busybox ping 172.24.91.47
Expected : ping should work
Actual : not working

Container to internet: docker run busybox ping 8.8.8.8
Expected : ping should work
Actual : not working
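
Two host-side settings are commonly behind this exact symptom on CentOS 7: IP forwarding and an interfering firewalld. A quick check (a sketch, not a confirmed diagnosis for this machine):

# should print net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
# if firewalld is running, it may conflict with the iptables rules shown below
firewall-cmd --state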

Additional information

uname -a

Linux teleblnk9147 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

docker -v

Docker version 19.03.8, build afacb8b

sudo iptables-save

# Generated by iptables-save v1.4.21 on Tue Nov 24 16:50:20 2020
*filter
:INPUT ACCEPT (387891:46446579)
:FORWARD ACCEPT (0:0)
:OUTPUT ACCEPT (341373:46544705)
:DOCKER - (0:0)
:DOCKER-ISOLATION-STAGE-1 - (0:0)
:DOCKER-ISOLATION-STAGE-2 - (0:0)
:DOCKER-USER - (0:0)
-A INPUT -i docker0 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Tue Nov 24 16:50:20 2020
# Generated by iptables-save v1.4.21 on Tue Nov 24 16:50:20 2020
*nat
:PREROUTING ACCEPT (3399:203940)
:INPUT ACCEPT (3399:203940)
:OUTPUT ACCEPT (2430:149091)
:POSTROUTING ACCEPT (2430:149091)
:DOCKER - (0:0)
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.26.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Tue Nov 24 16:50:20 2020

service iptables status

Redirecting to /bin/systemctl status  iptables.service
● iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled; vendor preset: disabled)
   Active: active (exited) since Mon 2020-11-23 16:13:06 IST; 24h ago
 Main PID: 85593 (code=exited, status=0/SUCCESS)
   Memory: 0B
   CGroup: /system.slice/iptables.service
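
One way to narrow this down further is to watch the bridge while pinging the container, and to see whether any FORWARD rule counters move (a sketch, assuming tcpdump is installed):

# watch ICMP on the bridge while running the ping
tcpdump -ni docker0 icmp
# check which FORWARD rules (if any) are matching
iptables -vnL FORWARD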

macos – Recover Apple CoreStorage Logical Volume Group (LVG) container / partition / volume

I wasn't able to mount the disk on my MacBook. (I had removed both hard drives from my iMac, since its logic board went bad, and I decided not to put more money into Apple products after the poor hardware reliability I recently experienced with my 2015 iMac and 2018 MacBook Pro.)

So, to recover the data from the drives and make them mountable, after connecting both to my MacBook Pro's Thunderbolt ports using adapters:

  1. I created a CoreStorage volume named FUSION using "diskutil coreStorage create FUSION /dev/disk2 /dev/disk3", i.e. disk2 (128 GB) and disk3 (2 TB).

  2. It looked good, so I tried a shortcut: running "First Aid" in Disk Utility to make the drive mountable.

  3. It ran without error, but it created a new, empty volume of 2 TB in total, and I wasn't able to see any of my old files in it.

  4. Then, while trying to undo this FUSION volume, I accidentally ran "diskutil cs deleteLVG (logical volume group GUID)".

  5. Now I do not want to do anything else until I am sure it is right and can restore the container and partitions that existed.

Can anyone suggest a good way to restore my partitions and make the disks mountable?
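
Whatever the recovery route, capturing the current state first is low-risk and will be needed for any answer (a sketch):

# snapshot what the system still sees before touching anything
diskutil list
diskutil cs list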

How to remove free space and make the APFS container take it up

I am on macOS 11.1 Big Sur.
I have a partition with no disk identifier, shown as (free space).
I would like to remove this and have the APFS container take up the free space.

Here is a screenshot of my GPT and Disks

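For reference, the usual way to let an APFS container absorb adjacent free space is diskutil's resizeContainer with a size of 0 ("grow as much as possible"); a sketch, where disk0s2 is a placeholder for the container's physical store (verify with diskutil list first):

# grow the APFS container into the contiguous free space that follows it
diskutil apfs resizeContainer disk0s2 0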

3D dynamic liquid wobble inside a container in Unity3D

How can I implement a dynamic liquid wobble inside a bottle in Unity3D, as seen in the following link (the yellow liquid inside the spray can at 3:10)?

How to remove a container disk and reclaim the space for the boot partition?

I want to delete the Apple_APFS Container disk1 and reclaim the space on disk2.

I boot from disk2 (Macintosh HD).

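A heavily hedged sketch of one possible sequence, assuming both containers live on the same physical disk and that disk1 is the unwanted container; deleteContainer is destructive, so verify identifiers with diskutil list and have a backup first:

# destroys Container disk1 and all volumes inside it
diskutil apfs deleteContainer disk1
# then grow the surviving container into the freed space
diskutil apfs resizeContainer disk2 0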

networking – Ports exposed by docker container are shown as filtered – unable to connect

I am working on a fresh server installation of Ubuntu 20.04.
I started a sample nginx container by running docker run --rm -p 80:80 nginx
Port 80 appears to be open on the machine, but I can't curl the nginx default page:

$ nmap localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 13:06 GMT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000077s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 998 closed ports
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 81.169.xxx.xxx/32 scope global dynamic eno1
       valid_lft 60728sec preferred_lft 60728sec
    inet6 fe80::225:90ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:70:d9:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:70ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
48: br-49042740d2e8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:63:fe:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:63ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
68: veth17ce2e9@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether d6:e2:53:0b:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d4e2:53ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever


$ iptables-save
# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*filter
:INPUT ACCEPT (151:14142)
:FORWARD DROP (15:780)
:OUTPUT ACCEPT (123:16348)
:DOCKER - (0:0)
:DOCKER-ISOLATION-STAGE-1 - (0:0)
:DOCKER-ISOLATION-STAGE-2 - (0:0)
:DOCKER-USER - (0:0)
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-49042740d2e8 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-49042740d2e8 -j DOCKER
-A FORWARD -i br-49042740d2e8 ! -o br-49042740d2e8 -j ACCEPT
-A FORWARD -i br-49042740d2e8 -o br-49042740d2e8 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-49042740d2e8 ! -o br-49042740d2e8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-49042740d2e8 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Sun Nov 15 13:00:57 2020
# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*nat
:PREROUTING ACCEPT (20:1254)
:INPUT ACCEPT (20:1254)
:OUTPUT ACCEPT (0:0)
:POSTROUTING ACCEPT (0:0)
:DOCKER - (0:0)
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.19.0.0/16 ! -o br-49042740d2e8 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i br-49042740d2e8 -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Sun Nov 15 13:00:57 2020

From my local machine, I am unable to connect to the server. Ports are being shown as filtered:

$ nmap example.de -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 14:12 CET
Nmap scan report for example.de (81.169.xxx.xxx)
Host is up (0.037s latency).
rDNS record for 81.169.xxx.xxx: h290xxxx.stratoserver.net
Not shown: 994 closed ports
PORT     STATE    SERVICE
22/tcp   open     ssh
80/tcp   filtered http
135/tcp  filtered msrpc
139/tcp  filtered netbios-ssn
445/tcp  filtered microsoft-ds
9876/tcp filtered sd

Nmap done: 1 IP address (1 host up) scanned in 2.67 seconds

Running the container in host network mode works as expected, and I can access the nginx default page via localhost and from my local machine:
docker run --rm --network host nginx

Why is the port publishing not working as expected?
How can I fix this / analyze the problem further?
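
A few checks that usually narrow this down (a sketch; interface and chain names are taken from the output above):

# does the SYN reach the external interface and the bridge at all?
tcpdump -ni eno1 tcp port 80
tcpdump -ni docker0 tcp port 80
# watch the packet counters on the DNAT rule while retrying
iptables -t nat -L DOCKER -nv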

Docker container with GRE network incorrect checksum

I have a server with docker, whose containers connect to the internet via a GRE tunnel. My configuration looks like this (bridge network):

{
    "Subnet": "10.0.0.128/25",
    "Gateway": "10.0.0.129"
}

GRE config on server A:

ip tunnel add gre1 mode gre local (server A) remote (server B) ttl 255
ip addr add 10.0.0.1/24 dev gre1
ip link set gre1 up

iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o gre+ -j SNAT --to-source (server A)
sudo iptables -A FORWARD -d 10.0.0.2 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A FORWARD -s 10.0.0.2 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -I POSTROUTING -s 10.0.0.0/24 -j SNAT --to-source (server A)

iptables -t nat -A PREROUTING -m tcp  -p tcp -i venet0 -d (server A) -j DNAT --to-destination 10.0.0.2
iptables -t nat -A PREROUTING -m udp -p udp -i venet0 -d (server A) -j DNAT --to-destination 10.0.0.2

GRE config on server B:

ip tunnel add gre1 mode gre local (server B) remote (server A) ttl 255
ip addr add 10.0.0.2/24 dev gre1
ip link set gre1 up

ip rule add from 10.0.0.0/24 table GRE
ip route add default via 10.0.0.1 table GRE
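
Note that the named routing table on server B assumes a matching entry in /etc/iproute2/rt_tables, e.g. (the table number is an arbitrary free slot):

# register the custom table name used by the rule/route commands above
echo "100 GRE" >> /etc/iproute2/rt_tables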

The GRE tunnel has been tested and is working. Outbound traffic from containers to, for example, an HTTP API is working too.

The problem is: when I connect to a Minecraft server inside a Docker container, it connects (some packets come through), but the client never finishes joining.

This is a tcpdump log from my computer (Minecraft client):

    Jens-PC.36024 > (server A).25502: Flags [S], cksum 0x8e27 (incorrect -> 0xf938), seq 2212875319, win 64240, options [mss 1460,sackOK,TS val 794580493 ecr 0,nop,wscale 7], length 0
    (server A).25502 > Jens-PC.36024: Flags [S.], cksum 0x444e (correct), seq 1251234693, ack 2212875320, win 65160, options [mss 1460,sackOK,TS val 1677569834 ecr 794580493,nop,wscale 7], length 0
    Jens-PC.36024 > (server A).25502: Flags [.], cksum 0x8e1f (incorrect -> 0x6f91), ack 1, win 502, options [nop,nop,TS val 794580521 ecr 1677569834], length 0
    Jens-PC.36024 > (server A).25502: Flags [P.], cksum 0x8e33 (incorrect -> 0x90d5), seq 1:21, ack 1, win 502, options [nop,nop,TS val 794580523 ecr 1677569834], length 20
    Jens-PC.36024 > (server A).25502: Flags [P.], cksum 0x8e2f (incorrect -> 0xcf7b), seq 21:37, ack 1, win 502, options [nop,nop,TS val 794580523 ecr 1677569834], length 16
    (server A).25502 > Jens-PC.36024: Flags [.], cksum 0x6f52 (correct), ack 21, win 509, options [nop,nop,TS val 1677569868 ecr 794580523], length 0

Everything in server A’s tcpdump shows up fine, no incorrect packets.

This is a tcpdump log from server B (Minecraft server):

    (my home ip).36024 > 10.0.0.2.25502: Flags [.], cksum 0x7328 (correct), ack 174, win 501, options [nop,nop,TS val 794580561 ecr 1677569870], length 0
    (my home ip).36024 > 10.0.0.2.25502: Flags [P.], cksum 0x18d9 (correct), seq 37:300, ack 174, win 501, options [nop,nop,TS val 794580740 ecr 1677569870], length 263
    10.0.0.2.25502 > (my home ip).36024: Flags [.], cksum 0x896c (incorrect -> 0x708a), ack 300, win 507, options [nop,nop,TS val 1677570092 ecr 794580740], length 0
    10.0.0.2.25502 > (my home ip).36024: Flags [P.], cksum 0x8970 (incorrect -> 0x7bb7), seq 174:178, ack 300, win 507, options [nop,nop,TS val 1677570365 ecr 794580740], length 4
    10.0.0.2.25502 > (my home ip).36024: Flags [P.], cksum 0x898d (incorrect -> 0x13ae), seq 178:211, ack 300, win 507, options [nop,nop,TS val 1677570365 ecr 794580740], length 33
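
Worth noting: tcpdump routinely reports outgoing packets as "incorrect" when the kernel leaves checksum computation to the NIC (TX checksum offload), so the flags above may be a capture artifact rather than real corruption. One way to rule that in or out (a sketch; the tunnel device may not support ethtool, in which case check the underlying physical interface):

# show current offload settings on the device
ethtool -k gre1 | grep -i checksum
# temporarily disable TX checksum offload to compare captures
ethtool -K gre1 tx off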

Apache Docker container RAM limit needed

If a single user downloads a single 2 GB file, my Apache container (Docker, on a demo website) uses more than 1 GB of RAM.

On the production website I tried two Apache containers with a 14 GB RAM limit each, and it is still not enough.

How can I optimize this? Is there a simple way to prevent running out of memory?
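
If the goal is simply to stop a container from exhausting the host, the cap can be made explicit on the container itself; a minimal sketch (image name and limits are placeholders, not the asker's setup). On the Apache side, the EnableMMAP and EnableSendfile directives are also worth reviewing for large static files:

# hard-cap memory and disallow extra swap for the container (placeholder values)
docker run --rm -m 1g --memory-swap 1g httpd:2.4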

c# – In-proc event dispatching through IoC container

Here are the sender and handler interfaces:

public interface ISender
{
    Task SendAsync(object e);
}

public interface IHandler<in TEvent>
{
    Task HandleAsync(TEvent e);
}

So I register a sender service implementation in the IoC container, which dispatches events to all compatible IHandler<in T> implementations. I use Autofac with a contravariance source, but it could be something else:

(Service)
// requires: using System; using System.Collections;
// using System.Collections.Generic; using System.Linq; using System.Threading.Tasks;
public class Sender : ISender
{
    public Sender(IServiceProvider provider) => Provider = provider;
    IServiceProvider Provider { get; }

    public async Task SendAsync(object e)
    {
        var eventType = e.GetType();
        var handlerType = typeof(IHandler<>).MakeGenericType(eventType);
        var handlerListType = typeof(IEnumerable<>).MakeGenericType(handlerType);
        // target-typed new() cannot create arrays, so use explicit array syntax
        var method = handlerType.GetMethod("HandleAsync", new[] { eventType });
        // resolve IEnumerable<IHandler<TEvent>> from the container
        var handlers = ((IEnumerable)Provider.GetService(handlerListType)).OfType<object>();
        await Task.WhenAll(
            handlers.Select(h =>
                Task.Run(() => (Task)method.Invoke(h, new object[] { e }))
                    // note: this ContinueWith swallows handler exceptions
                    .ContinueWith(_ => { })));
    }
}