linux – Docker fails external connectivity for Redis

I have tried all the solutions I could find on Stack Overflow and other forums. I started by installing Redis with Docker, following the official instructions on Docker Hub, but I was not able to connect to Redis from outside the container.

My initial command:

docker run --name c-redis -d redis

After further searching I found that I needed to execute it as:

docker run --name mag-redis -d redis -p 6379:6379 

But this failed as well, as I got the following error.

$ docker run --name c-redis -d redis -p 6379:6379 
Unable to find image 'redis:latest' locally
latest: Pulling from library/redis
8559a31e96f4: Already exists
85a6a5c53ff0: Already exists
b69876b7abed: Already exists
a72d84b9df6a: Already exists
5ce7b314b19c: Already exists
04c4bfb0b023: Already exists
Digest: sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
Status: Downloaded newer image for redis:latest
075d68ec71abf3752050c947e44a4b1c52305fb6153febe815e31659284612cf
docker: Error response from daemon: driver failed programming external connectivity on endpoint c-redis (f251e744aeacbd1a084f11b0e01731b1e1a36454ca8ad634889dd38dae66314d):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 6379 -j DNAT --to-destination 172.17.0.3:6379 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)).

I then restarted iptables, since that was one of the suggested solutions online, but it did not help and the same error appeared again. I then found another suggestion on Stack Overflow, i.e.

docker run --name c-redis -p 6379:6379 -d redis --restart unless-stopped -v /etc/redis/:/data --appendonly yes --requirepass "password"

However, I got the same iptables error. I then removed the image/container and ran the first command again (docker run --name c-redis -d redis); Redis started, but once more I was not able to access it externally (from the same host, outside the container).

I removed the container/image again and retried the other two commands, but each time I got the same iptables error. I even restarted Docker. Still no luck.

I am using CentOS 7. Please let me know if anyone else has faced this issue; I have been stuck on it for several hours.
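For context, two general points of Docker behavior are worth noting here (as background, not a confirmed fix for this case): options placed after the image name in docker run are passed to the container's command rather than to Docker itself, and the "No chain/target/match by that name" error usually means Docker's iptables chains were flushed (e.g. by restarting iptables) and need to be recreated by the daemon:

```shell
# Port mappings must come before the image name; anything after "redis"
# is handed to redis-server as arguments, not to Docker:
docker run --name c-redis -p 6379:6379 -d redis

# If the DOCKER chain in the nat table was flushed, restarting the Docker
# daemon recreates it:
sudo systemctl restart docker
sudo iptables -t nat -L DOCKER   # verify the chain exists again
```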


EDIT:

Docker version:

Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:46:54 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:45:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Thank you!

How to remount a lost sshfs in a docker container

I have a volume mounted into my docker container filesystem that is itself an sshfs mount on my host. The thing is, when (for one reason or another) the sshfs mount is lost, I also lose the volume in Docker, and even after I reconnect the sshfs mount, the container still can't access the volume. I have to restart it.
How can I make this happen automatically?
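One direction worth exploring (a sketch, not a verified fix; the path /mnt/remote and the image name myimage are placeholders) is to bind-mount the host directory with slave mount propagation, so that a remount on the host side propagates into the already-running container:

```shell
# rslave propagation lets mounts re-established on the host side of
# /mnt/remote become visible inside the container without a restart:
docker run -d --name app \
  --mount type=bind,source=/mnt/remote,target=/data,bind-propagation=rslave \
  myimage
```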

Can’t attach Docker container to overlay network

I’m trying to replicate a production environment (CentOS 7) on a MacBook. The environment consists mainly of several Docker swarm services that interact with a couple of databases. These databases are in containers attached to the same overlay network as the services, but outside the swarm.

I created the swarm and the network on the MacBook:

docker swarm init
docker network create --attachable --driver overlay --subnet 10.0.1.0/24 test-net

I have this compose file to run one of the databases

version: '3.8'

services:
  cassandra:
    image: cassandra:3.11.4
    networks:
      test-net:
        ipv4_address: 10.0.1.10

networks:
  test-net:
    external: true

but whenever I try to run it with docker-compose up, it shows either

ERROR: for d129d5ad7ec8_cassandra-test  Cannot start service cassandra: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded

ERROR: for cassandra  Cannot start service cassandra: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
ERROR: Encountered errors while bringing up the project.

or

ERROR: for d129d5ad7ec8_cassandra-test  Cannot start service cassandra: network test-net not found

ERROR: for cassandra  Cannot start service cassandra: network test-net not found
ERROR: Encountered errors while bringing up the project.

docker network inspect test-net looks completely normal and consistent with the production environment, and I haven’t found a working solution by browsing similar issues. I also tried running the container without the compose file, with similar results. Why can’t I attach this container to the overlay network?
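One way to narrow this down (purely a diagnostic sketch) is to bypass compose entirely and attach a throwaway container to the overlay network directly; if this also fails, the problem lies in the network/swarm setup rather than in the compose file:

```shell
# Attach a one-off container to the attachable overlay network and show
# its interfaces; this exercises exactly the attach path compose uses:
docker run --rm -it --network test-net alpine ip addr
```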

VPS and Docker containers | Web Hosting Talk

Is there any existing hosting software which allows managing Docker containers and VPS resource quotas (bandwidth, databases, disk usage)?

Am I right in thinking that Docker containers are a more secure and better way to set up a server environment for clients, compared to a conventional VPS?

linux networking – Difference between VMWare Bridged network and Docker ipvlan/macvlan?

In virtual networking, I have seen two techniques for connecting a guest machine to the host machine's network.

In VMware/VirtualBox, bridged networking is used to connect the guest machine to the host machine's network.
For example, if the host is on the 172.16.0.0/12 subnet with host IP 172.16.0.2,
then using bridged networking any guest running on that host can be connected to the host network (the 172.16.0.0/12 subnet), and the guest will receive an IP on this subnet, say 172.16.0.6 (just a random valid IP in this subnet).

In Docker, the same is achieved using IPVLAN or MACVLAN.
For example, if the host is on the 172.16.0.0/12 subnet with host IP 172.16.0.2,
then using MACVLAN or IPVLAN any container running on this host can be connected to the host network (the 172.16.0.0/12 subnet), and the container will receive an IP on this subnet, say 172.16.0.6 (again, just a random valid IP in this subnet).

Though the end result is the same, the techniques used seem to be different. Am I right?
I'm exploring this to understand the difference between these two approaches and how bridging differs from IPVLAN/MACVLAN.
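For concreteness, the Docker side of this can be sketched as follows (a sketch only; the parent interface name eth0 and the gateway 172.16.0.1 are assumptions based on the example above):

```shell
# Create a macvlan network bound to the host NIC, mirroring the bridged
# example: containers get addresses directly on 172.16.0.0/12.
docker network create -d macvlan \
  --subnet=172.16.0.0/12 \
  --gateway=172.16.0.1 \
  -o parent=eth0 macvlan-net

# Run a container with the example address from the question and show
# the address it received:
docker run --rm -it --network macvlan-net --ip 172.16.0.6 alpine ip addr
```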

magento2 – Curl magento 2 store on docker

I’m trying to curl a Docker dev install from itself, using Magento\Framework\HTTP\Client\Curl.

I get a failure with the result: SSL certificate problem: self signed certificate

I have these options set:

$this->curl->setOption(CURLOPT_SSL_VERIFYHOST, 0);
$this->curl->setOption(CURLOPT_SSL_VERIFYPEER, 0);

Because it’s local/Docker, I can’t resolve the host, so I’m using CURLOPT_PROXY. The syntax looks like this: https://web.magento2.docker:443

Also tried this option: CURLOPT_HTTPPROXYTUNNEL

If I tail nginx logs they look like this

2020/06/21 23:41:48 (info) 16#16: *75 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:SSL alert number 48) while SSL handshaking, client: 172.27.0.3, server: 0.0.0.0:443

Has anyone experienced a similar issue and found a workaround?

On the command line, this works: curl https://web.magento2.docker --insecure
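On the CLI side, one way to make the hostname reach the right container without a proxy is curl's --resolve option, which pins a hostname to an address without touching DNS (the address 172.27.0.2 below is a placeholder for the web container's actual IP):

```shell
# Pin web.magento2.docker:443 to the container's IP so name resolution
# is bypassed entirely; --insecure skips the self-signed cert check:
curl --resolve web.magento2.docker:443:172.27.0.2 \
     --insecure https://web.magento2.docker/
```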

nodejs – Problem load balancing Docker containers with Nginx

I have a server with 4 vCPUs and 8 GB of RAM hosting a Node REST API. I wanted to use the CPUs more efficiently, so since I'm trying out Docker, I created a Node image with my API and started 4 containers. I configured my Nginx server to balance the load between those four containers. The problem is that performance did not improve; in fact, it got slightly worse.

Nginx configuration:

upstream app{
        server 192.168.1.12:3000;
        server 192.168.1.12:3001;
        server 192.168.1.12:3002;
        server 192.168.1.12:3003;
}

server{
        location / {
                proxy_pass http://app;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

Starting the containers:

docker run -dit --name node -p 3000:3000 node-api:0.4
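Only one command is shown, but the upstream block above points at host ports 3000–3003, so presumably the four containers were started along these lines (a sketch; the container names are assumptions):

```shell
# Start four instances of the API image, each mapped to a different
# host port matching the Nginx upstream entries:
for i in 0 1 2 3; do
  docker run -dit --name "node$i" -p "300$i:3000" node-api:0.4
done
```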

PS: I tried using Node's cluster module and there is a noticeable performance improvement. The servers I have inside the containers do not use this module.

docker – Pod using Vernemq helm package cannot start

I’m using Helm to install VerneMQ on my Kubernetes cluster.

The problem is that it can’t start, even though I accepted the EULA.

Here is the log:

02:31:56.552 [error] CRASH REPORT Process <0.195.0> with 0 neighbours exited with reason:
  {{badmatch,{error,{vmq_generic_msg_store,{bad_return,
    {{vmq_generic_msg_store_app,start,[normal,[]]},
     {'EXIT',{{badmatch,{error,
       {undef,[{eleveldb,validate_options,
                [open,[{block_cache_threshold,33554432},{block_restart_interval,16},
                       {block_size_steps,16},{compression,true},{create_if_missing,true},
                       {data_root,"./data/leveldb"},{delete_threshold,1000},
                       {eleveldb_threads,71},{sst_block_size,4096},
                       {store_dir,"./data/msgstore"},...]]},
               {vmq_storage_engine_leveldb,init_state,2,[{file,"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl"},{line,99}]},
               {vmq_storage_engine_leveldb,open,2,[{file,"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl"},{line,39}]},
               {vmq_generic_msg_store,init,1,[{file,"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store.erl"},{line,181}]}]}}},
      ...}}}}}}},...}
02:31:56.552 [info] Application vmq_server exited with reason: {{badmatch,{error,{vmq_generic_msg_store,{bad_return,...}}}},...}
Kernel pid terminated (application_controller) ({application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',...}}}})
Crash dump is being written to: /erl_crash.dump...

So where is my problem? I just used helm install vernemq vernemq/vernemq to install it.

networking – Docker Misbehaving on ubuntu 18.04 behind squid proxy

I have 3 servers behind a squid proxy. On all of them, the proxy works without any problem for every site, but http://registry-1.docker.io works on only one of the servers. The /etc/hosts and /etc/resolv.conf configurations are all the same, yet I get this error on the two other servers:

Error response from daemon: Get https://registry-1.docker.io/v2/: dial
tcp: lookup registry-1.docker.io on 127.0.0.53:53: server misbehaving

Is there any way to track down the problem?
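The error points at 127.0.0.53, which is the local systemd-resolved stub resolver, so one way to narrow it down (a diagnostic sketch; 8.8.8.8 is just an example upstream resolver) is to compare the stub's answer with a direct lookup:

```shell
# Ask the local systemd-resolved stub, i.e. the path Docker's
# "lookup ... on 127.0.0.53:53" goes through:
systemd-resolve registry-1.docker.io

# Bypass the stub and query an external resolver directly for comparison:
nslookup registry-1.docker.io 8.8.8.8
```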

virtualbox – Vagrant Docker refuses connection

Stack

I’m running the following:

I’m trying to curl localhost:3000 to reach gitlab.

  • When I do that from the vagrant box, it works:

    vagrant@549f682a30a4:~/gdk$ curl localhost:3000
    <html><body>You are being <a href="http://localhost:3000/users/sign_in">redirected</a>.</body></html>vagrant@549f682a30a4:~/gdk$
    

    I also see the request in the gdk tail

  • When I do that from Ubuntu, it doesn’t work:

    gdkuser@mymachine:~/gitlab-development-kit$ curl localhost:3000 -v --trace -
    Warning: --trace overrides an earlier trace/verbose option
    == Info: Rebuilt URL to: localhost:3000/
    == Info:   Trying 127.0.0.1...
    == Info: TCP_NODELAY set
    == Info: Connected to localhost (127.0.0.1) port 3000 (#0)
    => Send header, 78 bytes (0x4e)
    0000: 47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a GET / HTTP/1.1..
    0010: 48 6f 73 74 3a 20 6c 6f 63 61 6c 68 6f 73 74 3a Host: localhost:
    0020: 33 30 30 30 0d 0a 55 73 65 72 2d 41 67 65 6e 74 3000..User-Agent
    0030: 3a 20 63 75 72 6c 2f 37 2e 35 38 2e 30 0d 0a 41 : curl/7.58.0..A
    0040: 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a       ccept: */*....
    == Info: Recv failure: Connection reset by peer
    == Info: stopped the pause stream!
    == Info: Closing connection 0
    curl: (56) Recv failure: Connection reset by peer
    

    I don’t see anything in gdk tail

Analysis / Investigation

Since I see the request in the gdk tail when it comes from inside Vagrant but not when it comes from the host, I assume it gets stuck somewhere along the way. I’m not sure which route the request has to travel. I tried the following things to find out more:

  • Port Forwarding

    • Vagrantfile Network part at line 112
      config.vm.network "forwarded_port", guest: 3000, host: 3000, auto_correct: true

    • Host (Ubuntu): netstat -tulpn | grep 3000
      tcp6 0 0 :::3000 :::* LISTEN 6508/docker-proxy

    • Vagrant: netstat -tulpn | grep 3000
      tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 12387/gitlab-workho
      Again, since internal calls show up in the gdk tail but external calls do not, I assume the request from the host never arrives in Vagrant.

  • docker
    0.0.0.0:3000->3000/tcp, thus the ports should be forwarded and requests should be seen in docker

    gdkuser@mymachine:~/gitlab-development-kit$ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
    549f682a30a4        cb5d370c7aaf        "/bin/sh -c 'supervi…"   About an hour ago   Up About an hour    0.0.0.0:3000->3000/tcp, 127.0.0.1:2222->22/tcp   gitlab-development-kit_default_1592131487
    

    When attaching to the Docker container and executing curl from either Vagrant or Ubuntu, I don’t get any entry in the running info list in the container.

  • VirtualBox

    • I’ve read about troubles with virtualbox 5.2.xx, as I’m running virtualbox 6.1.10, I assume that doesn’t apply here. Nevertheless I don’t know how to verify, that virtualbox actually forwards the port.
      I’m also not quite sure, if VirtualBox forwards to Docker, or vice versa.

Do you have any insight into how to proceed with figuring out what’s wrong?
Thanks a lot!
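One detail worth checking, given the netstat output above: inside the box, gitlab-workhorse listens on 127.0.0.1:3000, and port forwarding typically cannot reach a service bound only to loopback. A quick test from inside the vagrant box (10.0.2.15 is the usual VirtualBox NAT guest address, an assumption here):

```shell
# Works, per the output above (the service is reachable on loopback):
curl http://127.0.0.1:3000

# Likely refused if the service is bound to 127.0.0.1 only; forwarded
# requests from the host arrive on the NAT interface, not on loopback:
curl http://10.0.2.15:3000
```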
