Docker Swarm with Traefik: Traefik will not pick up the service

So, I followed this tutorial:

https://medium.com/@tiangolo/docker-swarm-mode-and-traefik-for-a-https-cluster-20328dba6232

So far, it seems to work. But now I want to run my GitLab, and I cannot get Traefik to discover and proxy it. This is my docker-compose.yml:

version: "3"

networks:
  traefik-public:
    external: true
  internal:
    external: false

services:
  gitlab:
    image: ulm0/gitlab
    container_name: gitlab
    restart: always
    hostname: 'gitlab.mydomain'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.mydomain'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    deploy:
      labels:
        - traefik.enable=true
        - traefik.backend=gitlab
        - traefik.frontend.rule=Host:gitlab.mydomain
        - traefik.docker.network=traefik-public
        - traefik.port=443
    volumes:
      - /mnt/raid1/docker/gitlab/config:/etc/gitlab
      - /mnt/raid1/docker/gitlab/logs:/var/log/gitlab
      - /mnt/raid1/docker/gitlab/data:/var/opt/gitlab
    networks:
      - internal
      - traefik-public

I deploy it like this:

sudo docker stack deploy --compose-file docker-compose.yml gitlab

It starts (and complains about the restart option, but that should not matter):

Ignoring unsupported options: restart

Creating service gitlab_gitlab

But when I check Traefik (logs and web UI), nothing shows up.

I am new to this; if someone could explain what I am doing wrong, I would be grateful.
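In swarm mode Traefik reads labels from the service spec, so one quick sanity check is to confirm the labels actually landed on the service (the service name gitlab_gitlab is taken from the stack output above; this assumes a running swarm manager):

```
docker service inspect gitlab_gitlab --format '{{ json .Spec.Labels }}'
```

Labels declared at the service level of the compose file (outside deploy:) end up as container labels rather than service labels, and swarm-mode Traefik only reads the latter, so an empty result here would point at the compose file.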

Docker: error creating a service: "No command specified"

I am very new to the container world.
I am trying to get ora2pg running with docker-compose, but I get the following error message:

postgres_db is up-to-date

Creating ora2pg_client ... error

ERROR: for ora2pg_client  Cannot create container for service ora2pg: No command specified

ERROR: for ora2pg  Cannot create container for service ora2pg: No command specified

ERROR: Encountered errors while bringing up the project.

My docker-compose.yml is the following:

version: "3.7"
services:
  postgresql:
    restart: always
    image: postgres
    container_name: "postgres_db"
    ports:
      - "5432:5432"
    environment:
      - DEBUG=false
      - DB_USER=
      - DB_PASS=
      - DB_NAME=
      - DB_TEMPLATE=
      - DB_EXTENSION=
      - REPLICATION_MODE=
      - REPLICATION_USER=
      - REPLICATION_PASS=
      - REPLICATION_SSLMODE=
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
      - ./postgres/initdb:/docker-entrypoint-initdb.d

  ora2pg:
    image: flockers/ora2pg
    container_name: "ora2pg_client"
    environment:
      - DB_HOST=127.0.0.1
      - DB_SID=xe
      - ORA2PG_USER=MAX
      - DB_PASS=MAX
    volumes:
      - ./ora2pg/export:/export

Note: I already have an Oracle database on the same machine.

Thank you!
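For context, "No command specified" usually means the image defines neither an ENTRYPOINT nor a default CMD, so Docker has nothing to run. A sketch of the fix, with a purely illustrative command and config path (check the image's documentation for the real invocation):

```yaml
  ora2pg:
    image: flockers/ora2pg
    container_name: "ora2pg_client"
    # The image apparently ships no default command, so one must be supplied.
    # "ora2pg" and the config path below are assumptions, not the image's
    # documented interface.
    command: ["ora2pg", "-c", "/etc/ora2pg/ora2pg.conf"]
```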

How do I make a copy/snapshot of a Docker container with all its data?

I'm trying to make a clone or snapshot of a Docker container and all its content.

More specifically, I have two containers running databases, one with Cassandra and one with MySQL. They are used in tests, so I would like to snapshot/copy each one into another container in order to use it without "spoiling" the original, but I have not managed to do it.

While looking into backing up / cloning / snapshotting containers, I came across the docker commit and docker save commands, which save an image that can then be loaded and run as another container, but I did not succeed with either.

What happens is that I can copy the container and even its configuration, but not the data of the database instances; that comes up empty.

I could write scripts to do this and run them every time I need a new container, but I think a clone/snapshot would be simpler, and it seems like it should be trivial; I just do not know how to do it 🙁

So my question is: how do I make a backup or snapshot of a Docker container and create a new one from it, keeping the data, especially for a container running a database, with its tables and data?
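A likely explanation for the empty data, for what it's worth: docker commit and docker export capture only the container's writable filesystem layer, while the official database images (mysql, cassandra) declare VOLUME for their data directories, so the data lives in volumes that those commands skip. A sketch, assuming the MySQL container is named mysql-test and a running Docker daemon:

```
# Snapshot the container's filesystem layer into a new image
# (this will NOT include /var/lib/mysql, which is a volume)
docker commit mysql-test mysql-test-snapshot

# Archive the volume's contents via a throwaway container that mounts
# the same volumes and writes a tarball to the host
docker run --rm --volumes-from mysql-test -v "$(pwd)":/backup ubuntu \
  tar czf /backup/mysql-data.tar.gz /var/lib/mysql
```

A new container can then be started from mysql-test-snapshot and the tarball restored into its data volume.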

Error response from daemon when running a Docker image

I just installed Docker v18.09.2.

And, as indicated in the official documentation, I tried to run nginx.

But I get this error:

PS C:\Users\rmali> docker run --detach --publish 8090:80 --name webserver nginx

d2e8a8df30520b2c379787a210d1203d56a3f78b9c38187ae04f20c8ad9f1745

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint webserver (c58767e17064fffd8d5313a0a2f4ffcd713c1224524d3280873d69d1848136): 8090:tcp:172.17.0.2:80: input/output error.

What am I missing?
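On Docker for Windows this error frequently means the host port is already in use or falls inside a Windows reserved port range; that is only a hypothesis here, but it is cheap to check from an elevated PowerShell:

```
# Is anything already listening on 8090?
netstat -ano | findstr :8090

# Which TCP port ranges has Windows reserved? (Hyper-V often grabs blocks)
netsh interface ipv4 show excludedportrange protocol=tcp
```

If 8090 shows up in either output, retrying with a different host port (e.g. --publish 8091:80) is a quick way to confirm the diagnosis.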

mysql – Docker MariaDB lost data

I was using the official mariadb 10.3 image as a database, with InnoDB and innodb_file_per_table enabled. A few months ago I pulled the image for a minor release and let it run. As I have now discovered, this did not upgrade the database automatically, as indicated by log messages telling me to run mysql_upgrade. The database had not been restarted since that time.

New data was added and used during that period; however, the container was then restarted, and it now seems that all the data since the pull a few months ago is lost in certain tables, and the .ibd files, even in the backups, show no modifications since then.

Any suggestions on where this data could have been written, or how to recover it?

Error logs before the restart:

InnoDB: Table mysql/innodb_table_stats has length mismatch in the column name table_name. Please run mysql_upgrade
InnoDB: Table mysql/innodb_index_stats has length mismatch in the column name table_name. Please run mysql_upgrade

Error logs during the crash:

[Note] InnoDB: Using Linux native AIO
[Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
[Note] InnoDB: Uses event mutexes
[Note] InnoDB: Compressed tables use zlib 1.2.11
[Note] InnoDB: Number of pools: 1
[Note] InnoDB: Using SSE2 crc32 instructions
[Note] InnoDB: Initializing buffer pool, total size = 512M, instances = 1, chunk size = 128M
[Note] InnoDB: Completed initialization of buffer pool
[Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
[Note] InnoDB: Starting crash recovery from checkpoint LSN=471249973567
[Note] InnoDB: 16 transaction(s) which must be rolled back or cleaned up in total 16 row operations to undo
[Note] InnoDB: Trx id counter is 602890104
InnoDB: Starting final batch to recover 170 pages from redo log.

All innodb_force_recovery modes were tried, but only level 6 worked.

InnoDB: Tablespace for table `database name`.`table name` could not be found in the cache. Attempting to load the tablespace with space id 1216

Docker and NAT to LAN on the same machine using iptables

I have been using iptables on my lab server (Ubuntu 18.04) for a while to do NAT for the rest of the devices on my network:

-t nat -A PREROUTING -i eno1 -p tcp -m tcp --dport 23 -j DNAT --to-destination 10.0.1.2:22
-t nat -A POSTROUTING -o eno1 -j MASQUERADE

-A FORWARD -s 10.0.0.0/24 -i eno2 -o eno1 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 10.0.1.2 -p tcp -m tcp --dport 22 -j ACCEPT

In the past, this worked very well. However, it broke when I installed Docker, almost certainly because Docker rewrote all of my iptables rules. Some of my rules do survive:

% sudo iptables -t nat -v -L
Chain PREROUTING (policy ACCEPT 257 packets, 36440 bytes)
 pkts bytes target     prot opt in   out  source        destination
    6  1384 DNAT       tcp  --  eno1 any  anywhere      anywhere      tcp dpt:telnet to:10.0.1.2:22
  133  8676 DOCKER     all  --  any  any  anywhere      anywhere      ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 122 packets, 8474 bytes)
 pkts bytes target     prot opt in   out  source        destination

Chain OUTPUT (policy ACCEPT 42 packets, 3008 bytes)
 pkts bytes target     prot opt in   out  source        destination
    0     0 DOCKER     all  --  any  any  anywhere     !127.0.0.0/8   ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 21 packets, 2395 bytes)
 pkts bytes target     prot opt in   out               source          destination
    0     0 MASQUERADE all  --  any !docker0           172.17.0.0/16   anywhere
    0     0 MASQUERADE all  --  any !br-643d6580203c   172.18.0.0/16   anywhere
   39  2900 MASQUERADE all  --  any  eno1              anywhere        anywhere
    0     0 MASQUERADE tcp  --  any  any               172.18.0.2      172.18.0.2    tcp dpt:8443

Chain DOCKER (2 references)
 pkts bytes target     prot opt in                out  source     destination
    0     0 RETURN     all  --  docker0           any  anywhere   anywhere
    0     0 RETURN     all  --  br-643d6580203c   any  anywhere   anywhere
    0     0 DNAT       tcp  -- !br-643d6580203c   any  anywhere   anywhere   tcp dpt:https to:172.18.0.2:8443

% sudo iptables -v -L
Chain INPUT (policy ACCEPT 600 packets, 44910 bytes)
 pkts bytes target     prot opt in   out  source        destination

Chain FORWARD (policy DROP 135 packets, 27966 bytes)
 pkts bytes target                    prot opt in                out               source        destination
  176 32752 DOCKER-USER               all  --  any               any               anywhere      anywhere
  176 32752 DOCKER-ISOLATION-STAGE-1  all  --  any               any               anywhere      anywhere
    0     0 ACCEPT                    all  --  any               docker0           anywhere      anywhere      ctstate RELATED,ESTABLISHED
    0     0 DOCKER                    all  --  any               docker0           anywhere      anywhere
    0     0 ACCEPT                    all  --  docker0          !docker0           anywhere      anywhere
    0     0 ACCEPT                    all  --  docker0           docker0           anywhere      anywhere
    0     0 ACCEPT                    all  --  any               br-643d6580203c   anywhere      anywhere      ctstate RELATED,ESTABLISHED
    0     0 DOCKER                    all  --  any               br-643d6580203c   anywhere      anywhere
    0     0 ACCEPT                    all  --  br-643d6580203c  !br-643d6580203c   anywhere      anywhere
    0     0 ACCEPT                    all  --  br-643d6580203c   br-643d6580203c   anywhere      anywhere
    0     0 ACCEPT                    all  --  eno2              eno1              10.0.0.0/24   anywhere      ctstate NEW
   23  2682 ACCEPT                    all  --  any               any               anywhere      anywhere      ctstate RELATED,ESTABLISHED
    6  1384 ACCEPT                    tcp  --  any               any               anywhere      dione         tcp dpt:ssh

Chain OUTPUT (policy ACCEPT 505 packets, 66607 bytes)
 pkts bytes target     prot opt in   out  source        destination

Chain DOCKER (2 references)
 pkts bytes target     prot opt in                out               source     destination
    0     0 ACCEPT     tcp  -- !br-643d6580203c   br-643d6580203c   anywhere   172.18.0.2   tcp dpt:8443

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target                    prot opt in                out                source     destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0          !docker0           anywhere   anywhere
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  br-643d6580203c  !br-643d6580203c   anywhere   anywhere
  176 32752 RETURN                    all  --  any               any               anywhere   anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
 pkts bytes target     prot opt in   out               source     destination
    0     0 DROP       all  --  any  docker0           anywhere   anywhere
    0     0 DROP       all  --  any  br-643d6580203c   anywhere   anywhere
    0     0 RETURN     all  --  any  any               anywhere   anywhere

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in   out  source        destination
  176 32752 RETURN     all  --  any  any  anywhere      anywhere

Static routes, for example, still work. I can still reach my workstation at 10.0.1.2 through port 22, but that same machine cannot get out. Watching the traffic leaving the server, it looks like a ping is not even making it out, let alone coming back.

I tried simply adding my rules back at the top against the running Docker instance, but that did not work. The Docker documentation suggests putting things in the DOCKER-USER chain, although that chain does not exist in the nat table. The Docker documentation also suggests that I can disable Docker's iptables manipulation entirely, although then I do not know how to manually route traffic to the containers.

Honestly, I do not understand Docker's rules well enough. Has anyone made this work?
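For reference, Docker evaluates the DOCKER-USER chain of the filter table before any of its own FORWARD rules and leaves it alone across restarts, so host-to-LAN forwarding rules can usually be re-added there. A sketch using the interface names from the question (requires root; rule order matters since -I inserts at the top):

```
# Accept LAN-to-WAN traffic before Docker's own FORWARD rules run
iptables -I DOCKER-USER -i eno2 -o eno1 -s 10.0.0.0/24 -j ACCEPT
iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

The nat-table PREROUTING/POSTROUTING rules from the question should not need to move; only the FORWARD rules get shadowed by Docker's DROP policy.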

java: invalid or corrupt jar error when using Docker & Jenkins to deploy a Spring Boot application to Azure

My application builds green in Jenkins, but it does not actually run when deployed to Azure, due to an "invalid or corrupt jarfile".

When I run java -jar build/libs/my-app-name-0.0.1.jar locally, my application works fine. I thought maybe I was not building a real fat jar, so I brought in the Shadow plugin as Baeldung suggests, but it still does not seem to work.

Dockerfile:

FROM some/enterprise/jdk8/url
COPY . .
ADD build/libs/my-app-name-0.0.1.jar app.jar

ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "build/libs/my-app-name-0.0.1.jar"]

Jenkinsfile:

pipeline {
    // agent declaration not shown
    stages {
        stage("configuration") {
            steps {
                sh("chmod -R 777 /script")
            }
        }

        stage("Build") {
            steps {
                sh("./script/build.sh")
            }
        }

        stage("Docker: Build Latest Image") {
            steps {
                sh("./script/docker-build.sh ${name} latest")
            }
        }

        stage("Deploy") {
            steps {
                sh("./script/docker-build.sh ${name} latest")
                sh("./script/docker-push.sh ${name} latest")
                sh("./script/deploy.sh myEnv ${name}")
            }
        }
    }
}

Fragment of build.gradle:

apply plugin: 'com.github.johnrengelman.shadow'

bootJar {
    baseName = 'my-app-name'
    version = '0.0.1'
}

repositories {
    mavenCentral()
    jcenter()
}

build.sh essentially runs this:

docker run -v $(pwd):/host -w /host ourUrl/our-ci-java:1.0.0 ./gradlew clean assemble
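One inconsistency worth noting in the Dockerfile above: the jar is added into the image as app.jar, yet the ENTRYPOINT points at the build/libs path, which only exists in the image because of the broad COPY . .. A consistent sketch (base image and jar path taken from the question, not verified against the real project):

```dockerfile
FROM some/enterprise/jdk8/url
# Copy only the built fat jar and run it by its in-image name
COPY build/libs/my-app-name-0.0.1.jar app.jar
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "app.jar"]
```

If the COPY . . version ran a stale or partially-written jar from the workspace, that could also explain an "invalid or corrupt jarfile" only in the deployed environment.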

linux – docker, external symbol, gccgo

I compiled the docker client using "hack/make.sh dynbinary" successfully.

docker-17.06.0-dev: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.32

But the symbols in main.main are local, like time.Now.

Thread 1 "docker-17.06.0-" hit Breakpoint 1, 0x0000000000404d10 in main.main ()
(gdb) disassemble
Dump of assembler code for function main.main:
0x0000000000404dcf <+191>: callq 0x4651a0
0x0000000000404dd4 <+196>: callq 0x4b8b70

When a demo program is compiled with "go build -compiler=gccgo", the symbols are external, as shown:

Thread 1 "test_2" hit Breakpoint 1, main.main () at /usr1/Code_Practice/Golang/src/test/test.go:7
(gdb) disassemble

Dump of assembler code for function main.main:
0x0000000000401fd6 <+224>: callq 0x401cd0
0x0000000000401fdb <+229>: add $0x20,%rsp
0x0000000000401fdf <+233>: mov -0xd0(%rbp),%rax
0x0000000000401fe6 <+240>: mov %rax,-0x1a0(%rbp)
0x0000000000401fed <+247>: mov -0xc8(%rbp),%rax
0x0000000000401ff4 <+254>: mov %rax,-0x198(%rbp)
0x0000000000401ffb <+261>: mov -0xc0(%rbp),%rax
0x0000000000402002 <+268>: mov %rax,-0x190(%rbp)
0x0000000000402009 <+275>: lea -0x170(%rbp),%rax
0x0000000000402010 <+282>: mov %rax,%rdi
0x0000000000402013 <+285>: callq 0x401cf0

In addition, the different compilation methods produce different code for time.now:

Dump of assembler code for function time.now:
=> 0x000000000047ef50 <+0>: sub $0x18,%rsp
0x000000000047ef54 <+4>: xor %edi,%edi
0x000000000047ef56 <+6>: mov %rsp,%rsi
0x000000000047ef59 <+9>: callq 0x527bc0
0x000000000047ef5e <+14>: mov 0x8(%rsp),%edx
0x000000000047ef62 <+18>: mov (%rsp),%rax
0x000000000047ef66 <+22>: add $0x18,%rsp
0x000000000047ef6a <+26>: retq
End of assembler dump.

Dump of assembler code for function time.now:
=> 0x0000000000483250 <+0>: sub $0x18,%rsp
0x0000000000483254 <+4>: xor %edi,%edi
0x0000000000483256 <+6>: mov %rsp,%rsi
0x0000000000483259 <+9>: callq 0x404060
0x000000000048325e <+14>: mov 0x8(%rsp),%edx
0x0000000000483262 <+18>: mov (%rsp),%rax
0x0000000000483266 <+22>: add $0x18,%rsp
0x000000000048326a <+26>: retq
End of assembler dump.

Dump of assembler code for function time.now:
=> 0x00000000004566b0 <+0>: sub $0x10,%rsp
0x00000000004566b4 <+4>: mov 0x1484d5(%rip),%rax # 0x59eb90
0x00000000004566bb <+11>: cmp $0x0,%rax
0x00000000004566bf <+15>: je 0x4566e0
0x00000000004566c1 <+17>: xor %edi,%edi
0x00000000004566c3 <+19>: lea (%rsp),%rsi
0x00000000004566c7 <+23>: callq *%rax

I want to compile the docker binary as a purely dynamically linked binary, meaning the symbols are external and resolved through the PLT and GOT, so that I can hook them using LD_PRELOAD.
Can someone show me a method?
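As a quick way to see which symbols would actually be interposable via LD_PRELOAD, one can compare the binary's dynamic symbol table against its PLT stubs (assuming GNU binutils; "docker" here is the binary produced above):

```
# Symbols resolved at run time by the dynamic linker appear here
nm -D docker | grep -i time

# Calls routed through the PLT show up as *@plt stubs in the disassembly
objdump -d docker | grep '@plt>' | head
```

Only functions that appear as undefined dynamic symbols and are called through the PLT can be intercepted with LD_PRELOAD; statically resolved, local symbols such as the ones in the first dump cannot.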

bitcoind – Bitcoin Core 0.18.0 in a Docker container: could not connect to the server

I probably do not understand how Docker works, but I've tried everything I can think of. I'm trying to start bitcoind in regtest mode inside a Docker container, then execute JSON-RPC commands against the container from the host machine. I am running Bitcoin Core 0.18.0.

My Dockerfile looks like this:

FROM ubuntu:18.04

RUN apt -y update && apt -y install curl
RUN curl -o bitcoin-0.18.0-x86_64-linux-gnu.tar.gz https://bitcoin.org/bin/bitcoin-core-0.18.0/bitcoin-0.18.0-x86_64-linux-gnu.tar.gz
RUN tar xvf bitcoin-0.18.0-x86_64-linux-gnu.tar.gz

RUN mkdir -p /root/.bitcoin
RUN echo "regtest=1" >> /root/.bitcoin/bitcoin.conf \
 && echo "rpcuser=bitcoin" >> /root/.bitcoin/bitcoin.conf \
 && echo "rpcpassword=test" >> /root/.bitcoin/bitcoin.conf \
 && echo "regtest.rpcallowip=172.17.0.0/16" >> /root/.bitcoin/bitcoin.conf \
 && echo "regtest.rpcbind=127.0.0.1:18443" >> /root/.bitcoin/bitcoin.conf

EXPOSE 18443

CMD "/bin/bash"

After building the image, I start the container by running

docker run -p 127.0.0.1:18443:18443

Then I start bitcoind in the container with bitcoind -printtoconsole. I can run bitcoin-cli commands successfully from inside the container.

When trying to execute a bitcoin-cli command from the host machine, I get this:

error: Could not connect to the server 127.0.0.1:18443 (error code 1 - "EOF reached")

Make sure the bitcoind server is running and that you are connecting to the correct RPC port.

If I run docker ps I see this:

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                        NAMES
9c7f81b5dc10   bff200c182ca   "/bin/sh -c '/bin/ba…"   2 minutes ago   Up 2 minutes   127.0.0.1:18443->18443/tcp   stupefied_bhaskara

I can run bitcoind on the host machine and run bitcoin-cli commands against it successfully, so it does not appear to be a misconfiguration of the client.

I wonder if I'm running into this, from the release notes:

The rpcallowip option can no longer be used to automatically listen on all network interfaces. Instead, the rpcbind parameter must be used to specify the IP addresses to listen on. Listening for RPC commands over a public network connection is insecure and should be disabled, so a warning is now printed if a user selects such a configuration. If you need to expose RPC in order to use a tool like Docker, ensure you only bind RPC to your localhost, e.g. docker run [...] -p 127.0.0.1:8332:8332 (this is an extra :8332 over the normal Docker port specification).

Is it possible to run Bitcoin 0.18.0 in a Docker container and use JSON-RPC from the host machine?
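One detail that stands out, for what it's worth: regtest.rpcbind=127.0.0.1 makes bitcoind listen only on the container's own loopback, while traffic forwarded by docker run -p arrives on the container's bridge interface (172.17.0.x), so it can never reach the socket. A bitcoin.conf sketch that binds on all container interfaces, while the -p 127.0.0.1:18443:18443 publish flag still limits exposure to the host's localhost:

```
regtest=1
rpcuser=bitcoin
rpcpassword=test
regtest.rpcallowip=172.17.0.0/16
regtest.rpcbind=0.0.0.0:18443
```

Inside a container, binding to 0.0.0.0 is the usual pattern for any service that must be reachable through a published port.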

Stop and remove all Docker containers and images

List all containers (ID only)

docker ps -aq

Stop all running containers

docker stop $(docker ps -aq)

Remove all containers

docker rm $(docker ps -aq)

Delete all images

docker rmi $(docker images -q)
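On Docker 1.13 and later, much of the same cleanup can be done in one step. Note that prune only removes stopped containers (so stop them first, as above) and that --all also removes every unused image, networks, and the build cache:

```
docker system prune --all --force
```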