Deploy a specific stack service from the docker-compose file

I have a docker-compose.yml that defines multiple services.

When I run docker stack deploy -c docker-compose.yml, it creates / deploys all of those services.

Is it possible to create / deploy just one specific service from the compose file?
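One answer, sketched below under the assumption of a stack named mystack and a service named web (both names are placeholders): docker stack deploy has no per-service flag, but re-running it is idempotent and only touches services whose definition changed, and outside swarm mode docker-compose can start a single service directly.

```shell
# Swarm mode: re-deploying the same stack only updates changed services,
# so "deploying one service" is usually just editing it and re-running:
docker stack deploy -c docker-compose.yml mystack

# Without swarm (plain docker-compose), a single service can be started:
docker-compose up -d web
```

Both commands assume the compose file sits in the current directory.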

Mapping ports to .NET Core endpoints in Docker

launchSettings.json specifies https (5003) and http (5002):


"applicationUrl": "https://localhost:5003;http://localhost:5002"


    command: bash -c "dotnet build --configuration Release Customer.csproj && dotnet bin/Release/netcoreapp2.2/Customer.dll"
    ports:
      - 6001:80
      - 6002:443 # is it possible?

From my docker-compose file I can see that port 80 is where the .NET Core app is listening inside the container, and that it is mapped to 6001 on my localhost.

Is there a port mapping for 443 (SSL) in the container that I can assign to a localhost port? In other words, how can I send an https request to the container, given that the app's applicationUrl setting allows both schemes?
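A sketch of one way to do this: inside the container, Kestrel must actually listen on 443 with a certificate, which the compose file can configure through the standard ASPNETCORE_* environment variables. The certificate path, volume, and password below are placeholders, not values from the question.

```yaml
# Fragment for the service definition; cert path/password are assumptions.
environment:
  - ASPNETCORE_URLS=https://+:443;http://+:80
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
  - ASPNETCORE_Kestrel__Certificates__Default__Password=changeit
volumes:
  - ./certs:/https:ro
ports:
  - 6001:80
  - 6002:443
```

With this in place, https://localhost:6002 on the host reaches the container's 443 listener.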

docker compose: nginx will not serve the json file unless you specify .json in the URL

I have the following nginx.conf file:

server {
    listen 8080;
    server_name dummy_server;

    root /usr/share/nginx/html;

    location / {
      if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 204;
      }

      default_type 'application/json';
      add_header 'Access-Control-Allow-Origin' '*' always;
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
      add_header 'Content-Type' 'application/json';
      if ($request_uri ~* "([^/]*$)") {
        set  $last_path_component  $1;
      }

      try_files $last_path_component.json $last_path_component.json =404;
    }
}

I would like to serve healthcheck.json when localhost:8080/v1/healthcheck is requested
However, nginx refuses to serve any json file: unless I explicitly request localhost:8080/v1/healthcheck.json, it reports the file as not found and answers with the nginx 404 page.
If I add .json to the end of the URL, the file is magically found and everything works.
I have tried everything to force the application/json type, and also changed the try_files line to explicitly return the file I want (I configured try_files healthcheck.json =404;), but everything produces the same problem.

I'm pretty lost … does anyone have any ideas?
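One pattern that makes this work, assuming healthcheck.json lives under the document root at v1/healthcheck.json, is to let try_files itself append the extension to the request URI instead of capturing the last path component:

```nginx
# $uri is /v1/healthcheck, so $uri.json resolves to
# /usr/share/nginx/html/v1/healthcheck.json relative to root.
location /v1/ {
    default_type application/json;
    try_files $uri $uri.json =404;
}
```

Note that try_files arguments are paths relative to root, which is why a bare healthcheck.json argument does not match a request for /v1/healthcheck.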

How to AUTOMATICALLY and safely deploy a new MySQL instance with Docker?

I would like to be able to run a script that deploys my application on a new server with as few manual steps as possible, without exposing the database password to the world.

The application consists of a web application (nginx + custom code) and a MySQL database. I know how to deploy it automatically, but not securely: just store the root password in docker-compose.yml and somewhere in the web application settings. This is obviously not safe at all.

I also know how to deploy it safely: the current MySQL documentation suggests deploying the default MySQL image, grabbing the generated root password, and then running mysql inside the container to change that password to one of your choosing. So I guess I would then have to put this password of my choice in the web application settings, so that my web application can connect to the database.

What I don't know is the recommended way to deploy MySQL both automatically and securely. I can generate the password of my choice during deployment, but if I end up putting it in the settings anyway, what is the point of bothering with the randomly generated MySQL password? Can anyone point me to the best practices?
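A common pattern for this is Docker secrets: the official mysql image reads any *_FILE variant of its environment variables from a file, so the password never appears in the compose file or the image. The sketch below assumes swarm mode and uses placeholder names; the secret file would be generated at deploy time (e.g. with openssl rand) and kept out of version control.

```yaml
# Minimal sketch; service and secret names are assumptions.
version: '3.7'
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password
secrets:
  db_root_password:
    file: ./db_root_password.txt   # generated by the deploy script
```

The web application can read the same secret from /run/secrets/ instead of from its settings file.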

Can an intruder use a Docker Desktop installation to install a keylogger or other capture tools (audio / video, network) on a Windows 10 system?

I am not looking for an exploit tutorial.

"LostVicking" in a Docker forum post seems to be trying to mount their webcam device in a Docker container, without success:

Is it possible to forward webcam video to a Docker container from Windows
10? I have seen the same question for Linux and the solution seems
to be to use:

docker run --privileged -v /dev/video0:/dev/video0

Is there any similar trick when running Docker on Windows 10?
Presumably there is an equivalent mount point that can be bound?

This made me wonder whether Docker Desktop could facilitate the installation of a keylogger or other capture tools (audio, video, network), either by an adversarial user with physical access to a shared machine (university computer lab, cyber café) or by an online intruder. Or can Windows USB devices not be shared with Docker containers through Docker Desktop at all?

Is this possible?

Is there an obvious countermeasure besides uninstalling Docker Desktop?

Obviously, someone with physical access to a Windows machine can install native Windows malware. This question is about whether Docker Desktop adds an additional, less-monitored vector.


Hi, I am trying to create a Python server for container communication. I can create the image and the container and start it, but when I try to communicate with it, it doesn't work. It listens but receives nothing.
Python server below:

import socket

print("The server is running!!!!")
soc = socket.socket()
host = ""                    # bind address ("" = all interfaces), not the string "1234"
port = 1234
soc.bind((host, port))
soc.listen(1)                # listen() was missing, so accept() could never succeed

while True:
    print("Waiting for a message or connection ...")
    conn, addr = soc.accept()
    print("Got a connection from", addr)
    msg = conn.recv(1024).decode()   # recv() returns bytes; decode before comparing
    print(msg)
    if msg == "Oi":
        print("Oi, tudo bem?")
    print("Até mais!!")
    conn.close()

Here is my Dockerfile:

FROM python:3

RUN pip install pystrich

# The script must be copied into the image before CMD can run it;
# "" is an assumed name for the server script above.
# The exec form of CMD uses square brackets, not parentheses, and the
# script hard-codes port 1234, so no "-p" argument is needed.
COPY /
CMD ["python", "/"]

I create the image with the command docker image build -t servidor .
The container with the command docker container run -it -p 1234:1234 --name python-server servidor

I tried remapping the ports, but nothing works.

After this, the container listens. I go to a terminal on my real machine and run nc localhost 1234, but it does not work. Is the mistake in my code or in how I create the service?

Note: I use Xubuntu 64-bit.
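The fixed pattern can be exercised end to end without Docker. The sketch below runs a one-shot version of the server in a thread and talks to it with a small Python client standing in for nc; port 5678 is an arbitrary choice.

```python
import socket
import threading

def serve_once(port, ready):
    """Accept one connection, answer one message, then exit."""
    soc = socket.socket()
    soc.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    soc.bind(("", port))      # bind to an address; the original passed a port string here
    soc.listen(1)
    ready.set()                        # tell the client the server is accepting
    conn, addr = soc.accept()
    msg = conn.recv(1024).decode()     # recv() returns bytes; decode before comparing
    if msg == "Oi":
        conn.sendall(b"Oi, tudo bem?")
    conn.close()
    soc.close()

def send(port, text):
    """Rough Python equivalent of `nc localhost 1234`."""
    client = socket.socket()
    client.connect(("", port))
    client.sendall(text.encode())
    reply = client.recv(1024).decode()
    client.close()
    return reply

ready = threading.Event()
t = threading.Thread(target=serve_once, args=(5678, ready))
t.start()
ready.wait()                           # avoid connecting before listen()
reply = send(5678, "Oi")
t.join()
print(reply)                           # prints: Oi, tudo bem?
```

If this works on the host but nc against the container still fails, the remaining suspects are the missing COPY in the Dockerfile and binding to an address other than all interfaces.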

Should I disable btrfs CoW for /var/lib/docker?

I have read that it is not a good idea to use the btrfs CoW functionality for large files, such as the data directories of a PostgreSQL database.

Since I use Docker for databases, I now wonder whether I should disable CoW for the entire /var/lib/docker directory. But I'm not sure, because Docker's layered file system makes use of this feature, doesn't it?

Or is it possible to disable CoW only for some specific volumes?
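One approach, sketched under the assumption that database data lives in named volumes: disable CoW only for the volumes directory rather than all of /var/lib/docker, since the image-layer storage does benefit from CoW. Note that chattr +C only affects files created after the flag is set, so it must be applied before the volumes are populated.

```shell
# Disable CoW (the No_COW attribute) on the volumes directory only;
# must happen before any data files are created inside it.
sudo mkdir -p /var/lib/docker/volumes
sudo chattr +C /var/lib/docker/volumes
lsattr -d /var/lib/docker/volumes   # a 'C' in the output confirms the flag
```

An alternative is to bind-mount the database data from a directory that already carries the attribute.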

Docker MediaWiki family

I know how to run MediaWiki with docker-compose, but I don't know what to do if I want a MediaWiki family. I don't understand how I need to configure my wiki family: the wikis must share media and also accounts, so that a user only has to create one account.

Let's say this is my docker-compose.yml, what should I do?

version: '3'
services:
  mediawiki:
    image: mediawiki
    container_name: wiki
    restart: always
    ports:
      - 8080:80
    links:
      - database
    volumes:
      - /var/www/html/images
      - ./LocalSettings_de.php:/var/www/html/LocalSettings.php
  database:
    image: mariadb
    restart: always
    environment:
      # @see
      MYSQL_DATABASE: my_wiki
      MYSQL_USER: wikiuser
      MYSQL_PASSWORD: example

Can't Docker Server route network traffic to containers?

I have a Docker server (, however, I cannot access any container with the container's IP address (

IP forwarding has been enabled on the Docker server, but I can only reach the docker interface on that server, nothing more.

For example, the following shows the network interface in Docker:

eth0:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet brd scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever
br-f03a36c56e51:  mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:7c:73:13:6a brd ff:ff:ff:ff:ff:ff
    inet brd scope global br-f03a36c56e51
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7cff:fe73:136a/64 scope link 
       valid_lft forever preferred_lft forever

br-f03a36c56e51 is the Docker bridge interface.

The containers are using the network.

My client is on the same network (, and I could ping However, the client could not ping the containers, for example:

But if I enter the container, I can ping my client successfully.

There is no firewall configured in the container.
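A hypothetical sketch of the usual fix for this symptom (host reaches containers, LAN clients don't): give the client a static route for the container subnet via the Docker host, and make sure the host's forwarding rules accept the traffic. All addresses below are examples only, since the question's actual addresses are not shown.

```shell
# On the client: route the container subnet (example: via
# the Docker host's LAN address (example:
sudo ip route add via

# On the Docker host: allow externally originated traffic to containers.
# DOCKER-USER is the chain Docker reserves for user rules.
sudo iptables -I DOCKER-USER -j ACCEPT
```

The ACCEPT rule is deliberately broad for diagnosis; it should be narrowed once routing is confirmed to work.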

Installation – Scripts and skins missing when installing Magento 1.9.x on Docker

While installing Magento, all scripts and skin assets fail to load.

If I check the logs, I can see this problem:

(error) 6#6: *9 open() "/var/www/skin/install/default/default/favicon.ico" failed (13: Permission denied), client:, server: docker.test, request: "GET /skin/install/default/default/favicon.ico HTTP/1.1", host: "docker.test", referrer: "http://docker.test/index.php/install/"

How can I solve this problem?
I installed Magento with docker-compose.yml (Nginx + MySQL + PHP7.2-fpm).
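The "13: Permission denied" in the log points at filesystem ownership rather than nginx configuration. A sketch of the usual repair, assuming the PHP-FPM service is named php, the web root is /var/www, and the images run as www-data (all typical defaults, not confirmed by the question):

```shell
# Give the web server user ownership of the web root inside the container,
# then normalize directory/file permissions.
docker-compose exec php chown -R www-data:www-data /var/www
docker-compose exec php find /var/www -type d -exec chmod 755 {} \;
docker-compose exec php find /var/www -type f -exec chmod 644 {} \;
```

If the web root is a bind mount, the same ownership fix can be applied on the host instead.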