ldap – Nextcloud Docker migration

I have the following setup for a small Nextcloud instance at the moment:

Nextcloud v19 on a Docker host, with the data stored in a Docker volume, and an LDAP integration.
Now I want to upgrade the installation to Nextcloud v20, add an nginx load balancer, and have persistent data.

How can I migrate the data from the Docker volume to more persistent storage, like a bind mount?

What happens with the link to LDAP? Will all the files and users work as usual or do I have to link everything manually?

How can I migrate the database?
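The copy step and the target layout could look roughly like this (a sketch; the service names, image tags, and /srv/nextcloud host paths are assumptions, not taken from the setup above). First copy the volume contents once, e.g. docker run --rm -v nextcloud_data:/from -v /srv/nextcloud/html:/to alpine cp -a /from/. /to/, then point the compose file at bind mounts:

```yaml
# Hypothetical docker-compose.yml fragment: bind mounts instead of named volumes
services:
  app:
    image: nextcloud:20
    volumes:
      - /srv/nextcloud/html:/var/www/html        # app, config.php, apps
      - /srv/nextcloud/data:/var/www/html/data   # user files
  db:
    image: mariadb:10.5
    volumes:
      - /srv/nextcloud/db:/var/lib/mysql         # database survives container recreation
```

As for LDAP: as far as I know the LDAP configuration and the user/file mappings live in config.php and the database, so if both move along with the files the users should keep working; I have not verified this for a v19 to v20 jump, though.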

docker – How can you set up NFS to automatically create the destination folder?

I am using Docker to create an NFS mount, e.g.:

    volumes:
      zipkin-cassandra:
        driver: local
        driver_opts:
          type: nfs
          device: ":/zipkin-cassandra"
          o: addr=efs.internal,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport

What I usually do is create the folder in the NFS server before, but I want it to create it automatically. Is there some sort of option to do that on NFS?
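As far as I know, plain NFS has no mount option to create a missing directory on the server side. A workaround sketch, assuming the server exports a root the client may mount (the volume and service names here are made up): mount the export root and let a one-shot container create the subfolder before anything else uses it.

```yaml
# Hypothetical fragment: create the subfolder through a mount of the export root
volumes:
  nfs-root:
    driver: local
    driver_opts:
      type: nfs
      device: ":/"                               # export root, not :/zipkin-cassandra
      o: addr=efs.internal,nfsvers=4.1

services:
  mkdirs:
    image: alpine
    volumes:
      - nfs-root:/mnt
    command: mkdir -p /mnt/zipkin-cassandra      # one-shot; exits when done
```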

networking – equivalent of --net=host in dockerfile when creating docker container

I tried docker run --net=host -d --name pdns-recursor pschiffe/pdns-recursor and it works. Now my goal is to use a docker-compose.yml to pass some environment variables without errors.

I tried:

    networks:
      name: host

and:

    networks:
      name: "host"

and also the examples indicated here.

I always get: The Compose file './docker-compose.yml' is invalid because: services.recursor.networks.name contains an invalid type, it should be an object, or a null.

Any suggestions are much appreciated.
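For what it's worth, the compose equivalent of --net=host is network_mode, not an entry under networks. A minimal sketch (the service name is taken from the error message above):

```yaml
services:
  recursor:
    image: pschiffe/pdns-recursor
    network_mode: host   # compose equivalent of `docker run --net=host`
```

Port mappings become unnecessary in host mode, since the container shares the host's network stack directly.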

php – Error 403 Forbidden in a Docker container

I am stuck on a problem. I am new to Docker, and when I try to build and run a PHP+Apache image I do the following:

- I create a folder (prueba) containing another folder (src) and a Dockerfile. Inside the src folder is an index.php with a Hello World.

- I build the image with docker build -t imagen_web_php .

- I run the image: docker run -it -p 80:80 imagen_web_php

- Everything completes without errors, but when I try to access localhost I get the Forbidden error shown in the image. I have tried several PHP versions, on both Linux and Windows, changed the folder paths, and added permissions to /var/www/html, but nothing works for me.

Dockerfile code:

FROM php:7.3-apache

COPY . /var/www/html/


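One thing worth checking (a guess based on the layout described above): COPY . /var/www/html/ copies the whole build context, so index.php ends up in /var/www/html/src/, and Apache answers 403 Forbidden when there is no index file at the document root. A sketch of the fix:

```dockerfile
FROM php:7.3-apache

# Copy only the contents of src/ so index.php lands at the document root
COPY src/ /var/www/html/
```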

Unable to create and connect to my own docker registry

I’m trying to host a private docker registry.

I've followed https://docs.docker.com/registry/ to create my own registry.

But when running:


It exits without error, but without any result either.

Do you know what I'm doing wrong?

Terminating a Docker container with 1 process in S6 Overlay takes > 10 sec

I'm frustrated at the time it takes my container to shut down when using the S6 overlay service. As far as I understand, s6 should run as PID 1 and should issue SIGTERM to all child processes (postfix) when you issue docker stop. I confirmed it is running as PID 1, but it still takes 10 seconds to stop. I tried the Tini init system and it closes instantaneously. What am I doing wrong here?


FROM ubuntu:latest

# Add S6 Overlay
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.1/s6-overlay-amd64-installer /tmp/
RUN chmod +x /tmp/s6-overlay-amd64-installer && /tmp/s6-overlay-amd64-installer /

# Add S6 Socklog
ADD https://github.com/just-containers/socklog-overlay/releases/download/v3.1.1-1/socklog-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/socklog-overlay-amd64.tar.gz -C /

ARG TZ=America/Denver
RUN ln -snf /usr/share/zoneinfo/${TZ} /etc/localtime && echo ${TZ} > /etc/timezone

# Preseed debconf so the postfix install is non-interactive
RUN ["/bin/bash", "-c", "debconf-set-selections <<< 'postfix postfix/mailname string test.com'"]
RUN ["/bin/bash", "-c", "debconf-set-selections <<< \"postfix postfix/main_mailer_type string 'Internet Site'\""]

RUN apt update && \
    apt upgrade -y && \
    DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends \
        postfix && \
    apt -y autoremove && \
    apt -y clean autoclean && \
    rm -drf /var/lib/apt/lists/* /tmp/* /var/tmp /var/cache

ENTRYPOINT ["/init"]
CMD ["postfix", "start-fg"]

Build the Image: docker build -t test .

Run the Image: docker run --name test --rm -d test
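One knob that may matter here (an assumption on my part): s6-overlay v2 documents environment variables such as S6_KILL_GRACETIME and S6_SERVICES_GRACETIME (3000 ms each by default) that control how long s6 waits before escalating to SIGKILL. A sketch of shortening them:

```dockerfile
# Hypothetical tuning: make s6 escalate to SIGKILL sooner after SIGTERM
ENV S6_KILL_GRACETIME=0 \
    S6_SERVICES_GRACETIME=0
```

If the 10 seconds persist even with these at 0, the delay is more likely Docker's own stop timeout kicking in because the SIGTERM never reaches the right process.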

Building docker image with jenkins but no registry

On my local machine I've installed docker and docker-compose. I start a Jenkins container, and I want to use it to build my Dockerfile into an image (let's say my-project:latest) so that the host machine can run this image directly (I mean NO need to push my-project:latest to any registry).

PS: I followed this guide, and Jenkins can access the host Docker daemon, but the agent could not be started when I built my project.


    Started container ID 9fb1cc36171748dfa864afc5eaaf63ed32ab0e2a359236c985b06a2440af4dc3 for node docker-000uq7rjeu1h2 from image: benhall/dind-jenkins-agent

    Feb 24, 2021 10:08:58 AM WARNING io.jenkins.docker.client.DockerMultiplexedInputStream readInternal
    Unexpected data on container stderr: Exception in thread "main"
    java.lang.UnsupportedClassVersionError: hudson/remoting/Launcher : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:803)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
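For context: "Unsupported major.minor version 52.0" means the agent's JVM is older than Java 8 (class-file version 52 is Java 8), while Jenkins remoting is compiled for Java 8. One way out, assuming the agent image is Debian/Ubuntu-based (an assumption; I have not inspected benhall/dind-jenkins-agent), is to layer a newer JRE on top:

```dockerfile
# Hypothetical agent image with a Java 8 runtime for hudson/remoting
FROM benhall/dind-jenkins-agent
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre-headless && \
    rm -rf /var/lib/apt/lists/*
```

Alternatively, switching the cloud configuration to a more current agent image sidesteps the rebuild entirely.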

docker – Which part of the software stack actually chooses the default settings for `/sys/fs/cgroup` mount passed to OCI runtime?


Now that docker supports cgroups v2 I would like to take full advantage of it.

When I run a container with a private cgroup namespace using --cgroupns=private, the nested cgroup2 filesystem created by the systemd scope gets mounted into the container's /sys/fs/cgroup path properly; however, docker mounts it read-only by default:

    cgroup2 on /sys/fs/cgroup type cgroup2 (ro,nosuid,nodev,noexec)


Technical considerations

I think this is legacy behaviour carried over from cgroupv1, where mounting the system-global cgroupfs into the container with rw rights would be a gaping security hole.

To my knowledge, a nested cgroup with delegated controllers should be able to write to /sys/fs/cgroup by design, without negative security implications.

Target use-cases

Right now, running containers with a (nested) systemd init or another container runtime inside requires multiple hacks that seriously weaken security and have portability problems.

Solving this problem would enable an easier, more secure and possibly even transparent mechanism for:

  • allowing containers with a nested systemd init to manage their own resources via slices and scopes – kind of like LXC's nested mode, but without the nasty security implication of bind mounting the real cgroupfs into the container
  • allowing nested containerized workloads with the help of fuse-overlayfs

The goal

My goal is to adjust the code so that the cgroup2 filesystem is mounted read-write when a container is run with a private cgroupns with delegated controllers.

The problem

The problem is that I don’t really know where to look. Which part of the stack is actually responsible for this? Is it docker, moby, containerd, runc or maybe systemd?

So far I’ve found the default settings in the moby project, but they are for cgroupv1.
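For orientation, the default in question is an OCI runtime spec mount entry; paraphrased from memory (not a verbatim copy of moby's source), it looks roughly like:

```json
{
  "destination": "/sys/fs/cgroup",
  "type": "cgroup",
  "source": "cgroup",
  "options": ["ro", "nosuid", "noexec", "nodev"]
}
```

Flipping "ro" to "rw" in whichever component emits this entry is the kind of change the goal above implies, which is why pinning down the responsible component matters.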

Where do I find the code that I need to modify and submit a PR to?

PS For a more detailed writeup see my answer on serverfault and my post on r/docker.

How to set disk limit for a container in docker via docker-compose

I want to limit the disk size of an image that I create via docker-compose.yml file, but I have not found any ways to do that.

I checked the docs of Docker Compose v3, but it has no such thing. Only v2 has an option called storage_opt, but it does not work in either docker-compose file version 3 or 2.

This is the error even when I change the version to 2:

Unsupported config option for services.httpd: ‘storage_opt’

The same error goes for version 3.

And this is my docker-compose.yml contents:

    version: '2'

    services:
      httpd:
        build: .
        restart: always
        ports:
          - "80:80"
        # CPU, Memory and disk limit
        mem_limit: "60m"
        cpus: "0.25"
        hostname: "dockering"
        storage_opt:
          size: '5G'

    networks:
      default:
        name: proxy

    volumes:
      db_data: {}

Is there a way to limit the size of each container so that it would not be bigger than for example 5GB?
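For completeness, the only per-container size cap I know of sits below compose: docker run --storage-opt size=5G, which needs a storage driver that supports it (for overlay2 this requires an xfs backing filesystem mounted with pquota). A daemon-wide sketch via /etc/docker/daemon.json (this applies to all containers, and the prerequisites above are assumptions to verify for your host):

```json
{
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.size=5G"]
}
```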

wordpress – Custom Ubuntu/Apache docker image in Azure App Service is using IIS?

I have a mystery. So I set up an Azure Web app on Linux that pulls from a custom docker image stored in Azure container repo. The dockerfile for the image is pretty simple – ubuntu:latest as the base, installs apache, php, and wordpress using apt-get, sets config files, etc. I can bring up the WordPress site just fine and do all the expected operations in the administration of it. Except none of the images load (png/jpg/svg/woff/woff2), even if I upload a new image in the wp-admin interface. Every image gives a 502 Bad gateway error.

Looking a bit deeper at the server response, the response headers for images say “Server: Microsoft-IIS/10.0.” Wth? How did IIS get on here? If I look at the response for the base WP home page (or any text/html/php file), it says “Server: Apache/2.4.41 (Ubuntu)” as expected.

The bad gateway error makes me think somehow IIS has been set up as a proxy for certain file types, but I haven’t set up any proxies in my configs (I may set up nginx in the future, but that’s a story for another day). I’m fairly new to Azure web apps, so I wouldn’t be surprised if I’m missing something obvious. Any ideas?