docker – Setup different user permissions on files copied in Dockerfile

I have this Dockerfile setup:

FROM node:14.5-buster-slim AS base

FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
COPY ./scripts/ ./
COPY ./scripts/ ./
CMD ["./"]

The problem I’m facing is that I can’t execute ./ because it’s only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the execute permission to the node user for security reasons.

I need the ./ to be externally executable by Kubernetes’ liveness & readiness probes.

How can I make it so? Restructuring folders or the like is fine with me.
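To illustrate the permission bits in play, here is a minimal sketch (the file name healthcheck.sh is hypothetical, since the script name is elided above): mode 700 limits execution to the owning user, while 755 keeps write access with the owner but lets any other UID, such as the one a probe runs under, execute it.

```shell
tmp=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmp/healthcheck.sh"

# 700: rwx for the owner only; other users cannot run the script
chmod 700 "$tmp/healthcheck.sh"
stat -c '%a' "$tmp/healthcheck.sh"    # prints 700

# 755: owner rwx, group/other r-x; any UID can run it, only the owner can modify it
chmod 755 "$tmp/healthcheck.sh"
stat -c '%a' "$tmp/healthcheck.sh"    # prints 755
```

The trade-off is exactly the one in the question: restricting execute to one user (700) also locks out anything else that needs to invoke the script.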

docker – Deployment using Dockerfile fails

I was trying to deploy my customized Saleor in a DigitalOcean droplet using a Dockerfile. For the production environment the Dockerfile is:

FROM node:10 as builder
COPY package*.json ./
RUN npm install
COPY . .
ENV API_URI ${API_URI:-http://localhost:8000/graphql/}
RUN API_URI=${API_URI} npm run build

FROM nginx:stable
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist/ /app/

For the development environment it is:

FROM node:10

COPY package*.json ./
RUN npm install
COPY . .
ENV API_URI ${API_URI:-http://localhost:8000/graphql/}

CMD npm start -- --host

I can deploy using the development Dockerfile but can’t deploy using the production one. It ends with the following errors:

npm ERR! errno 2
npm ERR! saleor-site@2.10.4 build: `webpack -p`
npm ERR! Exit status 2
npm ERR! 
npm ERR! Failed at the saleor-site@2.10.4 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-08-23T18_40_53_645Z-debug.log
ERROR: Service 'storefront' failed to build: The command '/bin/sh -c API_URI=${API_URI} npm run build' returned a non-zero code: 2
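As an aside, the usual way to make API_URI configurable at image build time is a build argument rather than an ENV with a default; a sketch (this is not necessarily the cause of the webpack failure, and the WORKDIR line is an assumption added so the later COPY --from=builder /app/dist/ has something to find):

```dockerfile
FROM node:10 as builder
WORKDIR /app
# Overridable at build time: docker build --build-arg API_URI=... .
ARG API_URI=http://localhost:8000/graphql/
ENV API_URI=${API_URI}
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
```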

manifest for python:python3.7-slim not found: manifest unknown when building from a Dockerfile

I’m trying to deploy a small Python script that uses Selenium on my GCP virtual machine, following this tutorial. Unfortunately, I can’t get past the requirements.txt step when building the container image. Indeed, as you can read:

mikempc3@instance-1:~$ sudo docker pull python:3.7-slim
3.7-slim: Pulling from library/python
6ec8c9369e08: Already exists 
401b5acb42e6: Already exists 
2e487de6656a: Pull complete 
519de614852e: Pull complete 
a3d1a61e090c: Pull complete 
Digest: sha256:47081c7bca01b314e26c64d777970d46b2ad7049601a6f702d424881af9f2738
Status: Downloaded newer image for python:3.7-slim
mikempc3@instance-1:~$ sudo docker build --tag my-python-app:1 .
Sending build context to Docker daemon  387.1MB
Step 1/6 : FROM python:python3.7-slim
manifest for python:python3.7-slim not found: manifest unknown: manifest unknown

Here is my requirements.txt file:


And here is the file I’m trying to containerize:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import ElementClickInterceptedException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.chrome.options import Options

import pandas as pd
import numpy as np

from collections import defaultdict
import json

import time

import requests
from requests.exceptions import ConnectionError

# Define Browser Options
chrome_options = Options()
chrome_options.add_argument("--headless") # Hides the browser window

# Reference the local Chromedriver instance
chrome_path = r"C:\Programs\chromedriver.exe"
driver = webdriver.Chrome(executable_path=chrome_path, options=chrome_options)

df = pd.read_csv('path/to/file')    

tradable = []
for ticker in df['Ticker']:
    print("ticker: ", ticker)
    location = "" + ticker.lower()
    try:
        request = requests.get(location)
        current_url = driver.current_url
        if current_url == location:
            print("no page but request= ", request)
    except ConnectionError:
        print("Ticker isn't tradable")

Here is my Dockerfile:

FROM python:python3.7-slim
COPY . /app
RUN pip install -r requirements.txt
CMD python ./
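Note that the pull that succeeded above used the tag 3.7-slim, whereas the Dockerfile asks for python:python3.7-slim, a tag that does not exist. A corrected sketch (the WORKDIR line is an addition so pip can find requirements.txt under /app, and the script name in CMD is a placeholder, since the original post elides it):

```dockerfile
FROM python:3.7-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "./scraper.py"]  # script name is a placeholder
```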

Here are my OS name and version:

mikempc3@instance-1:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION="9 (stretch)"

Here is my Linux kernel version:

mikempc3@instance-1:~$ uname -r

docker – Can’t change permissions with dockerfile

I have a web app running in Docker Container.
This is my docker-compose.yml :

    volumes:
        - ./app:/var/app
    build:
        context: .
        dockerfile: docker/php/Dockerfile
    restart: on-failure
    environment:
        APP_ENV: dev
        APP_DEBUG: 1
        SECRET: fzefzefezfezfzefzefzefze
        DB_USER: fzefez
        DB_PWD: fez45FZEfzefezfzefze1fez0fzefF
        DB_NAME: fzefez
        DB_VER: mariadb-10.4.12
        DB_PORT: 3306
        GRECAPTCHA: zefzefezfzefzefzefez
        MJ_APIKEY_PUBLIC: a3da8e8zzdfzfeze418b7cfzefezfzefzefezfzefezfec87a68fab203eaf94e3
        MJ_APIKEY_PRIVATE: fzefzefzefzefzefze512fze

And the Dockerfile

# ./docker/php/Dockerfile
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql

RUN docker-php-ext-install opcache
COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini

#PHP Config
COPY docker/php/php.ini-development /usr/local/etc/php/php.ini

RUN pecl install apcu

RUN apt-get update && \
    apt-get install -y 

RUN apt-get install -y \
  && docker-php-ext-install zip

RUN docker-php-ext-enable apcu \
    && docker-php-ext-install intl

# Install Composer
RUN curl -sS | php -- --install-dir=/usr/local/bin --filename=composer
COPY app/ /var/app

WORKDIR /var/app
RUN PATH=$PATH:/var/app/vendor/bin:bin

RUN chown www-data:www-data -R /var/app

But the files are still owned by root…
What am I doing wrong?
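One thing to note: the `./app:/var/app` volume in docker-compose.yml bind-mounts the host directory over /var/app at runtime, so the ownership set during the build (RUN chown) is hidden by whatever the host directory contains. If the bind mount is not the culprit, ownership can also be set at copy time instead of in a separate layer; a sketch:

```dockerfile
# Sets ownership in the same layer as the copy, instead of a later RUN chown
COPY --chown=www-data:www-data app/ /var/app
```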

docker – How to execute the updated Dockerfile from bash?

In Tutorial 2 from AWS, named MythicalMysfits, the Dockerfile uses python instead of python3, so I modified the Dockerfile as suggested and saved it. But when I build it from bash, the unmodified file is used, not the modified one. I tried 10 times within 2 hours, but I still don’t know why it always uses the old file.

I replaced line 4 with:

RUN apt-get install -y python3-pip python-dev build-essential

I replaced line 5 with:

RUN pip3 install --upgrade pip

I replaced line 10 with:

RUN pip3 install -r ./requirements.txt
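If the edited Dockerfile really is the one in the build directory, stale layer cache is a common reason an old version seems to be used; forcing a clean rebuild rules that out (a hedged suggestion, and the image name below is a placeholder):

```shell
docker build --no-cache -t mythical-mysfits .
docker image history mythical-mysfits   # lists the instructions each layer was built from
```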

dockerfile – Bash script to send certain data from AWS EC2 to CloudWatch

Here’s a little bash script I wrote, which basically checks something on an Amazon EC2 instance and reports data to CloudWatch every 60 seconds. I’ll run this inside a container. Everything works just fine, but I think it’s clunky: it doesn’t write anything to stdout/stderr (so no logs) and doesn’t handle any errors. I am open to refactoring this, or even rewriting it in Python, to make my Docker image more efficient. Any suggestions or criticism are most welcome.

Here’s the full script less some sensitive data that I’ve xxx’d:


INSTANCEID=$(curl --silent )
AZ=$(curl --silent )
REGION=$(echo "$AZ" | sed -e 's:\([0-9][0-9]*\)[a-z]*$:\1:')

putdata() {
    aws cloudwatch put-metric-data "xxx"
    sleep 60
}

while true; do
    HTTP_RESPONSE=$(curl --write-out "%{http_code}" --silent --output /dev/null "$URL")
    if [ "$HTTP_RESPONSE" = "200" ]; then
        putdata 0
    else
        putdata 1
    fi
done
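A quick check of the region-from-AZ extraction used above: an availability zone is its region plus a trailing letter, and the sed strips that letter (AZ value here is an example).

```shell
AZ="us-east-1a"
# \([0-9][0-9]*\) captures the trailing digits; [a-z]*$ matches the AZ letter,
# so the substitution keeps the region and drops the letter.
REGION=$(echo "$AZ" | sed -e 's:\([0-9][0-9]*\)[a-z]*$:\1:')
echo "$REGION"    # prints us-east-1
```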

Architecture: Is it considered good practice to pin package versions in something as important as a Dockerfile?

We had a production outage during a deployment because a load-balancer package in our top-level Dockerfile had released its latest version, which had a new API. Our application broke down at a time when most of our developers were out of the office, so another developer and I had to spend the night trying to fix the error. Because our latest build had many new features, it took us a few hours to discover that it was a version change in the Dockerfile that caused the entire application to fail.

Since we use CI/CD practices, I thought it might be a good idea to pin the version of this package in the Dockerfile, since it is a high-level component of the application. Which I did.

My reasoning is that in the future, when staff are on hand and available to solve any problem, we can update the higher-level packages in our Dockerfile (there are not many of them), carefully checking for versions that break the application.

Is this considered a good or bad practice? Why?
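As a concrete illustration of the trade-off being asked about, pinning in a Dockerfile is just an exact version on the install line (the package name and version below are hypothetical):

```dockerfile
# Unpinned: silently picks up whatever version is latest at build time
RUN apt-get update && apt-get install -y haproxy

# Pinned: builds are reproducible; an upgrade becomes a deliberate, reviewed edit
RUN apt-get update && apt-get install -y haproxy=2.0.13-2
```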

docker: cache files that are downloaded in a Dockerfile, for faster rebuilds

I am using apt-cacher-ng, which acts as a proxy between my Docker build and the apt package server, so all my apt-get downloads are cached.

I would like to do something similar for the files that I wget. For example, to install the latest version of Scala, I can’t get it from apt and need to install it from a .deb file downloaded from its website.

Is there an easy way to cache these calls (maybe all HTTP(S) calls made for file downloads) when I’m building with Docker?
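One approach (a sketch; the URL and paths are placeholders) is BuildKit’s cache mounts, which persist a directory across rebuilds so wget can skip files it already has:

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:stable
RUN apt-get update && apt-get install -y wget
# The cache mount keeps /downloads between builds; wget -nc (no-clobber)
# skips the download when the file is already in the cache.
RUN --mount=type=cache,target=/downloads \
    wget -nc -P /downloads https://example.com/scala.deb && \
    dpkg -i /downloads/scala.deb
```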

linux: the Dockerfile 'COPY' does not copy repository files

I’m running into a frustrating problem when trying to create a new Docker container. When I push my code to a GitHub repository and build it from there, the build completes without any errors. However, when I create and then run the new container on my server, I find that the necessary files and directories have not been copied into the new container, or into the local directories bound to the container’s directories (obviously). Strangely, the script referenced at the end of my Dockerfile does seem to be copied, because it runs; however, it fails because the other files that were supposed to be copied into the WORKDIR are not there. I have tried different file structures and COPY commands; my current Dockerfile is copied below. All files and directories to be copied are at the root of the repository at this time.

FROM node:latest

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY package*.json /usr/src/app/

RUN npm install -g @adonisjs/cli && \
  npm install

COPY . /usr/src/app/

VOLUME /usr/src/app

RUN cp /usr/src/app/ /usr/local/bin/ && \
  chmod 755 /usr/local/bin/ && \
  ln -s usr/local/bin/ / # backwards compat

CMD ["/"]

And my docker create and docker run commands:

docker create --name='ferdi-server' -p '3333:80' -v '/mnt/cache/appdata/ferdi-server':'/usr/src/app' 'xthursdayx/ferdi-server-docker' 

docker run -d --name='ferdi-server' -p '3333:80' -v '/mnt/cache/appdata/ferdi-server':'/usr/src/app' 'xthursdayx/ferdi-server-docker'

Any ideas? I’ve been trying to solve this for two days and I’m at a dead end.

FYI, here are the other file structures and COPY commands that I tried:

├── Dockerfile
│   └── api
│       ├── package.json
│       ├── package-lock.json
│       ├── .env.example
│       ├── etc
│       ├──
│       └── app
│       |   └── directory
│       └── config
│           └── app.js
│           └── etc.js

with command COPY api/ /usr/src/app/

├── Dockerfile
├── package.json
├── package-lock.json
├── .env.example
├── etc
│   └── app
│   └── directory2
│   └── config
│       └── app.js
│       └── etc.js

with command COPY . /usr/src/app/
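One thing worth checking, given the commands above: `-v '/mnt/cache/appdata/ferdi-server':'/usr/src/app'` bind-mounts the host directory over /usr/src/app at runtime, hiding everything COPY put there during the build (and the image also declares VOLUME /usr/src/app). Running once without the mount shows whether the files were actually copied into the image:

```shell
docker run --rm xthursdayx/ferdi-server-docker ls /usr/src/app
```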

docker: apt-get update fails in a Dockerfile

I am running a Dockerfile build and it fails when it reaches the apt-get update step. I’m using Debian and I’m behind a proxy; I don’t know if that could be the problem. I am getting the following:

Sending build context to Docker daemon  4.608kB

Step 1/16 : FROM tensorflow/tensorflow:1.12.0-rc2-devel
 ---> f643a5376d9c

Step 2/16 : ENV HTTPS_PROXY=http://3128/
 ---> Using cache
 ---> 259158810060

Step 3/16 : ENV HTTP_PROXY=http://3128/
 ---> Using cache
 ---> f26e1847ffbc

Step 4/16 : RUN git clone && mv models /tensorflow/models
 ---> Using cache
 ---> 071dcc53bb8f

Step 5/16 : RUN apt-get update
 ---> Running in b694f8183119

Err:17 xenial/main amd64 Packages
  Unable to connect to [IP: 80]
Err:25 xenial-updates/main amd64 Packages
  Unable to connect to [IP: 80]

Reading package lists...

E: Failed to fetch  Cannot connect to [IP: 80]
E: Failed to fetch  Cannot connect to [IP: 80]
E: Failed to fetch  Cannot connect to [IP: 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update' returned a non-zero code: 100

Does anyone know how I could solve this?
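For reference, proxy variables in a Dockerfile must have no spaces around `=` and need a full proxy URL including the host; the host below is a placeholder, since the build output above appears to elide it:

```dockerfile
ENV HTTP_PROXY=http://proxy.example.com:3128/
ENV HTTPS_PROXY=http://proxy.example.com:3128/
# Some tools only read the lowercase variants
ENV http_proxy=${HTTP_PROXY} https_proxy=${HTTPS_PROXY}
```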