manifest for python:python3.7-slim not found: manifest unknown when building from Dockerfile

I'm trying to deploy a small Python script using Selenium on my GCP virtual machine, following this tutorial. Unfortunately, I can't get past the requirements.txt when building the container image. Indeed, as you can read:

mikempc3@instance-1:~$ sudo docker pull python:3.7-slim
3.7-slim: Pulling from library/python
6ec8c9369e08: Already exists 
401b5acb42e6: Already exists 
2e487de6656a: Pull complete 
519de614852e: Pull complete 
a3d1a61e090c: Pull complete 
Digest: sha256:47081c7bca01b314e26c64d777970d46b2ad7049601a6f702d424881af9f2738
Status: Downloaded newer image for python:3.7-slim
docker.io/library/python:3.7-slim
mikempc3@instance-1:~$ sudo docker build --tag my-python-app:1 .
Sending build context to Docker daemon  387.1MB
Step 1/6 : FROM python:python3.7-slim
manifest for python:python3.7-slim not found: manifest unknown: manifest unknown

Here is my requirements.txt file:

selenium
pandas
numpy
requests

And here is the script I'm trying to containerize:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import ElementClickInterceptedException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.chrome.options import Options



import pandas as pd
import numpy as np

from collections import defaultdict
import json

import time

import requests
from requests.exceptions import ConnectionError

# Define Browser Options
chrome_options = Options()
chrome_options.add_argument("--headless") # Hides the browser window

# Reference the local Chromedriver instance
chrome_path = r"C:\Programs\chromedriver.exe"
driver = webdriver.Chrome(executable_path=chrome_path, options=chrome_options)

df = pd.read_csv('path/to/file')    

tradable = []
print(len(df['Ticker']))
for ticker in df['Ticker']:
    print("ticker: ", ticker)
    location = "https://www.etoro.com/markets/" + ticker.lower()
    try:
        request = requests.get(location)
        driver.get(location)
        time.sleep(2)
        current_url = driver.current_url
        if current_url == location:
            tradable.append(ticker)
        else:
            print("no page but request= ", request)
    except ConnectionError:
        print("Ticker isn't tradable")

Here is my Dockerfile:

FROM python:python3.7-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./find_tradable.py

Here are my OS name and version:

mikempc3@instance-1:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Here is my Linux kernel version:

mikempc3@instance-1:~$ uname -r
4.9.0-12-amd64
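The build output points at the FROM line: python3.7-slim is not a valid tag for the python image on Docker Hub, while the successful docker pull above shows the correct tag is 3.7-slim. A corrected Dockerfile, as a sketch based on the one in the question:

```dockerfile
# The image tag is "3.7-slim", not "python3.7-slim"
FROM python:3.7-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./find_tradable.py
```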

docker – Can’t change permissions with dockerfile

I have a web app running in Docker Container.
This is my docker-compose.yml :

php:
    volumes:
        - ./app:/var/app
    build:
        context: .
        dockerfile: docker/php/Dockerfile
    restart: on-failure
    environment:
        APP_ENV: dev
        APP_DEBUG: 1
        SECRET: fzefzefezfezfzefzefzefze
        DB_USER: fzefez
        DB_PWD: fez45FZEfzefezfzefze1fez0fzefF
        DB_NAME: fzefez
        DB_VER: mariadb-10.4.12
        DB_PORT: 3306
        GRECAPTCHA: zefzefezfzefzefzefez
        MJ_APIKEY_PUBLIC: a3da8e8zzdfzfeze418b7cfzefezfzefzefezfzefezfec87a68fab203eaf94e3
        MJ_APIKEY_PRIVATE: fzefzefzefzefzefze512fze
        MJ_EMAIL_DEFAULT: ff@ff.com

And the Dockerfile

# ./docker/php/Dockerfile
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql

#OPCACHE
RUN docker-php-ext-install opcache
COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini

#PHP Config
COPY docker/php/php.ini-development /usr/local/etc/php/php.ini

RUN pecl install apcu

RUN apt-get update && \
    apt-get install -y \
    zlib1g-dev

RUN apt-get install -y \
        libzip-dev \
        libicu-dev \
        zip \
    && docker-php-ext-install zip

RUN docker-php-ext-enable apcu \
    && docker-php-ext-install intl

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY app/ /var/app

WORKDIR /var/app
RUN PATH=$PATH:/var/app/vendor/bin:bin

RUN chown www-data:www-data -R /var/app

But the permissions still belong to the root user…
What am I doing wrong?
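One likely explanation (an assumption, since the runtime behavior isn't shown in full): RUN chown executes at build time and only changes ownership inside the image, but the bind mount ./app:/var/app in docker-compose.yml replaces /var/app with the host directory at runtime, so the build-time chown never applies to what the container actually sees. A sketch of fixing ownership at container start instead, using a hypothetical entrypoint.sh helper (not part of the original setup):

```dockerfile
# Sketch: change ownership when the container starts, after the bind
# mount is in place. entrypoint.sh is a hypothetical script, e.g.:
#   chown -R www-data:www-data /var/app
#   exec php-fpm
COPY docker/php/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
```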

docker – How to execute the updated Dockerfile from bash?

In Tutorial 2 from AWS, named MythicalMysfits, the Dockerfile uses python instead of python3, so I modified the Dockerfile as suggested and saved it. But when I run the build from bash, the unmodified file is executed, not the modified one. I tried 10 times within 2 hours, but I still don't know why it always executes the old file.

I replaced line 4 with:

RUN apt-get install -y python3-pip python-dev build-essential

Replaced line 5 with:

RUN pip3 install --upgrade pip

Replaced line 10 with:

RUN pip3 install -r ./requirements.txt
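A common cause of this symptom (an assumption here, since the exact build command isn't shown) is that Docker reuses cached layers, or that the build is run against a different context directory than the one holding the edited Dockerfile. Forcing a clean rebuild rules out the cache; the image tag below is illustrative:

```shell
# Rebuild without the layer cache; run this in the directory that
# contains the edited Dockerfile ("." is the build context).
docker build --no-cache -t mythicalmysfits/service .
```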

dockerfile – Bash script to send certain data from AWS EC2 to CloudWatch

Here’s a little bash script I wrote, which basically checks something on an Amazon EC2 instance and reports data to CloudWatch every 60 seconds. I’ll run this inside a container. Everything works just fine, but I think it’s clunky: it doesn’t write anything to stdout/stderr (so no logs) and doesn’t handle any errors. I am open to refactoring this, or even rewriting it in Python, to make my Docker image more efficient. Any suggestions or criticism are most welcome.

Here’s the full script less some sensitive data that I’ve xxx’d:

#!/bin/bash

INSTANCEID=$(curl --silent http://169.254.169.254/latest/meta-data/instance-id/)
AZ=$(curl --silent http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=$(echo "$AZ" | sed -e 's:\([0-9][0-9]*\)[a-z]*$:\1:')

URL="xxx"

putdata() {
    aws cloudwatch put-metric-data "xxx"      
    sleep 60
}
while true; do
    HTTP_RESPONSE=$(curl --write-out "%{http_code}" --silent --output /dev/null "$URL")
    if [ "$HTTP_RESPONSE" = "200" ]; then
        putdata 0

    else
        putdata 1
    fi
done
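On the region line specifically, the sed call can be replaced with plain parameter expansion, which avoids the quoting pitfalls above: a standard availability zone is just the region name plus one trailing letter. A small sketch (the sample AZ value is illustrative; on EC2 it comes from the metadata service):

```shell
# Hypothetical sample value; on a real instance this comes from
# http://169.254.169.254/latest/meta-data/placement/availability-zone
AZ="eu-west-1a"

# Strip the single trailing zone letter to get the region.
REGION="${AZ%?}"

echo "$REGION"
```

This assumes the standard single-letter zone suffix; zone names with longer suffixes (e.g. Local Zones) would still need the sed approach.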

Architecture: Is it considered good practice to code package versions into something as important as a Dockerfile?

We had a production outage during a deployment because a load-balancer package in our top-level Dockerfile had released its latest version, which had a new API. Our application broke down at a time when most of our developers were out of the office, so I and another developer had to spend the night trying to fix the error. Because our latest build had many new features, it took us a few hours to discover that it was a version change in the Dockerfile that caused the entire application to fail.

Since we use CI/CD practices, I thought it might be a good idea to pin the version of this package in the Dockerfile, since it is a high-level component of the application. Which I did.

My reasoning is that in the future, when staff are on hand and available to solve any problem, we can deliberately update the top-level packages in our Dockerfile (there are not many of them), carefully checking for versions that break the application.

Is this considered a good or bad practice? Why?
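Pinning is widely considered good practice for exactly the reason described: an unpinned latest (or floating major) tag lets an upstream release break production builds at an arbitrary time. A sketch of what the difference looks like in a Dockerfile; the image name and version here are illustrative, not from the original post:

```dockerfile
# Floats to whatever upstream publishes next -- rebuilds are not reproducible:
# FROM traefik:latest

# Pinned to an exact tag -- upgrades happen deliberately, in a planned change:
FROM traefik:2.10.7
```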

docker: cache files downloaded in a Dockerfile, for faster rebuilds

I am using apt-cacher-ng, which acts as a proxy between my Docker build and the apt package server, so all my downloads through apt-get are cached.

I would like to do something similar for the files that I wget. For example, to install the latest version of Scala, I can't get it from apt and I need to install it from a .deb file downloaded from its website.

Is there an easy way to cache downloads (maybe all HTTP(S) calls made for file downloads) when I'm building with Docker?
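With BuildKit, one option is a cache mount, which persists a directory across rebuilds of the same step; wget -N then only re-downloads when the remote file has changed. A sketch, assuming wget is available in the base image (the URL is a placeholder, since the post does not name the exact .deb):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:stable-slim
# /downloads survives between rebuilds via the BuildKit cache mount;
# wget -N skips the download when the cached copy is up to date.
RUN --mount=type=cache,target=/downloads \
    wget -N -P /downloads https://example.com/scala.deb \
    && dpkg -i /downloads/scala.deb
```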

linux: the Dockerfile 'COPY' does not copy repository files

I'm running into a frustrating problem when trying to create a new Docker container. When I upload my code to a GitHub repository and use hub.docker.com to build it, the build completes without any errors. However, when I try to create and then run the new container on my server, I find that the necessary files and directories have not been copied into the new container, nor into the local directories linked to the container's directories (obviously). Strangely, the init.sh script referenced at the end of my Dockerfile does seem to be copied, because it is executed; however, it fails because the other files that were supposed to be copied into the WORKDIR are not there. I have tried different file structures and COPY commands; my current Dockerfile is below. All files and directories to be copied are at the root of the repository at this time.

FROM node:latest

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY package*.json /usr/src/app/

RUN \
  npm install -g @adonisjs/cli && \
  npm install

COPY . /usr/src/app/

VOLUME /usr/src/app

RUN \
  cp /usr/src/app/init.sh /usr/local/bin/ && \
  chmod 755 /usr/local/bin/init.sh && \
  ln -s usr/local/bin/init.sh / # backwards compat

EXPOSE 8080
CMD ["/init.sh"]

And my docker create and docker run commands:

docker create --name='ferdi-server' -p '3333:80' -v '/mnt/cache/appdata/ferdi-server':'/usr/src/app' 'xthursdayx/ferdi-server-docker' 

docker run -d --name='ferdi-server' -p '3333:80' -v '/mnt/cache/appdata/ferdi-server':'/usr/src/app' 'xthursdayx/ferdi-server-docker'

Any ideas? I've been trying to solve this for two days and I'm in a dead end.


FYI other file structures and COPY commands that I tried

├── Dockerfile
├── init.sh
│   └── api
│       ├── package.json
│       ├── package-lock.json
│       ├── .env.example
│       ├── etc
│       ├── init.sh
│       └── app
│       |   └── directory
│       └── config
│           └── app.js
│           └── etc.js

with command COPY api/ /usr/src/app/

├── Dockerfile
├── init.sh
├── package.json
├── package-lock.json
├── .env.example
├── etc
├── init.sh
│   └── app
│   └── directory2
│   └── config
│       └── app.js
│       └── etc.js

with command COPY . /usr/src/app/
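A likely cause, given the commands above: -v '/mnt/cache/appdata/ferdi-server':'/usr/src/app' bind-mounts a host directory over /usr/src/app, hiding everything COPY put there at build time. init.sh still runs because it was copied to /usr/local/bin, outside the mount. As a quick check, running the container without the bind mount should show the copied files:

```shell
# Without -v, the image's own /usr/src/app (with the COPY'd files) is visible.
docker run --rm xthursdayx/ferdi-server-docker ls /usr/src/app
```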

docker: apt-get update fails in a Dockerfile

I am running a Dockerfile build and it fails when it reaches apt-get update. I'm using Debian and I'm behind a proxy; I don't know if that is the problem. I am getting the following:

Sending build context to Docker daemon  4.608kB

Step 1/16 : FROM tensorflow/tensorflow:1.12.0-rc2-devel
 ---> f643a5376d9c

Step 2/16 : ENV HTTPS_PROXY=http://wwwproxy.se.axis.com:3128/
 ---> Using cache
 ---> 259158810060

Step 3/16 : ENV HTTP_PROXY=http://wwwproxy.se.axis.com:3128/
 ---> Using cache
 ---> f26e1847ffbc

Step 4/16 : RUN git clone https://github.com/tensorflow/models.git && mv models /tensorflow/models
 ---> Using cache
 ---> 071dcc53bb8f

Step 5/16 : RUN apt-get update
 ---> Running in b694f8183119

Err:17 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
Err:25 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
  Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]

Reading package lists …

E: Failed to fetch http://security.ubuntu.com/ubuntu/dists/xenial-security/main/binary-amd64/Packages  Unable to connect to security.ubuntu.com:http: [IP: 91.189.91.23 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/main/binary-amd64/Packages  Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/binary-amd64/Packages  Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update' returned a non-zero code: 100

Does anyone know how I could solve this?
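One thing worth checking (an assumption based on the log): the Dockerfile only sets the uppercase HTTP_PROXY/HTTPS_PROXY variables, and apt reads the lowercase forms. A sketch of adding the lowercase variables alongside the existing ones, reusing the proxy URL from the log:

```dockerfile
# apt honors the lowercase proxy variables; the uppercase ones alone
# are not enough for apt-get to reach the proxy.
ENV http_proxy=http://wwwproxy.se.axis.com:3128/ \
    https_proxy=http://wwwproxy.se.axis.com:3128/
```

Alternatively, apt can be configured directly via Acquire::http::Proxy in an apt.conf snippet inside the image.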

Disable Docker's DOCKER_BUILDKIT / dockerfile-specific experimental build messages

docker – Dockerfile VOLUME not visible on the host

In the WordPress Dockerfile, there is a VOLUME /var/www/html declaration. If I understand correctly, this means that the WordPress files (in /var/www/html) should be mapped to the directory on my host that contains the docker-compose.yml, BUT this is not happening. Do you know why?

I created my own WordPress Dockerfile that extends the original WordPress Dockerfile, where you will find said VOLUME /var/www/html statement on line 44 (https://github.com/docker-library/wordpress/blob/b3739870faafe1886544ddda7d2f2a88882eeb31/php7.2/apache/Dockerfile).

I even tried adding the VOLUME /var/www/html statement at the bottom of my Dockerfile, as you can see below. I added it just in case, but I do not think anything goes wrong there.

FROM wordpress:4.9.8-php7.2-apache

##########
# XDebug #
##########
# Install
RUN pecl install xdebug-2.6.1; \
    docker-php-ext-enable xdebug
# Set up
RUN echo "error_reporting = E_ALL" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_startup_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.idekey = \"PHPSTORM\"" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_port = 9000" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_enable = 1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_host = docker.for.win.localhost" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
#RUN echo "xdebug.remote_autostart = 1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini ##

###########
# PHPUnit #
###########
RUN apt-get update; \
    apt-get install -y wget

RUN wget https://phar.phpunit.de/phpunit-7.4.phar; \
    chmod +x phpunit-7.4.phar; \
    mv phpunit-7.4.phar /usr/local/bin/phpunit

RUN phpunit --version

##################
# PHP Codesniffer #
##################
RUN curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar; \
    mv phpcs.phar /usr/local/bin/phpcs; \
    chmod +x /usr/local/bin/phpcs

############
# Composer #
############
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"; \
    php composer-setup.php; \
    php -r "unlink('composer-setup.php');"; \
    mv composer.phar /usr/local/bin/composer

#################
# Install Nodejs #
#################
RUN apt-get install -y gnupg2; \
    curl -sL https://deb.nodesource.com/setup_11.x | bash -; \
    apt-get install -y nodejs

#################
# Install Grunt #
#################
RUN npm install -g grunt-cli

#################
# Customization of BASH #
#################
RUN echo "alias ll='ls --color=auto -lA'" >> ~/.bashrc

VOLUME /var/www/html
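For context on the question above: a VOLUME instruction in a Dockerfile only creates an anonymous volume managed by Docker; it never maps to the directory holding docker-compose.yml. To see the WordPress files on the host, the path has to be bind-mounted explicitly in the compose file. A minimal sketch (the service name and host path are illustrative):

```yaml
services:
  wordpress:
    build: .
    volumes:
      # Explicit bind mount: container's /var/www/html appears in ./wordpress
      - ./wordpress:/var/www/html
```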