Getting started with Docker containers and images to deploy a Python application (new automation system – part 16)

  1. Preface
    Hi all,
    As I continue developing the new automation system – which at run time, from an application perspective, is essentially a Python 3 application – I have reached a situation where I need to deploy (“have”) several “instances” of the automation application running at the same time. To do so, I have decided to leverage the great power that comes with containers. In this post I will go over the basic settings required to deploy the automation project (application) within its “desired environment” (i.e. all the settings, other than the project itself, that the application needs) using a Docker container.

    1. Logistics:
      1. The Docker engine (will be described later) will be installed on my Ubuntu 16.04 development machine
  2.  Installing Docker engine
    1. Note about containers:
      I have decided not to go over, in this post, all the theory that should (must?) be introduced regarding containers and Docker – so I will simply “dive” into the hands-on technical details right away.
      Note, however, that Docker comes in two “flavors” – Community Edition (CE) and Enterprise Edition (EE). I will use the CE.
    2. The installation procedure I have performed is as follows:
      1. Fetch the latest Docker Engine CE installation script:
        curl -fsSL https://get.docker.com -o get-docker.sh
      2. Switch to root account:
        sudo su
      3. Run the Docker Engine CE installation script:
        sh get-docker.sh
      4. In order to be able to run Docker as a non-root user, add your user (or any other non-root user that might want to run the Docker engine) to the docker group as follows (here guya is the user to add):
        sudo usermod -aG docker guya
        newgrp docker
      5. Check Docker version:
        docker version
      6. Run the docker “hello world” using the command:
        docker run hello-world
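Once the steps above are done, a quick sanity check can confirm that the non-root setup works. This is only a sketch – it skips quietly on machines where Docker is not installed:

```shell
# Post-install sanity check (a sketch; skips quietly where Docker is absent).
if command -v docker >/dev/null 2>&1; then
    # The daemon should answer without sudo once the group change is active
    docker version
    docker run --rm hello-world
    # Confirm the current user is a member of the "docker" group
    id -nG | tr ' ' '\n' | grep -x docker
else
    echo "docker not installed - nothing to check"
fi
```

If the group membership check fails, log out and back in (or run newgrp docker, as above) so the new group takes effect.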
  3. Docker image requirements for the automation project
    In order to figure that out, let's first take a look at a high-level comparison of the container model (and Docker in particular) vs. the virtual machine model, so that we can deduce exactly what the automation Docker image requires.
    The way a Docker container “looks” vs. a VM is described in the figure below:
    Note that, in my case, each one of the “App X” boxes above will be, at run time, the running automation project – in general, a Python application on a Linux host. In addition, the host will need some “system prerequisites”, such as the SSH capabilities that the automation uses, and obviously some 3rd-party Python packages.
    So at the very start, the automation project Docker image will need to consist of:

    1. A Linux host, say Ubuntu 18.04 (with which I have been working so far).
    2. Python 3 (preferably the latest, which nowadays is Python 3.8).
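The first requirement can be checked directly against the official base image. A guarded sketch (it only runs where a Docker engine is available):

```shell
# Guarded sketch: confirm the chosen base image really is Ubuntu 18.04.
if command -v docker >/dev/null 2>&1; then
    docker pull ubuntu:18.04
    # Read the release info from inside a throw-away container
    docker run --rm ubuntu:18.04 grep -E '^(NAME|VERSION_ID)=' /etc/os-release
fi
```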
  4. Docker containers and images
    Very briefly speaking, a Docker container is a “running instance” of a Docker image. One can think of it (the way I see it) by analogy to C++ terms: the image is the C++ template, while the containers are the objects instantiated from that template at run time. Docker images are “maintained” (structured) in layers – see the figure below for an example:
    As you can see, the idea here is to let images “accumulate” layer by layer, rather than having one monolithic image that will (most likely) contain lots of stuff that is not really needed by everyone who uses it.
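The template-vs-objects analogy can be seen in practice by starting several containers from the same image. A guarded sketch (the container names a1 and a2 are arbitrary examples):

```shell
# One image, many containers - the template-vs-objects analogy in practice.
# The container names a1/a2 are arbitrary examples.
if command -v docker >/dev/null 2>&1; then
    docker run -d --name a1 ubuntu:18.04 sleep 30
    docker run -d --name a2 ubuntu:18.04 sleep 30
    # Both running containers list the same image in the IMAGE column
    docker ps --filter ancestor=ubuntu:18.04
    docker rm -f a1 a2
fi
```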
  5. Dockerfiles – high level
    Perhaps the most powerful way to compose a Docker image (which will later be used to run a Docker container) is via a “special” file called a Dockerfile.
    Some notes about it:

    1. By default, its name MUST be exactly Dockerfile (a different name can be passed to docker build via the -f flag).
    2. The location (i.e. the folder where it resides) is crucial: when it is used to “build” an image, ALL files in this folder and its sub-folders are taken into “account” as well (they form the build context sent to the Docker daemon). So usually you will place it in an empty folder “just for it” (plus any other files that might be added there on purpose).
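The notes above can be sketched as a small shell session. The folder and tag names (my-build-ctx, ctx-demo) are just examples, and the docker build step is guarded so it only runs where the engine is installed:

```shell
# Keep the Dockerfile in a dedicated folder so the build context stays small.
# Folder and tag names (my-build-ctx, ctx-demo) are just examples.
mkdir -p my-build-ctx

# A one-line Dockerfile, only to illustrate the layout
cat > my-build-ctx/Dockerfile <<'EOF'
FROM ubuntu:18.04
EOF

# Everything under my-build-ctx/ is sent to the daemon as the build context,
# so keep unrelated files out of it.
if command -v docker >/dev/null 2>&1; then
    docker build -t ctx-demo my-build-ctx
fi
```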
  6. Dockerfile – requirements
    As a reminder, what the container of my interest needs is:

    1. Ubuntu 18.04
    2. Python 3 – new, but not newest –>  nowadays, I will go with Python 3.8
  7. Dockerfile – contents
    The very “straightforward” Dockerfile that fulfills the requirements mentioned above is as follows:

    # This Dockerfile is used in order to build (construct) the very minimal environment that is needed to host the Python automation code.
    # For that, it has the following main requirements:
    # 1) Ubuntu 18.04
    # 2) Python 3.8
    # 1) 
    FROM ubuntu:18.04
    # 2.1) get general updates and set the Python repository as one of apt repositories
    RUN apt-get update && \
      apt-get install -y software-properties-common && \
      add-apt-repository ppa:deadsnakes/ppa
    # 2.2) Install Python 3.8 and its "fundamental" modules
    RUN apt-get install -y build-essential python3.8 python3.8-dev python3-pip python3.8-venv
    RUN apt-get install -y git
    # 2.3) update pip
    RUN python3.8 -m pip install pip --upgrade
    RUN python3.8 -m pip install wheel
    # 2.4) by default Ubuntu 18.04 comes with Python 3.6 so after installing the newer Python 3.8 "switch" preferences between them
    # so that Python 3.8 will be used when using the "command" python3
    RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1 && \
      update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2 && \
      update-alternatives --set python3 /usr/bin/python3.8

    Without going into “too many” technical details (the Dockerfile is a topic that can easily be elaborated on much further), there are two “Dockerfile commands” (instructions) used here:

    1. FROM – which MUST be the first instruction; it specifies the “base image” upon which the image composed by this Dockerfile will be built. You guessed right – in this case it is Ubuntu 18.04.
    2. RUN – each RUN instruction creates an additional layer (“sub layer”) of the final image. Here, several RUN instructions are used, each performing a separate sub-step of the Python “installation”. Each 2.x comment more or less speaks for itself regarding what it does and why.
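To close the loop, the Dockerfile can be built and checked. A guarded sketch – the folder ./automation-ctx (holding the Dockerfile above) and the tag automation-base are example names I chose here:

```shell
# Guarded sketch: build the image from the Dockerfile above and verify that
# "python3" now resolves to Python 3.8. The folder ./automation-ctx and the
# tag automation-base are example names.
if command -v docker >/dev/null 2>&1 && [ -d ./automation-ctx ]; then
    docker build -t automation-base ./automation-ctx
    # The update-alternatives step should make this print "Python 3.8.x"
    docker run --rm automation-base python3 --version
fi
```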
  8. Conclusion
    In this post I have given a very high-level, straightforward introduction to containers, Docker images and Dockerfiles. I have then composed an “initial” (i.e. more fine-tuning adjustments might take place) Dockerfile for the needs of my Python application, which I wish to run in a container.

a) Official Docker installation notes using the “installation script”
b) Change the default Python version on Ubuntu

The picture: The city of Ica, Peru.
