
Rummager & docker-compose

If your docker environment consists of more than one container, knowing docker-compose will make your life easier. In fact, even with one or two containers it turns out to be a very helpful tool. What makes "bare" docker cumbersome, namely the multitude of options and parameters that have to be given when starting containers in any setup more advanced than "hello world", is handled clearly and comfortably by docker-compose. Thanks to this tool we can manage our environment using defined names/aliases, without having to remember (and constantly retype) the implementation details. Of course we could do the same with clever shell scripts, but yaml (the format of the configuration file used by docker-compose) seems more… elegant, and elegance is an important thing.

As we can read on the website dedicated to docker-compose, it is a tool for configuring and managing multi-container environments. The "configuration" means all the parameters of the docker commands that we would normally have to type on the command line; here we save them in a configuration file. Once such a file is prepared, working with our environment (starting / restarting / stopping containers) comes down to issuing the right command together with the container name, and all the additional parameters are read from the file. This way the commands (quoted from memory):

$ docker build -t rummager-service ./images/service
$ docker run --name rum-rumsrv \
        -v ${SRC_RUMSRV}:/project/rumsrv \
        -p ${LOCALHOST}:80:80/tcp \
        --network rum-net --ip ${NETWORK_ADDRESS}.${HOST_IP_RUMSRV} \
        --network-alias rumsrv.${DOMAIN_NAME} \
        --network-alias rumsrv.local \
        rummager-service

(I omit the initialization of the variables used here, i.e. SRC_RUMSRV, LOCALHOST, DOMAIN_NAME, etc., and the creation of the "rum-net" network.)

are replaced with a single command (provided, of course, that a proper docker-compose.yml configuration file exists):

$ docker-compose up -d rum-rumsrv

The above command reads the variables from the ".env" file and creates the "rum-net" network if it does not exist yet (according to the configuration stored in the docker-compose.yml file). Isn’t that more comfortable?
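
For illustration, a service entry equivalent to the docker run command shown earlier could look more or less like this. This is only a sketch: the exact contents of the project’s docker-compose.yml may differ, and the ./images/service path is taken from the docker build command above:

  rum-rumsrv:
    build: ./images/service          # context containing the Dockerfile
    image: rummager-service          # name under which the locally built image is stored
    container_name: rum-rumsrv
    volumes:
      - ${SRC_RUMSRV}:/project/rumsrv
    ports:
      - "${LOCALHOST}:80:80/tcp"
    networks:
      rum-net:
        aliases:
          - rumsrv.${DOMAIN_NAME}
          - rumsrv.local
        ipv4_address: ${NETWORK_ADDRESS}.${HOST_IP_RUMSRV}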

One thing should be noted here: docker-compose does not in any way replace the Dockerfile files! They remain the main "tool" for building images. Docker-compose improves the organization of the setup and of the work with the containers (built from those images), so it helps in an area that already lies outside the influence of the Dockerfile files (more details on docker can be found in my previous article).

Let’s focus on the most important issues. While writing about docker-compose I used my own project as an example: rummager (unfortunately the project page is only in Polish for the time being), or rather the development environment prepared for it. I will not go into the technical details of the structure and syntax of the file itself; I would only like to discuss the configuration "designed" inside it, i.e. what was included and why.

Let’s start

The article describes version 1.2 of the development environment.

The variables used to configure the whole environment are stored in the env-local file; changes to the docker-compose.yml file itself should not be necessary (if they are, let me know, because it means something apparently needs fixing). Docker-compose automatically looks for a .env file in the directory containing the docker-compose.yml file, and in this case .env is a symbolic link to env-local (only because files whose names start with a dot are so-called hidden files on unix/GNU systems, so a plain ".env" file would not always be visible). References to variables in the docker-compose.yml file (names starting with the $ sign) refer to the variables defined in env-local, so in the entry:

  - mysql.${DOMAIN_NAME}

docker-compose substitutes the value of DOMAIN_NAME from the env-local file in place of ${DOMAIN_NAME}.
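
To make this concrete, env-local contains entries along these lines. The values below, apart from NETWORK_ADDRESS (whose default is quoted later in this article), are purely illustrative assumptions and not the project’s actual defaults:

# fragment of env-local (illustrative)
NETWORK_ADDRESS=172.29.4
DOMAIN_NAME=rummager.local
LOCALHOST=127.0.0.1
HOST_IP_MYSQL=10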

In the case of the rummager app, the docker-compose.yml file consists of two sections, networks and services (a "volumes" section can also be defined, but it is not used here). Let’s look at these sections now.

Network (section: networks)

The project configuration assumes a class C network whose first three bytes are set in the env-local configuration file, here 172.29.4. The docker-compose.yml file then only supplies the appropriate "complements" (the fourth byte, or alternatively the netmask in CIDR format) of three values related to the network (a sketch of the resulting networks section follows the list):

  • subnet – here ${NETWORK_ADDRESS}.0/24, so with the default value from env-local (NETWORK_ADDRESS=172.29.4) the network used is 172.29.4.0/24.
  • gateway – the address of the gateway.
  • ip_range – this setting determines the pool of addresses for dynamic allocation. From this pool come the addresses of all containers attached to the given network that have not been assigned a static address. Here the range is set to ${NETWORK_ADDRESS}.128/25, so dynamic addresses are assigned from 172.29.4.128 to 172.29.4.254.
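
A minimal sketch of how such a networks section can be written (the gateway value below is my assumption; the project file may define it differently):

networks:
  rum-net:
    driver: bridge
    ipam:
      config:
        - subnet: ${NETWORK_ADDRESS}.0/24
          gateway: ${NETWORK_ADDRESS}.1
          ip_range: ${NETWORK_ADDRESS}.128/25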

Containers (section: services)

The whole environment consists of a few containers, each briefly described below.
 
rum-mysql
 
Database: the container with the MySQL server, based on the latest official MySQL 5.x image available on the dockerhub.com service. The container can be built and run with the command:

$ docker-compose up -d rum-mysql

where "rum-mysql" is the name of the service (and container) defined in the configuration file, from which its start parameters are read. It is important to remember that when using docker-compose we must be in the same directory as the docker-compose.yml file we are working with (unless we give the path to the file explicitly). One more important detail: if the network defined in the configuration file does not yet exist on the host system, it is created automatically when the first container is started.
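
A few everyday invocations, for illustration (the file path below is only a placeholder):

$ docker-compose -f /path/to/docker-compose.yml up -d rum-mysql   # explicit path to the configuration file
$ docker-compose ps                  # list the containers of the environment
$ docker-compose stop rum-mysql      # stop a single container
$ docker-compose logs -f rum-mysql   # follow its logs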

In this particular environment the MySQL server data is kept in the file system (layer) of the container. This is not the only possible solution, and it is not even the recommended one! The more "elegant" approach is to place the data outside the container, which lets it survive removal of the container (the Docker documentation recommends "stateless" containers). Here, however, the data is not very important and the development environment always "starts" with a clean database. Since we do not recreate the state of the database, only its structure, in the described version of the environment I chose the simpler solution, without an external (relative to the container) location for the data files.
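
For completeness, a sketch of what the external-data variant could look like (not used in the described environment; /var/lib/mysql is the data directory of the official MySQL image):

  rum-mysql:
    image: mysql:5
    volumes:
      - rum-mysql-data:/var/lib/mysql   # named volume, survives removal of the container
# plus a section at the top level of docker-compose.yml:
volumes:
  rum-mysql-data: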

Let’s look at the full configuration of the rum-mysql container:

  rum-mysql:
    image: mysql:5
    container_name: rum-mysql
    networks: 
      rum-net:
        aliases:
          - mysql.${DOMAIN_NAME}
        ipv4_address: ${NETWORK_ADDRESS}.${HOST_IP_MYSQL}
    ports:
      - "${LOCALHOST}:3306:3306/tcp"
    environment:
      MYSQL_ROOT_PASSWORD: root

The "image" parameter has already been discussed; I will only add that for the remaining containers (defined as part of this environment) it is used in a slightly different way. There it does not point to an image on which our container is based (one pulled from a remote repository, by default dockerhub.com, as in the case of the "rum-mysql" container), but gives the name under which the locally built image is stored in the local image repository. This is because in all the other cases the "build" parameter is also given (indicating the build context of the docker image, that is, the directory where the Dockerfile is located). So depending on whether "build" is present or not, the "image" parameter has a different meaning.
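
This difference also shows up in how the images are obtained, for example (rum-worker is used here only as one of the locally built services):

$ docker-compose build rum-worker   # builds the image named by "image:" from the "build:" context
$ docker-compose pull rum-mysql     # a service without "build:" is pulled from the remote repository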

"Environment" is a key containing variables set in the container’s session; MYSQL_ROOT_PASSWORD sets the password of the root user (in the sense of the MySQL server administrator) to "root". These variables, their names and meanings, obviously depend on the application (their description can usually be found on the page of the given image on the dockerhub.com service; in the case of MySQL see this page).

"Ports": the meaning of this parameter is identical to the "-p" option of the docker command (except that here we can give many mappings as consecutive list items). The choice of the variable name used here, "LOCALHOST", is perhaps not the most accurate: in reality we can use not only addresses from the 127.0.0.0/8 range but also the addresses of interfaces defined on the host system.

"Networks": the network configuration may be limited to giving just the name of one of the defined networks (here "rum-net"), as in the definition of the "rum-worker" container, or it may be more elaborate. In general docker-compose gives access to all the capabilities of the "docker create" command, sometimes, as in the case of the network configuration, by organizing the parameters into larger structures; the details can be found in the documentation. One thing worth mentioning: so-called "user defined networks" (as in this case) use the internal DNS (defined and working as part of the docker application). Both container names (e.g. "rum-mysql") and the aliases defined in a given network (here mysql.${DOMAIN_NAME}) are resolved by the internal DNS and are visible to all containers within that network. So other containers defined in the "rum-net" network can refer to the MySQL server using either of the two names. Of course this can be used more broadly: the host system can also make use of docker’s internal DNS, but that requires additional configuration (both on the host side and on the docker side; in the latter case we could e.g. use a dnsmasq container). Currently this environment does not have such a configuration, though I do not rule it out.

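A quick way to see the internal DNS at work from another container in the same network (assuming ping is available in that image, and with DOMAIN_NAME substituted by its value from env-local):

$ docker exec -it rum-worker ping -c 1 rum-mysql
$ docker exec -it rum-worker ping -c 1 mysql.${DOMAIN_NAME}
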
If the network configuration structure does not set a static address for the container (ipv4_address), one is assigned automatically from the "ip_range" pool of the given network.

rum-rumsrv
 
The container with the service used by the "worker" processes (see the rum-worker container). It contains an HTTP server with a PHP interpreter; the code of the service (which should be cloned beforehand into the directory set in env-local by SRC_RUMSRV) is mounted inside the container at /project/rumsrv (see also the README.md file).
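
In other words, the typical first steps could look like this (the repository URL is omitted here; see the project page and README.md):

$ git clone <repository-url> ${SRC_RUMSRV}   # clone the service code into the directory set by SRC_RUMSRV
$ docker-compose up -d rum-rumsrv            # the code is mounted inside the container at /project/rumsrv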

rum-smtp
 
A container used only for tests: it simulates the behaviour of the SMTP servers "queried" by the app. All connections from the workers to SMTP servers are redirected to this container (by an iptables entry in the rum-worker container). The iptables rules of the rum-smtp container then realize, at random, one of three connection scenarios (a sketch of such rules follows the list):

  1. the connection is rejected with a REJECT message.
  2. the connection is dropped silently (DROP).
  3. the connection is accepted and a normal SMTP session begins.
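
A sketch of how such randomized rules can be expressed with the iptables "statistic" module (these are not the project’s actual rules; SMTP port 25 assumed):

# roughly one third of new SMTP connections are rejected explicitly
iptables -A INPUT -p tcp --dport 25 --syn -m statistic --mode random --probability 0.33 -j REJECT
# half of the remaining new connections (another third of the total) are dropped silently
iptables -A INPUT -p tcp --dport 25 --syn -m statistic --mode random --probability 0.50 -j DROP
# connections that fall through reach the SMTP daemon and a normal session begins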

rum-worker
 
The container in which the rummager app is started, i.e. the process managing the worker threads (each worker runs in a separate thread). The rummager process starts automatically when the container starts, via the "start.sh" script indicated by the CMD instruction in the Dockerfile:
 

CMD ["/bin/bash", "-c", "/usr/local/bin/start.sh"]

 
rum-tech
 
A technical container which does not perform any function in the project itself, but contains the software needed to build or prepare the project’s components, such as:

  1. the database – the bin/createdb.sh script uses this container when recreating the database.
  2. installation of the rum-rumsrv dependencies – composer (composer itself is not installed automatically; if the user does not have composer installed on the local machine, it is best to install an up-to-date version inside this container, following the instructions on the composer website, and use it from exactly that place – see the sketch below the list).
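
For example, something along these lines (the working directory below is an assumption; adjust it to wherever the service code is mounted in rum-tech):

$ docker-compose exec rum-tech bash        # a shell inside the rum-tech container
$ composer install -d /project/rumsrv      # after installing composer there, as described on its website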

Short summary

Docker-compose is powerful, but there will surely be those who prefer to manage the environment with shell scripts or with python (the language docker-compose itself is written in, I believe). Python, supported by the right modules, has great potential and seems to have no limits, which cannot be said about docker-compose (I am currently working with python in the docker context myself, in the docker-image-builder project). However, if we want to use the more advanced possibilities of python, we obviously have to start learning to program in this language, and not everyone may like that. I leave aside the fact that advanced possibilities are useful in advanced applications; for basic (or even intermediate) usage docker-compose works perfectly! Besides, probably every "live" environment will be a kind of mixture of all the technologies mentioned here: configuration in docker-compose.yml (as the base), shell scripts for various tasks in the environment (preparing the database, etc…) and perhaps programs written in python (such as docker-image-builder)… but I will try to write about all of that another time. For now, thank you for your attention and have fun!