Packing and Shipping Code With Docker

Nethania
4 min read · Jun 4, 2021
Photo by Henry Be on Unsplash

As software developers, we’ve all had to run someone else’s code on our own machines. Docker is a popular tool that makes this easier and safer. It allows us to encapsulate a project, including its code and dependencies, and ship it to any machine with no hassle.

Why Docker?

Faster Delivery Cycles

Docker containers decrease deployment time significantly, which saves developers considerable time and cost.

Ability to Run Anywhere

As discussed above, one of the main advantages of Docker is that it lets us run a project on any machine, regardless of the operating system.

Simple Configuration

Packing and shipping a project with Docker is pretty straightforward. We’ll discuss how to do it later in this article.

Resources

The Docker community shares numerous public images that everyone can find on Docker Hub. These images are ready to use, and it’s easy to find the ones we need.

Docker Concepts

These are the basic concepts you need to get an overview of how Docker works.

Docker Container

Docker containers work like virtual machines in the sense that both aim to isolate a project along with its dependencies so that it can run anywhere. The main difference is that while a VM ships its own operating system, a Docker container shares the host machine’s operating system kernel. This is what makes a Docker container more lightweight and efficient than a virtual machine.

Source: https://www.docker.com/resources/what-container

Docker Image

To use Docker, we “pack” our code, dependencies, and settings into a Docker image. It’s an executable package that becomes a container when it runs on Docker Engine.
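For instance, we can pull a ready-made image from Docker Hub and then list the images stored locally (the image name here is just an example):

    # Download the official Python 3.8 image from Docker Hub
    docker pull python:3.8

    # List the images available on this machine
    docker image ls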

Docker Engine

Docker Engine is a client-server application that consists of the Docker Daemon, a REST API, and the Docker Client. The Docker Client sends our commands to the Docker Daemon through the REST API, and the daemon carries them out.
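This client-server split is visible from the command line; the two commands below simply query each side:

    # Prints a Client section (the CLI) and a Server section (the daemon)
    docker version

    # Asks the daemon for system-wide information: containers, images, storage driver, etc.
    docker info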

Docker Client

The Docker Client is the interface that we, as users, communicate with. It’s a CLI that takes our commands and forwards them to the Docker Daemon.

Docker Daemon

The Docker Daemon executes the commands that we send through the Docker Client. It manages Docker objects such as images, containers, and volumes.

Volume

Volumes persist the data of containers. With volumes, we can share data between containers, and we can reuse the same volume in multiple containers.
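As a quick sketch, a named volume can be created once and mounted into more than one container; the volume, container, and path names below are assumptions:

    # Create a named volume
    docker volume create app-data

    # Mount the same volume into two containers at /data
    docker run -d --name writer -v app-data:/data alpine sleep 3600
    docker run -d --name reader -v app-data:/data alpine sleep 3600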

Dockerfile

To build a Docker image, we write a set of instructions in a file called a Dockerfile. As an example, consider the Dockerfile that we use in our PPL project.

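A minimal Dockerfile along these lines might look like the sketch below; the environment variable, the Django-style start command, and the port are assumptions for illustration.

    # Base image: official Python 3.8 from Docker Hub
    FROM python:3.8

    # Environment variables used by the app (illustrative)
    ENV PYTHONUNBUFFERED=1

    # Copy the project and install its dependencies
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .

    # Start the application (assumed Django-style entry point)
    CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]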
In the file above, we specify the base image that the app needs to run. Since this is a Python project, we use the python:3.8 image. Then we define the environment variables that we’re going to use. With the RUN instruction, we install the libraries listed in the requirements.txt file. Last but not least, we specify how to run the application with CMD.

To build the image, we use the command docker build. Now that we have the image, we can run it with the command docker run. This starts a container from the Docker image that holds our application, so it looks just like running the application as usual. To stop it, we use the command docker stop.
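Put together, the workflow looks roughly like this; the image tag, port, and container name are assumptions:

    # Build an image from the Dockerfile in the current directory
    docker build -t ppl-app .

    # Start a container from the image, mapping port 8000 to the host
    docker run -d -p 8000:8000 --name ppl-app ppl-app

    # Stop the running container
    docker stop ppl-app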

Docker Orchestration

Orchestration is the automation of managing, scaling, and maintaining containers. Orchestrators are tools that handle this automation, which includes the following tasks.

  • Health monitoring
  • Load balancing
  • Resource allocation
  • Container reconfiguration and scheduling
  • Container provisioning and deployment
  • Traffic routing
  • Container updates
Source: https://avinetworks.com/glossary/container-orchestration

The most common examples of orchestrators are Kubernetes and Docker Swarm.

How It Works

To enable orchestration, we write a configuration file, typically in YAML or JSON format, that specifies where to pull images from, how to network the containers, how to mount storage volumes, and where to store logs. The orchestrator reads this configuration file, deploys the containers accordingly, and manages their lifecycle according to the specifications written there.

One example of orchestration with Docker is Docker Compose. It’s a tool that creates the required containers to deploy multi-container applications, based on a configuration file called docker-compose.yml. For example, suppose we want to build a project that needs two containers: one for the web server and another one for a Redis server. This is how we can define the configuration file.

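A minimal docker-compose.yml for this kind of setup might look like the sketch below; the service names, build context, image tags, and ports are assumptions.

    version: "3.8"
    services:
      web:
        build: .            # build the web server image from the local Dockerfile
        image: ppl-web      # tag the built image with this name (assumed)
        ports:
          - "8000:8000"     # host:container
        depends_on:
          - redis
      redis:
        image: redis:6      # use the official Redis image from Docker Hub
        ports:
          - "6379:6379"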
As we can see, we specify the image and ports for each container. Besides these attributes, we can also define environment variables, dependencies, volumes, and so on. With Docker Compose, we can build both of these containers using one file and one command. To run it, we use the command docker-compose up.
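In day-to-day use, that typically boils down to a few commands:

    # Build (if needed) and start all services in the background
    docker-compose up -d

    # Follow the logs of every service
    docker-compose logs -f

    # Stop and remove the containers and the default network
    docker-compose down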

Conclusion

I hope this article has given you the fundamental Docker knowledge you need to create your first project with Docker. Thanks for reading!
