Introduction
This is not an in-depth guide to Docker, but rather enough understanding to get comfortable with it.
Docker is a containerization tool that creates an isolated environment for applications, so they don’t rely on system-level dependencies and their behavior doesn’t change from system to system.
You can think of it as defining a pure function rather than a function that relies on global variables or side effects. With the same inputs, it produces the same output, the inputs here being the system you define for your application to execute in.
What is Docker
Docker is one of many containerization technologies; its lightweight nature has made it quite popular.
The FrontendMasters course by Brian Holt can be eye-opening, especially for understanding Docker’s internals. *Spoiler alert*: I was quite surprised to find out there is not really a container; it’s rather a jailed Linux process. You kind of define a new Linux OS within your Linux OS and provide utilities for this “jailed” process.
Here is an analogy I thought of: imagine you have a tall, big building and you decide to add new contained mini homes (apartments) to rent out to other people, but you don’t want to give up your privacy, so you build these mini homes with all the infrastructure inside them. You want your kitchen, bedroom and bathroom private to you, but tenants also need to cook and take a shower, so you provide that infrastructure and those utilities for your tenants too. And you can even put a camera in these apartments to monitor them :D
Why Do We Need It
To continue with the analogy: every tenant has different preferences. Some prefer the kitchen to be small, some prefer it to be big; some put their sauces in the refrigerator, some prefer not to. On top of that you have your own preferences. What chaos it would be to share these spaces with a few tenants.
That’s exactly why we need Docker: Docker creates well-defined environments for each process, so each has its own preferences in place, hence no chaos.
How to Run a Container
To run a container, you need to define a base image. A base image can be an OS, or a specific application or program that is already configured by the party sharing the image. Here, image refers to the blueprint our containers run from.
Let’s investigate an example command:
docker run --name db -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres:17
This command runs a container named db, based on the image postgres:17, and sets an environment variable for this container, in this case POSTGRES_PASSWORD; this password will be used to authenticate. Some environment variables may be required, so you should check the specific image’s documentation to find out what you need to configure or which inputs are required to run it. Postgres allows many configuration options, and you can pass them all in the way described in its documentation, as if it were running on the host itself.
-p 5432:5432 configures which ports to expose to your local machine. Postgres by default runs on port 5432, and the left side of the colon refers to your host machine port; here we are telling Docker to bind host machine port 5432 to container port 5432, as if it were running on the host itself. If we change it to -p 5433:5432, then this container will be accessible on your host machine through port 5433.
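As a quick sanity check, assuming the db container from the command above, you can confirm it is running and open a psql shell inside it (the official postgres image ships with the psql client and a default postgres user):
docker ps
docker exec -it db psql -U postgres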
Back to our tenant analogy: here the Postgres container is a tenant, with all of its specific configuration and requirements contained within this image. postgres:17 is the image; images are blueprints and containers are running instances of images. We just built a self-contained apartment for Postgres to run in.
Now we have access to one of the most advanced databases without installing, upgrading (or downgrading) any system-level program. Do we want to kick this tenant out? It’s as easy as running the following commands.
First we need to stop the running container:
docker container stop db
And then we remove the container:
docker container rm db
But also note that Docker caches many things to get things up for you as quickly as possible, so the image itself is still there.
You can list the images with the following command:
docker images
And you can remove them either by their image ID or by their repository name:
docker rmi <image-id>
docker rmi <image-repository-name>
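For the example from earlier, removing the Postgres image by name and tag would look like this, assuming no container is still using it:
docker rmi postgres:17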
Images must not be in use by any container for you to be able to delete them. You shouldn’t have to monitor and manage this list manually, though; there are other commands to handle them in bulk smartly. See the sections below for maintenance commands.
docker-compose to Run Multiple Containers
In the modern era of application development it’s rare for an application to need only a single Docker container; modern applications usually take advantage of many services while running. Like in the example we saw before, you would probably need a database, or an in-memory database to cache things, or you might want to set up an observability stack to monitor your application and containers. Imagine going through the installation of each of these components rather than executing a single command.
That command is docker compose up -d; as the name suggests, it composes and orchestrates the starting and configuration of one or many containers.
If you want to rebuild the images, you would need to add the --build flag, the whole command being:
docker compose up -d --build
Docker Compose is not just any other command; it’s a command with context. The directory you are in gives Docker its context. For example, the following command would list all the containers running on your system:
docker ps
But running it through Docker Compose would only list the containers from the given context (your working directory).
docker compose ps
would list all the containers that are part of the current context, and there are many other commands that follow this logic.
docker stats
would give you container resource usage statistics, while docker compose stats would list the stats for the containers in the current context.
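As an aside, if you only want a one-off snapshot instead of a live-updating view, docker stats accepts a --no-stream flag:
docker stats --no-stream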
Here again, you need to pay attention to your working directory while executing docker compose commands, because it will look for docker-compose.yml files there.
It’s also common to have different docker-compose configurations for different environments. By default it’s docker-compose.yml, but you can specify a custom configuration file for the docker compose command:
docker compose -f docker-compose.prod.yml up -d --build
Notice how -f is passed to the compose command and not to the up command.
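As a further aside, -f can be passed more than once; Compose merges the files in order, so a common pattern is a shared base file plus an environment-specific override:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build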
Example Configuration
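A minimal docker-compose.yml along these lines might look like the sketch below, pairing the db service used throughout this post with an app service built from a local Dockerfile. The app port, the DATABASE_URL variable and the volume name are illustrative assumptions, not taken from a real project.
services:
  app:
    build: .              # assumes a Dockerfile in the project root
    ports:
      - "8080:8080"       # hypothetical app port
    environment:
      # hypothetical connection string; "db" resolves to the db service on the compose network
      DATABASE_URL: postgres://postgres:secret@db:5432/postgres
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives container removal
volumes:
  db-data:
With this file in your working directory, docker compose up -d --build starts both services, and docker compose ps lists them.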
Daily Commands
Here is a list of daily commands and their purposes.
To list the containers running on your system:
docker ps
To watch and tail the logs of a container:
docker logs -f <container-name>
There is a corresponding compose command to tail the logs of a service that is part of the compose file:
docker compose logs -f <service-name>
Here, the service name refers to the service names in your docker-compose file:
services:
  db:
    image: postgres:17
    # ...
If you want to tail the logs for the db service, you would run the following command. The service name can be easier to remember than the container name.
docker compose logs -f db
Notice how compose is defining a context here.
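If the log history is long, both docker logs and docker compose logs accept a --tail flag to limit how much is replayed before following, for example:
docker compose logs -f --tail 100 db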
If you want to run specific containers from a compose file, you can pass their service names to the docker compose up command. Assuming you have other services defined in your docker-compose file but you don’t want to run them all, you can use the command below.
docker compose up -d --build app db
The --build flag rebuilds the images. Most of the time that’s enough to reflect all the changes you made, but sometimes you may need to run the docker compose down command before running docker compose up again. If, for example, there are changes to the volumes, the stack needs to be brought down and up again for them to take effect. If something isn’t taking effect even though you ran with --build, try docker compose down before another docker compose up.
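Put together, a full refresh of the stack looks something like this (adding -v to the down command would also remove named volumes, which deletes their data, so use that deliberately):
docker compose down
docker compose up -d --build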