Containers are isolated software environments that allow packaged applications to run across different platforms regardless of the underlying infrastructure. Docker is one such platform that facilitates this.
Containers address the challenges of deploying applications across inconsistent environments and environments with resource constraints, and they enable quicker deployments and scaling.
The Open Container Initiative (OCI) has 3 main specs which define:
- Runtime spec - how a container runtime runs a container
- Image spec - image format and...
- Distribution spec - a standardised API to facilitate the distribution of content
Shared dependencies such as binaries and libraries live on the same OS.
- Inefficient from a resource utilisation point of view
- Slow start up and shutdown
- Provisioning is tedious
Shared hardware but OSes are independent.
- Better utilisation of resources
- Faster start up and shutdown
- Faster provisioning and templating
Shared OS with container runtime; containers can run on either bare metal or virtual machines.
- Applications and binaries share the Linux kernel; Windows containers are different
- Binaries and libraries are isolated to the container
- Start up and shutdowns in seconds
- Excellent resource utilisation
- Docker
- Podman
- containerd (Kubernetes uses this)
- CRI-O
Namespaces enable the isolation of system resources, e.g. the process (PID) namespace isolates processes so a container process cannot see host processes or processes in other containers.
There are also networking, filesystem mount, UTS (hostname), user and inter-process communication namespaces which allow containers to run isolated on Linux.
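A quick way to see namespaces in action on a Linux host; this is a generic illustration using util-linux tools rather than Docker itself:
# list the namespaces visible on the host
lsns
# start a shell in new PID and mount namespaces; ps only sees processes in the new namespace
sudo unshare --pid --fork --mount-proc sh -c 'ps aux'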
To view cgroups...
cat /proc/cgroups
cgroups (control groups) limit the amount of resources used per process...
Note: cgroups and namespaces are specific to Linux; Docker Desktop runs a Linux virtual machine, which is where the containers reside, and it's that Linux kernel that controls / isolates access.
A running process with access to a given set of resources
process within container -> kernel -> allocated hardware resources
A union filesystem unifies several filesystems into one; Docker uses overlayfs. Directories with the same path are merged, whereas files in the upper layer take precedence over the same files in the lower layer.
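A minimal sketch of an overlay mount outside of Docker, assuming a Linux host; the paths are illustrative:
mkdir -p /tmp/overlay/{lower,upper,work,merged}
sudo mount -t overlay overlay \
  -o lowerdir=/tmp/overlay/lower,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/work \
  /tmp/overlay/merged
# files in upper take precedence over files with the same path in lower when viewed via merged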
The Docker Engine is the open source component of Docker Desktop; specifically the client CLI, dockerd and the Docker API.
This component is part of the Docker Engine.
docker --version
Passes commands to the Docker Server
...
This component is part of the Docker Engine.
The installation of Docker Desktop creates a virtual machine locally that exposes the Docker API and runs dockerd.
On a Windows system Docker Desktop can use WSL or Hyper-V as the virtual machine backend.
This component is part of the Docker Engine. ...
Part of the Docker Engine. ...
Downloads an image from the registry; docker run will also pull the image if it doesn't exist locally.
docker pull image_name
#e.g.
docker pull busybox # pull the latest version of busybox
Build a Docker image from a dockerfile; -f is useful if the dockerfile isn't called dockerfile e.g. dockerfile.dev.
docker build -f ./dockerfile -t name:tag .
Pass in --progress=plain to enable more verbose output...
If for some reason you want to disable caching, pass in --no-cache.
To view how an image was built, you can use the history command.
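For example, using the busybox image pulled earlier:
docker history busybox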
docker login
images=$(docker image list --all --format json | jq -rc .ID)
for image in $images
do
# Docker scout is a separate application that needs installing
docker scout cves $image
done
Inspect outputs the image metadata as JSON.
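For example, to view the configuration of the busybox image (the jq usage mirrors the loop above):
docker image inspect busybox | jq '.[0].Config'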
List the images available locally...
docker image ls
$ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
heathen1878/basic latest 755bfc9736da 2 hours ago 7.8MB
basic latest 755bfc9736da 2 hours ago 7.8MB
docker image rm basic
docker image ls
$ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
heathen1878/basic latest 755bfc9736da 2 hours ago 7.8MB
To remove multiple images you could use something like...
images=$(sudo docker image list --all --format json | jq -rc .ID)
for image in $images
do
sudo docker image rm $image --force
done
Removes any images not associated with a container.
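A sketch using docker image prune; the --all flag removes any image without at least one associated container, rather than only dangling layers:
docker image prune --all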
Docker run creates a container from the image and runs that container locally.
docker run
pretty much equals docker create
and docker start
Note: the Docker server will check the image cache for cached copies of the requested image.
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
...
Status: Downloaded newer image for hello-world:latest
then run the container...
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
This is similar to overriding the entrypoint using --entrypoint.
docker run busybox ls
$ sudo docker run busybox ls
bin
dev
etc
home
lib
lib64
proc
root
sys
tmp
usr
var
In the example below a web server is being run locally on port 80 in a detached state.
docker run -d -p 80:80 --name frontend frontend
$ sudo docker run -d -p 80:80 --name frontend frontend
4f0a7ff2e2f04f7443034a3529dbf6c790c7e8e71640b24ef5f3a3da992ede15
The --init flag can be used to ensure an init process runs as PID 1, which can then clean up zombie processes and handle signal forwarding. If you have coded for signal handling and are unlikely to have any zombie processes then --init isn't needed.
Used to assign a predefined name to a container; NOTE container names must be unique on your computer.
Allows you to connect to a predefined Docker network; by default all containers share the same network.
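A sketch of creating a user-defined network and attaching a container to it; the network and image names are placeholders:
docker network create app_net
docker run -d --network app_net --name api my_image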
Useful to pull an image built for a specific platform architecture e.g. arm, amd64...
Used to define the restart policy for a container; the options are no restart, restart on failure (optionally retrying x number of times), restart unless stopped, or always restart.
docker run --restart always ubuntu
watch "docker container list"
You can use --cap-drop=all
and then --cap-add=blah
to lock down the runtime security of the container.
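A sketch: drop every capability and add back only what the workload needs; the image name is a placeholder and NET_BIND_SERVICE allows binding to ports below 1024:
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my_image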
--cpus
can limit the number of cores allocated to a container and --cpu-shares
can assign a relative share of CPU time to a container.
--memory
can limit the memory allocated to a container; should it exceed that amount the container is killed (and restarted if a restart policy applies), whereas memory reservation sets a soft limit that guarantees an amount of memory.
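A sketch combining the resource flags; the values and image name are illustrative:
docker run --cpus 0.5 --cpu-shares 512 --memory 256m --memory-reservation 128m my_image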
--user
allows you to specify a non-root user; by default the container will run as root. Ideally the dockerfile would specify the USER
instruction to ensure that user is used to run the entrypoint and cmd. If the group is not specified then the user will run with the root group.
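For example, running as UID/GID 1000; the IDs and image name are placeholders and must make sense for the image:
docker run --user 1000:1000 my_image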
--read-only
this forces the container file system to be read-only; works well with volume or tmpfs mounts for locations that require writes.
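A sketch: a read-only root filesystem with a writable tmpfs for /tmp; the image name is a placeholder:
docker run --read-only --tmpfs /tmp my_image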
Docker scout can be used to look for vulnerabilities in images.
docker scout cves image_name
✓ Image stored for indexing
✓ Indexed 130 packages
✓ Detected 5 vulnerable packages with a total of 5 vulnerabilities
...
Attach to a container stdin...
docker attach container_id
From the example above the running container is...
docker container ls
$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f0a7ff2e2f0 frontend "/docker-entrypoint.β¦" 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp frontend
docker container ls --all
Docker start by default doesn't output STDOUT or STDERR. You can use docker logs container_id to view the logs from a container.
To stop a container you can run docker stop container_id or docker kill container_id.
Docker stop uses SIGTERM, a.k.a. a graceful shutdown, whereas docker kill uses SIGKILL, a.k.a. stop now...
If the container doesn't stop within 10 seconds of docker stop being issued then docker will automatically issue docker kill. It can depend on whether the running process handles SIGTERM; if not, a SIGKILL will be needed.
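The grace period can be changed with the -t / --time flag, e.g.:
docker stop --time 30 container_id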
docker system prune
Note: deletes stopped containers and cleans up the build cache (it also removes unused networks and dangling images).
docker logs container id
# to tail logs
docker logs container_id -f
docker exec -it container_id sh # or bash, if available in the image
You can also use -it
with docker run.
You can also connect to an existing running container using docker attach container_id; the limitation of this is that stdin is only connected to the primary process.
By default any changes within the container are ephemeral; containers are stateless by nature. If data changes should persist then consider using volumes, bind mounts, or tmpfs mounts.
By default docker volumes are stored in /var/lib/docker/volumes/; you can modify the location used by Docker by editing /lib/systemd/system/docker.service and changing ExecStart to include --data-root /something
Volume mounts exist within the virtual machine running the container therefore allowing data to be persisted across container restarts.
# Create a volume
docker volume create docker_volume_name
docker run -v docker_volume_name:/path_within_the_container docker_image_name
Note: it is more difficult to inspect the contents of a Docker volume compared with a bind mount. There is a privileged container you can run to view the volumes. See here
Bind mounts connect back to the host filesystem, also persisting data across container restarts; this option may have a slight performance overhead for heavy reads / writes.
Bind mounts tend to be used where software developers are making code changes and want those changes to be reflected automatically within the container without having to rebuild the container or where you want to pass a start-up configuration file to postgres, nginx or similar. See the projects section for examples of these.
docker run -v local_path:/container_path
# You can also bookmark a container path within a path reference e.g
docker run -v container_path/directory_within_container -v local_path:/container_path
Note: in the example above directory_within_container would not reference the local filesystem even though the root directory references the local filesystem.
NOTE Docker Compose is useful when you need to pass many options to Docker.
Tmpfs mounts are in-memory storage...
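A sketch using the --mount syntax; the target path and image name are illustrative, and anything written there is lost when the container stops:
docker run --mount type=tmpfs,destination=/app/cache my_image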
Default public registry, use docker login.
docker login
Use az cli for Azure Container Registry
NOTE: If you need to use sudo, prefix all the commands below with sudo.
az login
# List out the container registries...and take the first one...
acr=$(az acr list | jq -rc .[0].name)
# Authenticate
az acr login --name $acr
# Tag for ACR
# Use docker build -f dockerfile -t crmanual.azurecr.io/terraform_wrapper/tfcli:latest or tag an existing image using...
docker tag heathen1878/tfcli $acr.azurecr.io/terraform_wrapper/tfcli:latest
# Push to the ACR
docker push acr_name.azurecr.io/repo/image:tag
See here for a GitHub workflow for doing the above.
Webhooks are useful to notify other applications that an application or service image has been built or updated.
Images should be tagged using semantic versioning but you'll see lots of different methods; most are descriptive, see here for examples.
docker run -it alpine
Note: grab the ID of this container.
Run the commands within the container...
docker commit -c 'CMD ["redis-server"]' container-id
The dockerfile is a text document that contains instructions that docker should execute; see docker build. The . represents the build context i.e. where the source code resides.
Within the build context you can include a .dockerignore
file which tells docker to ignore files, folders...
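A sketch of a .dockerignore; the entries are illustrative:
.git
node_modules
*.log
.env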
- Pin specific versions
  - base images
  - system dependencies
  - application dependencies
- Small and secure
Generally it is best to try and use the alpine, slim, minimal, or whichever variant denotes the smallest version of the docker image; and a specific version too.
FROM almalinux:8-minimal
or for language specific images...
FROM node:lts-alpine
FROM golang:alpine
- Protect the cache layer
- order copy commands by frequency of change
- use cache mounts (see the sketch after this list)
- use COPY --link - creates a new layer not tied to the previous layer # Requires dockerfile version 1.5
- combine steps that are always linked...using heredocs
RUN <<CMDS
apt update
apt upgrade -y
apt install iputils-ping -y
CMDS
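A minimal sketch of a cache mount for the apt package cache, assuming BuildKit and the dockerfile 1.x syntax; the base image and package are illustrative:
# syntax=docker/dockerfile:1.5
FROM ubuntu:22.04
RUN --mount=type=cache,target=/var/cache/apt \
    apt update && apt install -y iputils-ping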
- Set the working directory
- Set the exposed port
- Define any environment variables - can be used at build and runtime
- Define any build arguments - can only be used at build time
WORKDIR /app
EXPOSE 8080
ENV variable=env_var
ARG variable=build_var
- Use .dockerignore
- Use a non root user
- Ensure only the required dependencies are installed...between dev and prod
- Use multi stage builds (see the sketch below)
USER nonroot
FROM image:version as build-base
COPY --from=build-base /some/file /some/file
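Putting those fragments together, a minimal multi-stage sketch; the image names, paths and the non root user are illustrative:
FROM golang:alpine AS build-base
WORKDIR /app
COPY . .
RUN go build -o /app/server .

FROM alpine
COPY --from=build-base /app/server /usr/local/bin/server
USER nobody
ENTRYPOINT ["/usr/local/bin/server"]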
- Directives
  - version
  - escape characters
- Label
  - Add author details
# syntax=docker/dockerfile:1.5
# escape=\
LABEL org.opencontainers.image.authors="dom@domain.com"
Scan your images using Snyk; there is an example in the workflow. You can also use Docker Scout, amongst others.
Buildx allows you to create images for multiple architectures from a single dockerfile.
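For example (the tag and platform list are illustrative; --push assumes you are logged in to a target registry):
docker buildx build --platform linux/amd64,linux/arm64 -t name:tag --push .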
This example docker_build.yml file builds a Docker Image and deploys it to an Azure Container Registry tagged with the build ID. The template reference can be found here
This example is taken from here but uses the base Linux image above as a starting point. It does assume the repository name from the base Linux image is azdodockerbase.
Example variable file
variables:
service_connection: '' # your ACR service connection name
image_repository: 'azdoagent'
container_registry: '' # your ACR url...azurecr.io
dockerfile_path: $(Build.SourcesDirectory)/azdo_self_hosted_linux_agent/Dockerfile
tags: '233'
PostgreSQL can easily be deployed using a prebuilt official image, and easily customised using bind mounts and volumes.
docker volume create pgdata
docker build -f projects/postgresql/dockerfile projects/postgresql/ -t heathen1878/postgres:dom
docker run -d -v pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=p@ssw0rd -p 5432:5432 heathen1878/postgres:dom
Using Docker Compose...
# passing sudo -E to expose the postgres password to Docker Compose. The script create_environment_variables.sh can pull values from KV or GitHub and create environmental variables.
sudo -E docker compose --project-directory projects/postgresql/ up
...
Useful for running code against a runtime not installed locally.
The Terraform wrapper contains tooling to assist in running Terraform.
docker build -f projects/terraform_wrapper/dockerfile projects/terraform_wrapper --build-arg TERRAFORM_VERSION="1.9.3" -t heathen1878/tfcli:22.04 -t heathen1878/tfcli:latest
# Create a local alias which runs the container, mounting local source and .ssh directories into the container.
alias 'tfcli=sudo docker run --rm -it -v ~/source:/root/source -v ~/.ssh:/root/.ssh heathen1878/tfcli:latest bash'
# Run the container from the alias above
tfcli
# Test authentication using Az Cli
tfauth
# Exit the container
exit
This is a simple Node Js web app running as a container; see instructions here
This example uses docker compose to build the networking between each container. The dockerfile defines how the image should be built and docker compose builds it and runs the container with any additional instructions. Instructions here
This example uses docker compose to build the networking between each container. The project has several dockerfiles...
which define how each container image should be built; docker compose builds them and runs the containers with any additional instructions. The project depends on PostgreSQL and Redis; the docker compose file builds them from specified images hosted on Docker Hub. Instructions here
In a cloud environment you may use managed instances of these. See these examples...
This example uses docker compose to build all the required containers and ensure all traffic is routed via nginx. The dockerfiles for each container are stored within the directory for that application e.g. Go Lang API.
The Go Lang API is using Air and the Node API is using Nodemon; this functionality requires each container's code repository to be bind mounted; see the docker compose file link above.
Air is installed in the Go Lang API container and configured by air.
Nodemon is defined within package.json and installed and run in the dockerfile.
The docker compose debug file contains the overrides for the Node API...
command:
- "npm"
- "run"
- "debug-docker"
This overrides the command specified in the dockerfile and executes...
and Go Lang API...
command:
- "dlv"
- "debug"
- "--headless"
...
You can attach VS Code using the Run and Debug options and adding this vscode configuration to your repo.
The docker compose test file contains overrides for running tests for the applications. See here for an example.
...