Docker containers are meant to be disposable and easily replaced. When a new version of a container’s base image is released, you should pull the new image and start a new container instance. Here’s how to manage image updates across your container fleet.

Pulling New Images

The basic way of applying an image update is to pull the new image, destroy running containers based on the old version, and then start new containers in their place.

Here’s an example for a container using the nginx:latest image:
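A sketch of the pull-and-replace sequence, assuming the container is named my-nginx and publishes port 80 (both illustrative):

```shell
# Pull the updated image from the registry
docker pull nginx:latest

# Stop and remove the container running the old version
docker stop my-nginx
docker rm my-nginx

# Start a replacement container, repeating the flags
# originally given to "docker run"
docker run -d --name my-nginx -p 80:80 nginx:latest
```

You must remember and repeat every flag from the original docker run command, which is where the process becomes error-prone.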

Docker lacks a built-in way to detect image updates and replace your running containers. The result is a convoluted manual replacement process. It can be simplified by using Docker Compose to start your containers instead of the plain docker run command.

Replacing Containers With Docker Compose

Docker Compose lets you create declarative representations of container stacks using a docker-compose.yml file. The stack is started with docker-compose up, using the configuration contained in the file. This replaces the long list of flags usually given to docker run.
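As an illustration, a minimal docker-compose.yml for the nginx example might look like this (the service name and port mapping are assumptions):

```yaml
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
```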

Docker Compose has a built-in pull command that will pull updated versions of all the images in your stack. It’s still a two-stage procedure as you must manually run docker-compose up again afterwards.
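The two stages, run in sequence from the directory containing your docker-compose.yml:

```shell
# Pull new versions of all images referenced in docker-compose.yml
docker-compose pull

# Recreate any containers whose images have changed
docker-compose up -d
```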

Docker Compose offers a simpler and more memorable experience where you don’t need to type image names or remember the flags you passed to docker run. The two commands can be readily shortened to a single shell alias:
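For example, a one-line alias covering both steps (the alias name is arbitrary):

```shell
# Combine the pull and recreate steps into a single command
alias docker-update='docker-compose pull && docker-compose up -d'
```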

Managing Image Tags

You need to reference the correct tag when you pull images manually. Docker Compose will handle this for you and select the tags specified in your docker-compose.yml.

Pulling the new version of a tag is not necessarily the same as using the most recent release of an image. If you want to be using the latest version of software inside the container, pay attention to the image author’s tagging practices.

As an example, pulling a new version of node:14 will get you the latest patch release of Node.js 14. Pulling node:latest will deliver the most recent Node.js version, currently 16. If an old container was using this image, a pull and replace process would trigger a major version bump for the Node binary inside the container.

Rebuilding Images

So far we’ve seen how to handle containers started from images you’re pulling directly from Docker Hub or another registry. Images which you’re building yourself need to be rebuilt when their base image changes.

First rebuild the image:
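Assuming an image tagged my-image:latest built from the Dockerfile in the current directory (the tag is illustrative):

```shell
# Rebuild the image, pulling the latest version of its base image
docker build --pull -t my-image:latest .
```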

Then replace your containers:
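Using the same pull-and-replace sequence as before (the container name my-container is illustrative):

```shell
# Remove the container running the old image version
docker stop my-container
docker rm my-container

# Start a replacement from the freshly rebuilt image
docker run -d --name my-container my-image:latest
```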

The --pull flag given to docker build instructs Docker to pull the base image referenced in your Dockerfile. Without this flag, Docker would reuse the existing tag reference if the image was already present on the system.

Docker Compose users can achieve the same results with the corresponding docker-compose commands:
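```shell
# Rebuild images, pulling updated base images first
docker-compose build --pull

# Recreate containers from the rebuilt images
docker-compose up -d
```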

Compose again offers a simpler, albeit still two-stage, process. You can forget specific image names and tags, instead trusting Compose to pull changed base images, rebuild your layers atop them, and then recreate your containers.

Software Inside Containers

Sometimes it can be tempting to manually update software inside your containers. This should be avoided as it goes against Docker's principles.

Running apt-get update && apt-get upgrade -y on a schedule (or your package manager's counterparts) is standard practice when administering a bare metal Linux server. These commands aren't normally run within a Docker container, although they may be included as part of a Dockerfile to get the very latest security patches during an image build.
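When it is done at build time, it takes the form of a RUN instruction in the Dockerfile (a Debian-based base image is assumed here):

```dockerfile
# Apply the latest security patches while the image is being built
RUN apt-get update && apt-get upgrade -y
```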

Periodically pulling the base image and recreating your containers is the preferred way to keep them updated. This gives you all the upstream security fixes and shortens the lifespan of individual containers. Container environments aren’t meant to be modified after an instance is created; filesystem changes should be limited to writes to temporary paths and dedicated Docker volumes which outlive the container.

Automating Container Updates

You can automate the process of checking for updated image tags and restarting your containers using third-party projects. Watchtower is a popular choice which monitors running containers and replaces them when their Docker Hub image changes.

Watchtower itself is deployed as a container:
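One way to start it, using the containrrr/watchtower image published by the project:

```shell
# Run Watchtower with access to the host's Docker socket
docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower
```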

Now you’ve got a functioning Watchtower install. Your host’s Docker socket is mounted into the Watchtower container, allowing it to run Docker commands to create and delete containers.

Watchtower will automatically detect new image releases on Docker Hub, pull them to your machine, and replace containers using the image. Existing containers will be shut down and new identical ones created in their place. The same flags you gave to docker run will be supplied to the replacement containers.

Watchtower only works with Docker Hub by default. You can use it with private image registries by supplying credentials in a configuration file.

Create a JSON file with the following content:
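This follows the Docker config.json credential format; "credentials" is a placeholder you'll fill in below:

```json
{
    "auths": {
        "example.com": {
            "auth": "credentials"
        }
    }
}
```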

Replace example.com with the path to your registry.

Next generate a credentials string from your registry username and password:
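The string is your username and password joined by a colon, then Base64-encoded (printf is used instead of echo so no trailing newline is included):

```shell
# Produce the Base64 credentials string for the config file;
# substitute your real registry username and password
printf '%s' 'username:password' | base64
```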

Paste the resulting Base64-encoded string into the config file, replacing the credentials placeholder text.

Mount the config file into your Watchtower container to enable access to your registry:
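Watchtower reads the file from /config.json inside its container; the host-side path here is an assumption you should adjust to wherever you saved the file:

```shell
# Restart Watchtower with the registry credentials mounted
docker run -d --name watchtower \
    -v $HOME/.docker/config.json:/config.json \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower
```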

Conclusion

Docker lacks any mechanism to detect and apply upstream image updates to your running containers. You can either use Docker CLI commands in sequence, docker-compose as a higher-level abstraction, or a third-party tool like Watchtower to replace your containers when new image versions are released.

Depending on your circumstances, you might not feel a need to upgrade containers in this way at all. If your team uses CI pipelines to build a Docker image on each commit, you might already be producing and deploying updated images multiple times a day. In this case, make sure you're using the --pull flag with docker build so upstream fixes are included in your images.