I am creating new Docker images and encountering various issues; debugging a Docker image build can be surprisingly tricky.
Here are a bunch of things that I learned.
Docker basics
In case you are not familiar with Docker: it’s basically a system akin to a lightweight virtual machine. It uses Dockerfiles to build images, which are then run as containers.
The same Dockerfile can be used to build multiple images using different arguments; and the same image can be used to start multiple containers using different arguments.
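As a minimal illustration (the names web-8080, web-9090, web-a and web-b are placeholders I made up for this example), given this Dockerfile:
FROM nginx:alpine
ARG PORT=8080
EXPOSE ${PORT}
you can build two different images from the same file, then start two containers from the same image:
docker build -t web-8080 .
docker build -t web-9090 --build-arg PORT=9090 .
docker run -d --name web-a web-8080
docker run -d --name web-b web-8080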

There is also a “compose” system that allows you to run multiple containers as a group, configure them in a single place, and other cool things.
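For example, a minimal compose.yaml describing such a group could look like this sketch (the service names and the ./api build path are placeholders):
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  api:
    build: ./api
Running docker compose up -d then starts both containers together (building the api image first if needed).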
A common question that tutorials rarely answer is “why use Docker?”: because it lets you standardize the build and deployment of your applications, and, if used correctly, it can make development much easier (you can attach a debugger to a program running in Docker), especially if you have a microservice architecture.
Here is the official documentation:
- Command-Line Interface: https://docs.docker.com/reference/cli/docker/
- Dockerfile syntax: https://docs.docker.com/reference/dockerfile/
- Compose file syntax: https://docs.docker.com/reference/compose-file/
Advanced basics
If you’re using Windows, don’t forget that your containers will mostly use Linux, and that Linux uses LF line endings.
When an image is built from a Dockerfile, each instruction in the file results in an intermediate image, called a layer. These layers truly are images: they can be used to start containers, be inspected, etc. Keep this in mind, because it’s critical for debugging. More on that below.
Images and containers can accumulate over time and eat up a huge amount of disk space. Use Docker’s pruning commands to do some cleanup: https://docs.docker.com/engine/manage-resources/pruning/
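For example (be careful, these delete things for real):
docker container prune   # removes all stopped containers
docker image prune       # removes dangling images
docker system prune -a   # removes stopped containers, unused networks and all unused images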
When building an image or starting a container, either you specify a name, or an ID will be generated for you. Remember to use names if you need to find the image or container again later.
Take special care with the order of arguments in commands: most do not accept options after positional parameters. For example, docker build -f .\Dockerfile src will work, but docker build src -f .\Dockerfile will not. If it looks like your argument is being ignored, this may be why.
By chaining multiple FROM instructions in a Dockerfile (multi-stage builds), you can use a single file to build a program using a big SDK image and still get a lightweight image to run afterwards, for example:
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS base
# prepare the final image using EXPOSE, WORKDIR, RUN apk add, etc
FROM mcr.microsoft.com/dotnet/sdk:8.0-bookworm-slim AS build
# build the project using COPY and RUN dotnet build
FROM base AS final
# copy the build artifacts into the final image using COPY --from=build
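Fleshed out, such a Dockerfile could look like the following sketch (MyApp is a placeholder project name, and the paths are only illustrative):
# final runtime environment
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS base
WORKDIR /app
EXPOSE 8080
# build environment with the full SDK
FROM mcr.microsoft.com/dotnet/sdk:8.0-bookworm-slim AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /out
# copy only the published artifacts into the lightweight runtime image
FROM base AS final
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]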
Usual Docker commands
Building and running
- docker build -f .\Dockerfile . uses the specified Dockerfile and builds the project, using the current folder as context; add -t xxx to name the resulting image “xxx”
- docker run xxx creates a new container based on the image named “xxx”, then starts it; you can add --name=yyy to specify the name of the container (otherwise it will have a random name)
- docker create xxx creates the container based on the image named “xxx” but does not start it
- docker start xxx starts the container with the ID or name xxx, but it has to already exist
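Putting these together, a typical build-then-run sequence looks like this (myapp and myapp-dev are placeholder names):
docker build -f .\Dockerfile -t myapp .
docker run --name myapp-dev myapp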
Checking the status
- docker image ls lists the already-built images that can be run
- docker ps displays a list of running containers (add -a to also list the ones that are stopped)
Debugging a Docker image creation
- docker inspect xxx shows metadata details about the specified image: layers, entrypoints, environment variables, exposed ports, etc.
- Set the DOCKER_BUILDKIT environment variable to 0 to enable more debugging (PowerShell: $env:DOCKER_BUILDKIT=0; docker build -f path/to/Dockerfile .; Linux: DOCKER_BUILDKIT=0 docker build -f path/to/Dockerfile .). This will print a short intermediate image ID at each step, and these layer IDs can be used with docker run (see the example right after this list) to start a container at that build step and see what’s happening. Note: this is deprecated, but the alternative (docker buildx debug build -f path/to/Dockerfile .) is experimental and does not work properly at the time of writing.
- Another, more basic way to debug is simply to comment out everything in your Dockerfile except what’s working, but that can be a bit annoying with large, complex files.
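For instance, if one of the build steps printed an intermediate image ID such as 3f2c1a9b8d7e (a made-up value, yours will differ), you can open a shell in that partially-built image:
docker run --rm -it 3f2c1a9b8d7e sh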
Debugging a Docker container
- docker inspect xxx shows metadata details about the specified container: mounted volumes, networks, system stuff (virtual CPU, I/O, etc), environment variables, exposed ports, etc.
- docker run --rm -it xxx sh creates a container based on the image named “xxx”, starts it and opens a shell (sh) on it (but you can run any command you want, like “ls”); -it runs the command in an interactive (-i) TTY (-t); --rm removes the container when it exits
- docker run --rm -it --entrypoint sh xxx does the same but skips the ENTRYPOINT instruction of the Dockerfile (in case this renders it unable to start)
- docker exec -it xxx sh opens a shell (sh) inside a running container
- docker attach xxx attaches the current terminal to the main process of a container
- docker export xxx -o xxx.tar exports the content of the specified container to a tar file so it can be explored using an archive explorer (e.g. 7-Zip). This command cannot use an image, it has to use a container, so if you cannot start your container you’ll have to use docker create first (see above, and the example after this list)
- docker logs xxx shows the logs of the container (actually stdout); adding -f will tail the logs
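As an example of that export workflow for a container that won’t start (myimage and broken-debug are placeholder names):
docker create --name broken-debug myimage   # create the container without starting it
docker export broken-debug -o broken-debug.tar
docker rm broken-debug                      # clean up the temporary container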
Dockerfile layers cache optimization
As previously stated, each instruction of a Dockerfile results in an intermediate image, called a layer. Let’s take an example, a basic nginx server:
FROM nginx:alpine
ARG PORT=8080
EXPOSE ${PORT}
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
COPY ./html/ /usr/share/nginx/html
Each of these instructions creates a layer, and these layers are cached. This cache is invalidated when a layer changes, whether it’s the result of a RUN or the content of files to COPY.
What this means, exactly, is that the following Dockerfile, with the instructions reordered, functionally results in the same image:
FROM nginx:alpine
COPY ./html/ /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
ARG PORT=8080
EXPOSE ${PORT}
However, in this second version, every time a single file in the html folder changes, the cache of every layer after that COPY is invalidated!
Obviously in this basic example it doesn’t change much, but when you’re preparing a complex image, remember to declare everything that changes rarely as high as possible in the stack of commands to optimize the Docker cache and speed up your build!
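As a sketch of this principle applied to a compiled project (MyApp.csproj is a placeholder; the same idea applies to any dependency manifest), copying the project file and restoring dependencies before copying the rest of the sources keeps the slow restore layer cached as long as the dependencies don’t change:
FROM mcr.microsoft.com/dotnet/sdk:8.0-bookworm-slim
WORKDIR /src
# changes rarely: the dependency restore stays cached across most builds
COPY MyApp.csproj .
RUN dotnet restore
# changes often: only the layers below get rebuilt when a source file is edited
COPY . .
RUN dotnet build -c Release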