Docker portability: 6 important caveats and pitfalls

In this article I present 6 kinds of real-world Docker portability issues, where Docker-based software does not run or build the same way on two different machines.

Introduction to Docker portability

When I started learning about Docker several years ago, I was very enthusiastic about its promise of portability: write your software and Dockerfile once, then run it anywhere. A dream come true!

Unfortunately, it’s a pipe dream! Things do not always work out of the box. Let’s take a look at 6 real-life portability issues you may run into when working with Docker.

Docker Desktop vs. Docker Linux engine

In production, Docker engine (or other container engines) typically only runs on Linux. Developers, however, are much more likely to work on Windows or macOS hosts, and use Docker Desktop. Docker Desktop jumps through many hoops (hidden from your eyes) to make this work, running some kind of Linux VM under the hood. A few features, most prominently bind mounting (=making folders on the host accessible in the container) are implemented very differently on Docker Desktop (macOS/Windows), compared to Docker engine (Linux).

Let’s take a look at a few caveats related to bind mounting:

  • The performance of bind mounts on Docker Desktop is much worse than on Linux. I’ve talked about a few solutions in this article.
  • There are file system permission issues, for example regarding the ownership of files.
    • With Docker engine on Linux, the ownership of files is retained “as is” between host and container. There is no user-ID remapping, unless you configure it explicitly (see docs). If a folder that you bind-mount contains files owned by the Linux host user with UID=1000, they are also owned by UID=1000 inside the container. And files you create inside the container with the (often default) root user in a bind-mounted directory are owned by root on the host. The same holds for UNIX permission bits (such as chmod 644, etc.).
    • With Docker Desktop on Mac or Windows, the bind-mounting mechanism crosses the host ↔ Linux-VM boundary. The files you create on the host may now have a different owner in the container (and they might even have different permission bits, e.g. a missing executable permission, when you cloned code to Windows). Files you create in the container (as root) in a bind-mounted directory are owned by the macOS/Windows user you use on the host (and not root). In other words: a remapping of user IDs takes place (see the first section here for more details). Note that, on Windows, if you bind-mount a directory from a WSL2 distro (e.g. /home/username) rather than somewhere on “C:”, the behavior is as if you were using Linux as the host OS. Files created in the container as root are also owned by the WSL root user, because you are not crossing the VM boundary in this case.
  • Bind mounts require different syntax because the shells parse variables differently: for instance, on Linux/macOS, a command like docker run --rm -it -v "$PWD:/test" docker:latest ash works, but it won’t work in PowerShell on Windows: there you must write docker run --rm -it -v "${PWD}:/test" docker:latest ash, because PowerShell would otherwise treat the colon as part of the variable name.
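You can observe the ownership behavior described above with a quick experiment (a sketch; the exact UIDs you see depend on your host OS and Docker setup, and it requires a running Docker daemon):

```shell
# Create a file on the host, then inspect its ownership from inside a container.
mkdir -p /tmp/mount-test
touch /tmp/mount-test/host-file

# List numeric UIDs/GIDs as seen inside the container:
docker run --rm -v /tmp/mount-test:/data alpine ls -ln /data

# Create a file inside the container as root, then check its owner on the host:
docker run --rm -v /tmp/mount-test:/data alpine touch /data/container-file
ls -ln /tmp/mount-test
# On Docker engine (Linux): container-file is owned by root (UID 0).
# On Docker Desktop (macOS/Windows): it is typically owned by your host user.
```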

If you work on Windows, I strongly suggest that you build Docker images from code that is stored inside a WSL2 distro. This avoids both the bind-mount performance problem, and the issues with ownership and UNIX permissions.

Platform differences (ARM vs. Intel/AMD)

Docker (Desktop and Engine) supports running on many CPU architectures (“platforms” in Docker speak), including Intel/AMD (a.k.a. x64 / AMD64) and ARM-based CPUs.

You can build multi-platform images, using emulation, on a single machine, and you can also associate multiple images (each built for a different platform) to the same Docker image tag.

Unfortunately, things only work if the maintainers of the Docker images (or the tools installed into them) have done their homework, and actually built and pushed the binaries (or images) for both CPU platforms. However, this is not always the case.

To see a very subtle fail in action, do this:

  • Use macOS on an M1-based device (ARM)
  • Start a Docker container from the redhat/ubi8 image
  • Install Terraform CLI with the RedHat package manager (yum) as per the official manual

It won’t work. Why? Because HashiCorp forgot (?) to build ARM64 binaries for Terraform, so yum cannot find them.

However, if you start the Docker container with --platform=linux/amd64, everything works as expected.
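The workaround can be reproduced roughly like this (a sketch; on Apple Silicon the AMD64 container runs emulated via QEMU, so expect a performance penalty):

```shell
# Without --platform, Docker pulls the ARM64 variant on an M1 Mac, and the
# ARM64 yum repositories do not offer a terraform package:
docker run --rm -it redhat/ubi8 bash

# Forcing the AMD64 variant makes the x86_64 package repositories available,
# so installing Terraform per HashiCorp's RHEL instructions succeeds:
docker run --rm -it --platform=linux/amd64 redhat/ubi8 bash
```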

Incompatible Linux kernels

On Linux, processes running in a Docker container make use of the host’s Linux kernel. This kernel basically offers a syscall-interface. Consequently, the binaries packaged into the container make syscalls against the host’s kernel. As this blog post elaborates (in particular: section “The Bugzilla Breakdown”), in rare cases it can happen that the syscall-interface that the containerized binaries expect does not match the one offered by the host kernel, resulting in crashes (or other weird behavior) that are very difficult to diagnose.
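You can see this kernel sharing directly: the kernel release reported inside any container equals the host’s (or, on Docker Desktop, the kernel of the hidden Linux VM), no matter which distribution the image is based on:

```shell
# On a Linux host, both commands print the same kernel release, because the
# container has no kernel of its own:
uname -r
docker run --rm alpine uname -r
```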

Coarse stable tags pull different images

When you run something like docker pull postgres:12 on two different machines at two different points in time, you may get two different Postgres servers that also behave differently, because they use different minor versions. I looked at this problem (and possible solutions) in this article.
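One way to make pulls reproducible is to pin the image digest instead of a coarse tag. A sketch (the digest in the last command is a placeholder that you would replace with the real output of the inspect command):

```shell
# Resolve the digest that the coarse tag currently points to:
docker pull postgres:12
docker inspect --format '{{index .RepoDigests 0}}' postgres:12

# Then reference the image by digest, so every machine gets byte-for-byte
# the same image, regardless of when it pulls:
docker pull postgres@sha256:<digest-from-the-inspect-output>
```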

Different container tools or versions

There are many different tools for running and building containers, and their behavior may differ. Examples where things can go wrong:

  • One team member uses Docker Desktop version X, the other one uses an older version Y of Docker Desktop. The behavior of the two versions differs (e.g. docker compose not yet supporting build secrets in the older version Y)
  • One team member uses Docker Desktop on macOS, the other one uses colima on macOS
  • One team member uses Docker Desktop on Windows, the other one uses Rancher Desktop

Build problems due to platform or tooling differences

Sometimes building a Docker image only works on specific platforms and tools. Two examples:

  • If your Dockerfile contains BuildKit-specific syntax, e.g. RUN --mount=type=secret (see here for details), then you will run into problems with build tools that do not support such features, e.g. kaniko
  • If you want to build your image on Windows, and your Dockerfile runs a bash script that you cloned (on Windows) with Git, its line endings might use CRLF characters (rather than LF). This causes problems when trying to run the image, because the bash interpreter cannot process shell scripts with CRLF line endings.
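For the line-ending problem, one defensive fix is to force LF endings in Git (e.g. via a .gitattributes rule like *.sh text eol=lf), or to strip the CR characters during the build. A minimal sketch of the stripping approach (the file name is illustrative; inside a Dockerfile this would be a RUN instruction):

```shell
# Simulate a script cloned on Windows with CRLF line endings:
printf '#!/bin/sh\r\necho hello\r\n' > entrypoint.sh

# Strip the trailing CR from every line (in a Dockerfile, e.g.
# "RUN sed -i 's/\r$//' /app/entrypoint.sh"):
sed -i 's/\r$//' entrypoint.sh

# Verify that no CR characters remain:
if grep -q $'\r' entrypoint.sh; then echo "still CRLF"; else echo "clean"; fi
# prints "clean"
```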


As I have demonstrated, there are plenty of rather unobvious reasons why Docker-based software won’t build or run in the same way on two different machines. Of course, most of the time, things “just work”. However, once they don’t, it is very helpful to be aware of the kinds of issues that can happen: with the list presented in this article, you now know where to look for the root cause.

Have you encountered any portability issues with Docker? I would be happy to hear about them in the comments below!
