Daniel Gartmann, Security Specialist

Tech Focus Tue 10th July, 2018

Quick wins to secure your Docker containers

With the rise of Docker, we see loads of development teams using this technology to run workloads in production. But precisely because those workloads are in production, it's worth asking: are they actually secure?

Many people believe that containers provide a strong security boundary, but that's a common misconception, as explained by Google on the GCP Blog.

We’d like to share some security quick wins we’ve introduced at Equal Experts that will significantly reduce the attack surface of your Docker containers and thus reduce the likelihood of a vulnerability being exploited.

Use a stripped-down base image

Linux distributions come in many flavours, ranging from fully-fledged desktop distributions to headless distributions. It’s important to ship only what you need in order to reduce the attack surface.

Any unnecessary package can be used against you. Imagine how useful an attacker who gains access to your container would find tools such as nmap to map your network, curl to fetch a piece of malicious source code (in order to bypass any WAF blocking executable binaries) and gcc to compile it. We therefore recommend using a stripped-down distribution such as Alpine Linux.

Note that the same principle also applies to the runtime and application layer!

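As a minimal sketch, a Dockerfile built on Alpine Linux might look like this (the application and package names are purely illustrative):

    # Start from a minimal Alpine base rather than a full distribution
    FROM alpine:3.7

    # Install only the packages the application genuinely needs
    RUN apk add --no-cache ca-certificates

    # Copy in the application binary
    COPY target/app /usr/local/bin/app

    ENTRYPOINT ["/usr/local/bin/app"]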

Pull images by hash

Docker, by default, pulls the latest version of an image (the :latest tag). It's therefore good practice to pull by hash instead, which uniquely identifies an image and ensures its integrity, since a compromised image would usually be pushed as the new latest version.

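For example (the digest below is made up for illustration; use the digest reported for the image you have actually verified):

    # Pulling by tag only guarantees you get whatever currently carries that tag
    docker pull alpine:3.7

    # Pulling by digest pins the exact image content
    docker pull alpine@sha256:8c03bb07a531c53ad7d0f6e7041b64d81f99c6e493cb39abba56d956b40eacbc

    # The same digest syntax works in a Dockerfile
    FROM alpine@sha256:8c03bb07a531c53ad7d0f6e7041b64d81f99c6e493cb39abba56d956b40eacbc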

Only use vetted images

Anyone can publish Docker images on public registries such as DockerHub. You can therefore find the good, the bad and the ugly, and it's vital to investigate who is behind a specific image. DockerHub provides a blue tick for official repositories, and you can find detailed information about their vetting requirements on this page.

Check for vulnerabilities

Images within official repositories are frequently scanned using Docker Cloud's Security Scanning service, and you can get access to the results of those scans for free by signing up to DockerHub. You can also use tools like Clair and Docker Bench to scan images locally (including your own).

You’ll immediately be able to see whether there are any known vulnerabilities in the images that you intend to use and also their respective severity. The history of those scans also provides a valuable insight into how well an image is being maintained by looking at how quickly security issues have been fixed in the past.

Read-only file system

Many attacks rely on persisting an exploit to disk while the attack is being carried out. To reduce the likelihood of successful exploitation, you can set the container's filesystem to read-only.

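A minimal sketch (the image name is illustrative); if the application genuinely needs somewhere to write temporary files, an in-memory tmpfs can be added:

    # Run the container with a read-only root filesystem
    docker run --read-only my-app:1.0.0

    # Provide scratch space as tmpfs if the application must write temporary files
    docker run --read-only --tmpfs /tmp my-app:1.0.0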

Do not run as root

Hardening best practices that apply to legacy stacks also apply to containerised applications. Just as you wouldn't run a non-containerised application under the root account in production, you shouldn't run applications as root inside containers either.

Unfortunately, by default Docker runs the process as root. It's therefore crucial to execute the process as a user with limited privileges. To achieve that, you have to create a user and a group and then assign them to the process, as shown below.

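A minimal Dockerfile sketch using Alpine's addgroup and adduser (the user, group and application names are illustrative):

    FROM alpine:3.7

    # Create an unprivileged system group and user for the application
    RUN addgroup -S appgroup && adduser -S appuser -G appgroup

    COPY target/app /usr/local/bin/app

    # Everything from here on, including the container process, runs as appuser
    USER appuser

    ENTRYPOINT ["/usr/local/bin/app"]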

Alternatively, use the guest user (405), which is already baked into the Alpine Linux image:

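For example:

    FROM alpine:3.7

    COPY target/app /usr/local/bin/app

    # 405 is the UID of the guest user that ships with Alpine Linux
    USER 405

    ENTRYPOINT ["/usr/local/bin/app"]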

Do not use the --privileged flag

As previously mentioned, the default behaviour of Docker is to run the process as root, but it's important to point out that the root user within the container has a restricted set of kernel capabilities enabled and thus has far fewer privileges than the root user on the host.

By using the --privileged flag you enable all kernel capabilities, which allows a process running within a Docker container to bypass most of the controls, such as kernel namespaces and cgroup limits. In other words, by adding this flag you give an attacker an easy path to break out of the container and compromise the whole host.
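If a container genuinely needs an elevated capability, grant just that one capability rather than reaching for --privileged. A sketch (the image name and capability are chosen for illustration):

    # Dangerous: enables every kernel capability and removes most isolation
    docker run --privileged my-app:1.0.0

    # Better: drop everything, then add back only what is strictly required
    docker run --cap-drop ALL --cap-add NET_BIND_SERVICE my-app:1.0.0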

Do not set credentials in environment variables

Environment variables should never be used to store sensitive data: they are likely to be accidentally exposed in logs or dumps, and they are available to any process running within the system, which violates the principle of least privilege.

With containers, best practice is to mount credentials as a file at deploy time; this is usually a capability provided by the container orchestration layer, such as Kubernetes or Docker Swarm.
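With Docker Swarm, for example, a secret is created once and then surfaced to the service as a file under /run/secrets rather than as an environment variable (the names below are illustrative):

    # Store the credential in the cluster's secret store
    printf "s3cr3t-db-password" | docker secret create db_password -

    # The running service sees it as the file /run/secrets/db_password
    docker service create --name my-app --secret db_password my-app:1.0.0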

For highly sensitive workloads running in the cloud, we recommend encrypting credentials using a Key Management Service such as AWS KMS, Azure Key Vault or Cloud KMS (GCP) and to only decrypt them at runtime using their respective client SDKs. In this way your sensitive credentials will only exist in cleartext in memory.
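As a rough illustration of the idea using the AWS CLI (in a real application you would make the equivalent calls through the KMS client SDK; the key alias and file names are made up):

    # Encrypt the credential once, at build or deploy time
    aws kms encrypt --key-id alias/my-app --plaintext fileb://db_password.txt \
        --output text --query CiphertextBlob | base64 --decode > db_password.enc

    # Decrypt only at runtime, so the cleartext exists nowhere but in memory (and, here, stdout)
    aws kms decrypt --ciphertext-blob fileb://db_password.enc \
        --output text --query Plaintext | base64 --decode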

Leverage available tools

Docker Bench for Security is an open source tool that checks your Docker host, images, runtime configuration and more against common industry best practices for deploying Docker containers securely in production.
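At the time of writing, the simplest way to run it is to clone the repository and execute the script directly on the Docker host:

    git clone https://github.com/docker/docker-bench-security.git
    cd docker-bench-security
    sudo sh docker-bench-security.sh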

Conclusion

By applying these basic security quick wins, you will significantly improve the security of your containers with minimal effort.

Thanks to my colleague Dan Mitchell for his help writing this.