Trusted self-signed certificates between Docker containers

We live in a world where headless setups are becoming more and more common: Vue.js applications that connect to the REST APIs of e-commerce systems like Shopware, or content management systems such as Storyblok, Strapi, and others.

Developing a failure-proof production system requires a rock-solid development environment. However, such setups pose challenges for the network connections between containers.

Specifically, we are talking about Docker containers.
This article shows you how to secure the internal network connections between containers, based on self-signed certificates that are fully trusted inside your containers.

Let's first start with the actual problem.

Why current setups fail

Let's imagine you are building a setup with multiple containers that connect to each other.

services:

  frontend:
    image: ...

  shopware:
    image: ...

The frontend container is an application that connects to the shopware container to fetch products, categories, or other data from the API.

When you try to connect to a different container from within a container, you usually use the service key from the YAML file as the hostname. You can think of it as the host or "IP address" of the container. So a connection from the frontend container to the shopware container could be done like this:

curl http://shopware

As long as a web server is running in the shopware container, this should work.

But what if we use the HTTPS protocol?

This can work if SSL is available. But depending on your general setup, chances are high that you use a self-signed certificate for your Docker containers. If not, just skip this article and be happy :)

So if the certificate is self-signed, the request only works if we skip the SSL verification.
But do we really want to do that in the full application? Is it even possible to configure it?

In other words, the command from above would fail with https; we would need to skip the SSL verification. Here is a curl sample, using the -k flag to skip it:

curl -k https://shopware

Skipping SSL verification with options like -k in curl bypasses important security checks, leaving the connection vulnerable to man-in-the-middle attacks. Although we have a local development environment, we would still like to have a secure connection between our containers.
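
For illustration: in a Node.js-based frontend (which many Vue.js setups are), the equivalent global switch is an environment variable. This is only a hedged example of what "skipping verification" looks like in practice, not a recommendation; server.js stands for whatever your entry point is:

    # DANGEROUS: disables TLS verification for the entire Node.js process.
    # Only ever use this as a temporary workaround in local development.
    NODE_TLS_REJECT_UNAUTHORIZED=0 node server.js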

So let's imagine, we have successfully skipped the verification in our application.
There might still be another problem.

Our application also makes requests to Shopware from within the browser, so on the client side.
These requests are executed on the host system of the developer machine.

The host "http://shopware" is not available there.
Usually something like "http://localhost", or a custom domain (/etc/hosts), is used to connect from your host to your Docker containers.
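
Such a custom domain is just an entry in the hosts file of the developer machine; the domain below is of course only our example:

    # /etc/hosts on the host machine
    127.0.0.1   www.my-shop.dev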

That means our application needs two configurations: one for the client-side hostname (https://localhost) and one for the backend-side hostname (http://shopware). And yes, the client side could even use HTTPS, while the backend usually needs to stay on HTTP because of the problem from above.
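
In practice this often ends up as two environment variables. The variable names below are purely hypothetical, just to illustrate the duplication:

    # .env (hypothetical variable names)
    API_URL_CLIENT=https://localhost   # used by code running in the browser
    API_URL_SERVER=http://shopware     # used by code running inside the container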

I know teams and developers, including myself, who have struggled with setups like this. Debugging it, or even just configuring it, is no fun at all.

The goal

Our goal for this article is to create a simple domain for our Shopware shop (www.my-shop.dev) that can easily be called using HTTPS from our host system, as well as from all containers involved.

This allows us to have a single configuration for our Shopware API endpoint in our application.

Please note, this only applies if you indeed want your backend calls to use that public hostname. If you want to use internal IPs for backend calls on your production system, you will end up with two hostnames anyway.

The solution

The first thing to solve is a domain that works both on the host and in the containers. The main idea is to use something like /etc/hosts on your host machine and something similar inside Docker.

To achieve this, I have already created a blog post about Easy multi-tenant wildcard domain setups with Docker and dnsmasq.

We will use that principle to register our domain www.my-shop.dev on our host system and in our containers.
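
The core of that principle is a single dnsmasq rule that resolves the whole wildcard domain to your local machine (see the linked post for the complete setup; the IP is just an example):

    # dnsmasq configuration: resolve *.my-shop.dev to the local machine
    address=/my-shop.dev/127.0.0.1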

Now to the fun part: making sure our self-signed SSL certificate is actually trusted.

Self-signed certificates are ideal for development environments because they are easy to generate and manage. However, they lack trust in production environments where public Certificate Authorities (CAs) are required for secure communication. For production systems, consider using tools like Let’s Encrypt or commercially issued certificates to ensure compatibility and security.

Let's imagine we either have a proxy like NGINX in our developer setup, or just a web server that uses SSL. In that case, we need a certificate and a key file that we can use to configure our proxy or web server.

These files can be easily created using mkcert.

Once we install mkcert and run it with the name of our domain, we get two files:
one is the certificate and the other is the key.

We can use those files for the SSL directives in our proxy or web server configuration (the full proxy setup is not covered here).
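
As rough orientation only, here is a hedged NGINX sketch of those SSL directives; it assumes the certs directory is mounted into the proxy container at /etc/nginx/certs:

    server {
        listen 443 ssl;
        server_name www.my-shop.dev;

        # files generated by mkcert (mounted into the container)
        ssl_certificate     /etc/nginx/certs/certificate.crt;
        ssl_certificate_key /etc/nginx/certs/certificate.key;

        # ... proxy or root configuration goes here
    }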

    # install mkcert (macOS example; see the mkcert docs for other systems)
    brew install mkcert || true
    # create a local CA and register it in the system trust store
    # (run as your regular user; mkcert asks for elevated rights itself)
    mkcert -install
    # print the directory of the local CA (we will need it later)
    mkcert -CAROOT
    # ---------------------------------------------
    # create a wildcard certificate for our domain
    mkdir -p ./certs/my-shop.dev
    mkcert "*.my-shop.dev"
    mv _wildcard.my-shop.dev.pem ./certs/my-shop.dev/certificate.crt
    mv _wildcard.my-shop.dev-key.pem ./certs/my-shop.dev/certificate.key
    chmod 664 ./certs/my-shop.dev/certificate.crt
    chmod 664 ./certs/my-shop.dev/certificate.key

We now have a certificate.crt and a certificate.key file in our certs directory.

Thanks to the mkcert -install command, these certificates are already trusted on our host machine.

So the next step is to make them trusted in our containers.

The mkcert command creates a rootCA.pem file on our machine.
Our goal is to copy that file into our containers and register it there.

The file is located in the user directory of each developer machine, so every developer has a different path.

To have a single script that works on all machines, I recommend copying the file to the local project directory. There we can work with relative paths.

    ...
    mkcert -CAROOT
    # ---------------------------------------------
    # copy CA root certificate for containers
    cp "$(mkcert -CAROOT)/rootCA.pem" ./certs/my-shop.dev/rootCA.pem
    # ---------------------------------------------
    ...

Now we have 3 files in our certs directory:

  • certificate.crt
  • certificate.key
  • rootCA.pem

Now it's time to bring the rootCA.pem file into our containers.

In the other blog post, I wrote about a dynamic script that executes commands in our containers. We can simply reuse and extend it.

#!/bin/bash

# the CA root certificate to distribute (first script argument)
CA_CERT_PEM="$1"

# dynamically get all running containers
containers=$(docker ps --format '{{.Names}}')

# ...or use a static list of containers
# containers="frontend"
# containers+=" other_container" 

for container in $containers
do
    # ......
    # ...nameserver configuration from other blog post
    # ......

    # copy the CA certificate into the container and register it
    docker cp "$CA_CERT_PEM" "$container":/usr/local/share/ca-certificates/my-ca-cert.crt || true
    docker exec -u root "$container" bash -c "chmod 777 /usr/local/share/ca-certificates/my-ca-cert.crt" || true
    docker exec -u root "$container" bash -c "update-ca-certificates" || true
done

We added a new variable CA_CERT_PEM to the original script (just to make it reusable).

Then we added a copy command for every container, to make sure the file is available inside each container. The sample is from an Ubuntu-based system, where such files belong in the /usr/local/share/ca-certificates directory. This might be different on other systems.
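
For comparison, a hedged sketch of the equivalent steps on a Red Hat based image, where both the directory and the update command differ (mycontainer is a placeholder):

    # RHEL/CentOS/Fedora based images
    docker cp rootCA.pem mycontainer:/etc/pki/ca-trust/source/anchors/my-ca-cert.crt
    docker exec -u root mycontainer update-ca-trust extract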

Please also note that we rename the file to a .crt file, because the update-ca-certificates command only picks up .crt files.

After setting permissions, we just need to run the update-ca-certificates command to register the certificate. Please note that I used permissions 777 in this local container to make sure it always works; 777 is of course generally not recommended ;)

Let's run our script on our existing containers and provide our rootCA.pem file.

bash scripts/configure_containers.sh ./certs/my-shop.dev/rootCA.pem

If you now connect into the application container, you should be able to successfully run a curl command with HTTPS and without the -k flag.

curl https://www.my-shop.dev
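
If you want to double-check the trust chain rather than just the HTTP response, openssl can print the verification result. Run it inside the container; this assumes the openssl package is installed and a reasonably recent OpenSSL that uses the system trust store by default:

    # should end with "Verify return code: 0 (ok)"
    openssl s_client -connect www.my-shop.dev:443 -servername www.my-shop.dev < /dev/null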



And that's it.

Just imagine creating something like a make setup command for your developers that creates the local certificate files on their machines, followed by a make start command that starts the Docker containers and prepares the domain nameserver entries as well as the certificates.
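
A hedged sketch of such a Makefile; the create_certificates.sh script name is hypothetical and would contain the mkcert steps from above (note that Makefile recipe lines must be indented with tabs):

setup:
	bash scripts/create_certificates.sh

start:
	docker compose up -d
	bash scripts/configure_containers.sh ./certs/my-shop.dev/rootCA.pem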

Without any developer interaction, the host and all containers can fully connect to each other by using domains and locally trusted SSL certificates.

For me, this is a must-have for fully automated and complex local developer setups!

Conclusion

It's indeed a bit of work for the initial setup.
But once you've managed that, you can create simple and automated scripts for all your projects.

No more trouble, special configurations, or skipped SSL verification in your applications.
