Provides a solution to access the systemd-resolved daemon running on the host system from docker and other containerization and virtualization solutions.
The major problem to solve is that software like docker (containerd) currently doesn't have a good integration to reuse the host's DNS configuration. When you have a setup using split DNS, or work in an environment where you frequently switch network connections (VPN on/off, moving from your desk with the docking station to meeting rooms), you will likely run into DNS hostname resolution issues in your docker containers.
Here is a brief description of what happens and why.
When starting a container, the current version of docker on ubuntu 20.04 checks whether systemd-resolved is installed, which is usually the case. Since libraries historically use the DNS servers in `/etc/resolv.conf` and don't know about all the logic around having multiple networks connected to a system, systemd-resolved does a trick by running a local DNS server on `127.0.0.53`. This DNS server does all the magic; however, since docker containers are usually isolated from the host's network using their own separate loopback device, they cannot reach the server on `127.0.0.53`. The docker developers decided to go the simple route and use a file that systemd-resolved generates for backward compatibility: instead of copying `/etc/resolv.conf` into the container, docker copies `/run/systemd/resolve/resolv.conf`. This works for simple environments where all DNS servers are equivalent and able to answer the requests, but not for more complex setups. One might think that if resolution fails on one server, the next DNS server is asked, but that's not how it actually works: there is a difference between not responding and answering with a not-found message. So if the first server is the one of your home network and the second one comes from the company (set on the VPN), your home DNS server will simply return not found for company names, and the system will take that answer without asking the next server. As a result, when accessing servers from inside a docker container, you will see DNS working for the internet and your home servers, but not for the company servers. Depending on what you are doing this might not be an issue, but if you are working with servers only reachable via VPN, you won't get an IP address for their hostnames. You could use IP addresses directly, but that's usually not what you want.
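You can see the difference between the two files yourself on an Ubuntu host running systemd-resolved; the nameserver entries shown in the comments below are only illustrative and will differ per network:

```sh
# The stub file most software reads; on Ubuntu it is a symlink pointing
# at systemd-resolved's local stub listener:
cat /etc/resolv.conf
# nameserver 127.0.0.53

# The compatibility file that docker copies into containers; it lists the
# real upstream servers of the currently connected networks:
cat /run/systemd/resolve/resolv.conf
# nameserver 192.168.0.1
# nameserver 10.0.8.1
```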
Another problem occurs when you switch networks after a container was created: docker won't update the `/etc/resolv.conf` inside the container unless you restart it. While I haven't seen this become a problem in my environment, it's still something that could totally wreck your demonstration when you prepared everything at your desk and move to the meeting room, just to be greeted by a message that a system can no longer be reached.
To make this work, it would be nice to be able to access systemd-resolved on the host, which knows everything about the network configuration.
While searching for a solution, I failed to find anything that is simple to set up and works on ubuntu. Furthermore, surprising as it might be, the following groups of developers have failed to provide a simple solution:
- docker: there is an open issue about this, but nobody was able to resolve it
- zscaler: they failed to provide a solution, and when I finally reached somebody through their support who understood the details of their linux software, they were not very helpful anymore, since I had already found this workaround and it's a docker integration issue.
I hope somebody else can benefit from this little project when they have to deal with older systems that don't ship an up-to-date systemd-resolved.
A docker service is exposed via a bridge interface on the host that is visible to the networks used by VMs, containers, etc. It listens on port 53 and answers requests by reaching out to the systemd-resolved daemon on the host.
There are several ways to solve this; I ended up using dnsmasq running inside a docker container with host networking. The alternatives I considered:
- configuring the `DNSStubListenerExtra` option provided by recent versions of systemd-resolved; this option isn't available on older distributions (see the sketch after this list)
- using the existing https://github.com/flaktack/systemd-resolved-docker project; it has a debian folder, but that part is not implemented, so it only works for rpm-based systems
- using nginx; while I got that working, nginx is a more generic tool that requires some tuning, since it's not designed to handle a massive amount of DNS requests (I had to increase the worker threads and the number of file handles)
- using nginx and dnsmasq installed on the host directly from the system's package sources; while possible, this requires handling configuration conflicts with existing installations used for other purposes. One solution would have been to provide a second configuration and separate systemd service files, but overall I felt that's too complicated
- using dnsmasq as a standalone binary with a separate systemd service configuration, installed via the OS package system; this requires building an installation package for each distro out there, or at least a shell script that can handle the variations
- installing a dnsmasq-based solution via snap or flatpak, something I might consider based on feedback; however, it's probably not worth it, since they are just different technologies doing the same thing, and I don't want to get into a flatpak vs. snap discussion when docker is by far more commonly used for things that need to run across different distributions
- systemd-service based solutions; seeing that docker basically provides all I need, including automatic startup via the restart policy `unless-stopped`, I don't even have to deal with the init system on the host
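For reference, on a distribution with a recent enough systemd-resolved, the first alternative would look roughly like this sketch (the address reuses the bridge IP used later in this README):

```ini
# /etc/systemd/resolved.conf -- requires a systemd-resolved new enough
# to support DNSStubListenerExtra (added in systemd v247)
[Resolve]
DNSStubListenerExtra=100.65.0.1
```

Followed by `sudo systemctl restart systemd-resolved` to apply the change.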
- you need to have docker installed
- not available before docker is running; there might be cases where you want to expose systemd-resolved even earlier
- not reliably available to other docker containers during startup; while the exposer container starts, it takes a while until the service is working, which will prevent other containers from using it. Other containers might even start before the exposer container and fail to resolve hostnames. I consider this a non-problem, since dealing with network failures during startup is a very common issue, also when using separate database containers.
- runs almost everywhere; using docker makes it damn simple to install on almost any distribution, especially if you put the docker image on your company's local docker registry
- almost no new code; I created a container containing dnsmasq with a base configuration and an entrypoint script that creates the bridge and adds the settings to the config before running it. I didn't have to write much code to get this working. In other words, it's basically scripts for building a container and running it (a minimal sketch of the idea follows below).
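For illustration, here is a minimal sketch of such an entrypoint. It is not the exact script shipped in `docker/entrypoint.sh`, just the same idea, using the variable names documented later in this README:

```sh
#!/bin/sh
# Defaults match the values described in the configuration section below;
# all three can be overridden via environment variables.
BRIDGE_NAME="${BRIDGE_NAME:-br-resolv}"
BRIDGE_IP_ADDR_SUBNET="${BRIDGE_IP_ADDR_SUBNET:-100.65.0.1/24}"
DNS_SERVER_IP_ADDR="${DNS_SERVER_IP_ADDR:-127.0.0.53}"

# Create the bridge on the host (needs --network=host and --cap-add NET_ADMIN)
# and assign the configured address:
ip link add "$BRIDGE_NAME" type bridge 2>/dev/null || true
ip addr add "$BRIDGE_IP_ADDR_SUBNET" dev "$BRIDGE_NAME" 2>/dev/null || true
ip link set "$BRIDGE_NAME" up

# Listen only on the bridge address (--bind-interfaces avoids colliding with
# systemd-resolved on 127.0.0.53:53) and forward every query to the host
# resolver. The -k passed to the image in the run examples maps to dnsmasq's
# --keep-in-foreground.
exec dnsmasq --keep-in-foreground --bind-interfaces \
  --listen-address="${BRIDGE_IP_ADDR_SUBNET%/*}" \
  --no-resolv --server="$DNS_SERVER_IP_ADDR"
```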
There are two options: either build it from scratch, or use my prebuilt image here on github.
Note: this is a free account which has only limited traffic on the github package registry. I kindly ask everybody to pull the image only once and put it into your own registry, especially if you are using it in your company.
You need to have `bash`, `make`, `curl` and `docker` installed to get started, as well as whatever is needed to get the content of this repository.
- Go to `dnsmasq/docker`, run `make`.
- Go to `docker`, run `make`.
Note: there is already a `build` subdirectory for compiling a dnsmasq binary for a potential non-docker based solution; this is currently not used at all.
The minimum needed to run the solution is docker and a DNS service such as systemd-resolved to which the DNS requests are delegated.
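To check that systemd-resolved is actually the service behind your host's DNS, you can run:

```sh
systemctl is-active systemd-resolved   # should print "active"
resolvectl status                      # shows the per-link DNS configuration
```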
Simply go into the `docker` subdirectory and run `make run-as-service` to start the container and register it to be restarted automatically.
For testing, I also provide a simple `make run`.
If you have a local docker registry in your company, you might put the docker image there. This turns installation and startup into a one-liner:
docker run -d --restart unless-stopped --cap-add NET_ADMIN --name systemd-resolved-exposer --network=host ghcr.io/heikoboettger/systemd-resolved-exposer:1.0.0 -k
When the container starts, it will create a bridge `br-resolv` on the host with the IP `100.65.0.1` and subnet `/24`. Forwarding will be configured towards `127.0.0.53`, where systemd-resolved is usually running. All three settings can be overridden via the variables found in `docker/entrypoint.sh`.
Note: don't forget to include the subnet mask when defining the bridge IP option. I tried to choose a variable name that makes that clear, but I'd better mention it to be safe. There are not many checks to confirm the values are correct.
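For example, to override all three settings at once (the values here are just placeholders; note the `/24` included in the bridge address):

```sh
docker run -d --restart unless-stopped --cap-add NET_ADMIN \
  --name systemd-resolved-exposer --network=host \
  -e "BRIDGE_NAME=br-mydns" \
  -e "BRIDGE_IP_ADDR_SUBNET=100.70.0.1/24" \
  -e "DNS_SERVER_IP_ADDR=127.0.0.53" \
  ghcr.io/heikoboettger/systemd-resolved-exposer:1.0.0 -k
```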
When starting the container, you have to wait a few seconds until everything is set up. If you haven't changed the default settings, you can test it using:
nslookup google.de 100.65.0.1
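If everything works, the answer should come from the bridge address, roughly like this (the resolved address is illustrative):

```
Server:         100.65.0.1
Address:        100.65.0.1#53

Non-authoritative answer:
Name:   google.de
Address: 142.250.74.195
```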
There is no code that sets up the exposer for use in docker or VMs, but here is an example of how to configure docker on ubuntu 20.04 to use systemd-resolved from the host system via the forwarding set up on the bridge:
- Open `/etc/docker/daemon.json` in your favourite editor as root.
- If the file wasn't there, set the following content; otherwise change or add the `dns` property accordingly:
{
"dns" : [ "100.65.0.1" ]
}
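Afterwards, restart the docker daemon so the new DNS setting is applied to newly created containers:

```sh
sudo systemctl restart docker
```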
When using the zscaler client 1.4.0.79 on linux with the ZIA (internet security) component enabled, you may face issues such as slow loading of webpages. I even observed timeouts and situations where a webpage wouldn't load until I hit the browser's reload button multiple times. Investigating the issue, it turned out that the zscaler client registers three DNS servers on the zcctun bridge, but only one of them is fully working. In a discussion with zscaler support it was revealed that there is a 1:1 mapping between the DNS servers running on the bridge and the DNS servers on the client's internet uplink. As a result, the secondary and tertiary DNS entries don't work for resolving internet requests when your home network has only one DNS server. They are working on a fix, but until then we need a workaround.
If you have a common setup with a simple router, you may be able to configure it to provide two more external DNS servers. However, if you do that, name resolution for internal computers will only work via the primary server, assuming that's your router's DNS. That again results in inconsistent behavior and situations where reaching your computers by hostname only works randomly.
As a workaround, you can set up an additional bridge with two systemd-resolved-exposer instances running on it which, instead of redirecting requests to systemd-resolved on your host, are reconfigured to send them to your router.
Here are the steps you usually have to perform in a home network where DHCP and DNS are provided directly by your home router:
- Start two exposer instances on a shared bridge, both forwarding to your router (note the two distinct container names):

```sh
YOUR_ROUTERS_IP="192.168.0.1" # change this to your router's IP address

docker run -d --restart unless-stopped --cap-add NET_ADMIN \
  --name systemd-resolved-exposer-1 --network=host \
  -e "BRIDGE_NAME=bridge-redirect" -e "BRIDGE_IP_ADDR_SUBNET=100.66.0.1/24" \
  -e DNS_SERVER_IP_ADDR="${YOUR_ROUTERS_IP}" \
  ghcr.io/heikoboettger/systemd-resolved-exposer:1.0.0 -k

docker run -d --restart unless-stopped --cap-add NET_ADMIN \
  --name systemd-resolved-exposer-2 --network=host \
  -e "BRIDGE_NAME=bridge-redirect" -e "BRIDGE_IP_ADDR_SUBNET=100.66.0.2/24" \
  -e DNS_SERVER_IP_ADDR="${YOUR_ROUTERS_IP}" \
  ghcr.io/heikoboettger/systemd-resolved-exposer:1.0.0 -k
```
- Open the settings of your uplink, disable the option to automatically configure DNS servers via DHCP, and manually define your router's IP, the first bridge IP `100.66.0.1` and the second bridge IP `100.66.0.2`. Save and reload the network configuration (or reboot if that is easier for you).
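Before switching your uplink over, you can verify that both instances answer (the hostname is just an example):

```sh
nslookup google.de 100.66.0.1
nslookup google.de 100.66.0.2
```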
This solution is provided as is, without any warranty, and is used at your own risk. I didn't spend time adding checks at the startup of the container to suppress error messages, such as when the bridge already exists, or to detect bridge-name and IP address conflicts.