Getting published Docker container ports to work with IPv6

This server was overdue for a migration to new hardware, and I used the opportunity to make its setup reproducible by basing it on Docker containers. This allowed me to test everything locally, so setting things up on the real server was merely a matter of an hour. Some issues didn’t show up locally however, most importantly Docker’s weird IPv6 support: everything worked just fine when the server was accessed via IPv4, but accessing it via an IPv6 address caused connections to hang. I hit this issue with Docker 1.13.1 originally, and updating to Docker 17.12 didn’t change anything. Figuring this out took me quite a while, so I want to sum up my findings here.

First, it is important to know that Docker currently has two entirely different mechanisms for implementing published ports. The default is the userland proxy, an application listening on a port and forwarding any incoming traffic to the respective container. The downside of this solution: the proxy has to open a new connection to the container, so the container no longer sees the remote address of the real client but merely the proxy’s address. This might be acceptable for some applications, but a web server running inside a container, for example, needs to log real remote addresses.
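
You can watch the proxy in action yourself. A minimal check, assuming an arbitrary nginx container with port 8080 published (the image, port and log format are merely examples):

# Publish port 8080 of an nginx container
docker run -d --name web -p 8080:80 nginx

# With the userland proxy enabled, a docker-proxy process
# listens on the published port
ps aux | grep [d]ocker-proxy

# A request from the host reaches the container from the bridge
# gateway (e.g. 172.17.0.1) instead of the real client address
curl -s http://localhost:8080/ > /dev/null
docker logs web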

So you will often see recommendations to disable the userland proxy, which was even supposed to become the default setting (this hasn’t happened yet because of stability issues). In this mode, Docker (at least on Linux) uses iptables to forward incoming traffic to the container, the way a router would. You will still see published ports being held open on the host by dockerd, but that’s merely a placeholder meant to prevent other applications from listening on the same port. In reality, traffic destined for the published ports should never reach dockerd. Except that for IPv6 traffic it does, because Docker only sets up iptables forwarding rules for IPv4 traffic.
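
For reference, disabling the proxy is a daemon-wide setting: either start dockerd with --userland-proxy=false or put the option into /etc/docker/daemon.json and restart the daemon:

# /etc/docker/daemon.json
{
  "userland-proxy": false
}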

You can see the IPv4 rules created by Docker if you run iptables -nL; running ip6tables -nL on the other hand will show no rules for IPv6 traffic. My understanding is that this isn’t due to implementation complexity; adding the same set of rules for IPv6 would be rather trivial. The official reason for handling IPv6 traffic differently is rather that IPv6 addresses aren’t supposed to be used behind a NAT. So instead of routing all traffic through the host’s external IP address, one is supposed to give containers public IPv6 addresses and direct the traffic to those directly. Needless to say, this inconsistency between IPv4 and IPv6 complicates the setup quite significantly when we are talking about a single host running multiple containers, not to mention that it potentially exposes container internals to the outside world. The official documentation is also hopelessly useless and merely confuses matters.
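
The difference is most obvious in the nat table, where the port forwarding rules live (the DOCKER chain is created by dockerd, its contents depend on your containers):

# IPv4: the DOCKER chain contains a DNAT rule per published port
iptables -t nat -nL DOCKER

# IPv6: the corresponding chain simply doesn't exist
ip6tables -t nat -nL DOCKER
# ip6tables: No chain/target/match by that name.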

Luckily, community members have stepped in and devised a solution that just makes published ports work with IPv6. First of all, you need to make sure that IPv6 is enabled on the network used by your containers. If you are using the default network, you would do it like this in docker-compose.yml:

version: "2.1"

networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/16
        - subnet: fd00:dead:beef::/48
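
After docker-compose up you can verify that the network really got both subnets assigned. The network name is derived from your project directory; myproject_default below is merely an example:

# should print both subnets: 172.20.0.0/16 fd00:dead:beef::/48
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' myproject_default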

And then you need to add ipv6nat as a privileged container that will take care of setting up the IPv6 forwarding rules:

services:
  ipv6nat:
    container_name: ipv6nat
    restart: always
    image: robbertkl/ipv6nat
    privileged: true
    network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /lib/modules:/lib/modules:ro
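
Once this container is running, the rules that were missing above should appear on the host (again, the exact entries depend on your published ports):

# ipv6nat mirrors Docker's IPv4 NAT rules for IPv6,
# so the DOCKER chain exists in ip6tables now
ip6tables -t nat -nL DOCKER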

There you go, it just works. Except that there is one more catch: don’t test your IPv6 setup on the ::1 address, it won’t work. The container will see a request coming from ::1 and will try sending a reply to that address – meaning that it will reply to itself rather than to the host. Using your external IPv6 address for testing will do.
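
A quick test could look like this, with 2001:db8::1 standing in for your server’s actual external address and port 80 for a published port:

# hangs: the container replies to ::1, i.e. to itself
curl -g -6 'http://[::1]/'

# works: traffic to the external address is forwarded properly
curl -g -6 'http://[2001:db8::1]/'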

Comments

  • Murphy

    Just stumbled upon your article, but I must say: port forwarding, no matter how it is done (userland proxy, iptables, load balancer, reverse proxy etc.), is actually not a good thing but more of a terrible workaround. IPv6 helps you get rid of this complexity (yes, there are even use cases where an application layer gateway is just not available/advisable): you have more addresses available than one will probably ever use on a single host (64 bits or even more!).

    Now of course it is a bit more work to lay out and administer your internal networks and routing properly, but that is just what routing was invented for, and for very good reasons (separation of concerns) it has its own layer in the OSI stack. There should be no problem with exposing too much of your container details when using IPv6: normally a container should not expose more ports than needed (e.g. http(s) should only expose ports 80 and 443, maybe 8080 as a "last resort"). And of course you will have to make sure that, for example, database services are not exposed to the outside world, but this is just a few lines of additional iptables rules, and you may even get the opportunity to decide on a per-project (custom network) basis which ports are exposed by a whole network segment. Inside the same segment you may use any ports you like/need.

    In my opinion that's the price you pay for independent containers and microservices: you will need a bit more time and have a bit more complexity on the routing/network layer. It may feel uncommon or even strange in the beginning, but once you get your structures right you will wonder how complicated things were before.

    Wladimir Palant

    I have already stumbled upon this idea. However, I don't want to expose my containers to the outside world directly. It shouldn't make a difference, but I don't really know whether it does, particularly with containers that I didn't create myself.