Tproxy mode as sidecar container not working with Alpine >= 3.13

Hi folks,

I have been playing with the tproxy mode of the ziti tunnel binary as a sidecar container for a “to-be-zitified” application container in a Kubernetes cluster. I have used this example as a base: Kubernetes Sidecar Proxy | OpenZiti.

I have noticed that with this setup the ziti-tunnel container itself was able to access ziti services just fine, but the other one (the “workload” container) was not:

On the ziti-tunnel container:

[root@ziti-tunnel-sidecar-demo-8549567775-64qwr netfoundry]# curl https://demo.ziti
<html><title>Test site</title><body><h1>This is a test site only</h1></body></html>

On the workload container:

bash-5.1# curl https://demo.ziti
curl: (6) Could not resolve host: demo.ziti

Of course I checked the obvious things first and saw that both containers have the same /etc/resolv.conf:

nameserver 127.1.2.3
nameserver 10.43.0.10
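
For reference, a quick way to compare this from outside the pod is something like the following (the pod name is the one from my demo deployment, and the container names are just placeholders that will differ in your setup):

# dump the resolver config of both containers in the pod
kubectl exec ziti-tunnel-sidecar-demo-8549567775-64qwr -c ziti-tunnel -- cat /etc/resolv.conf
kubectl exec ziti-tunnel-sidecar-demo-8549567775-64qwr -c testclient -- cat /etc/resolv.conf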

In order to debug further I added another container with network debugging tools to the pod. With these I could see that the container has access to the DNS server the tunneler exposes and can resolve the service names (tested with dig), but somehow gethostbyname() in this container seemed to behave differently than in the ziti-tunnel container. The next suspect was Alpine Linux, which both the test container from the example setup and my network debugging container are based on. So I added yet another container to the pod, this time based on an Ubuntu image, and look at that:
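
To make the difference visible, something along these lines can be run in the debugging container (dig comes from bind-tools and getent from musl-utils on Alpine; the service name is the one from my setup):

# install the tools in the debugging container
apk add --no-cache bind-tools musl-utils

# asking the tunneler's nameserver directly works
dig +short @127.1.2.3 demo.ziti

# going through the libc resolver, i.e. the gethostbyname()/getaddrinfo() path curl uses, is what fails on Alpine >= 3.13
getent hosts demo.ziti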

[root@ziti-tunnel-sidecar-demo-8549567775-64qwr netfoundry]# curl https://demo.ziti
<html><title>Test site</title><body><h1>This is a test site only</h1></body></html>

I then checked with tcpdump what happens and found the following:

Issuing curl with the URL of a ziti service produces the following DNS packets on the two different OS containers.

On Ubuntu (and the ziti router):

18:28:01.988524 lo    In  IP 127.0.0.1.51825 > 127.1.2.3.53: 49846+ A? demo.ziti. (46)
18:28:01.988554 lo    In  IP 127.0.0.1.51825 > 127.1.2.3.53: 44676+ AAAA? demo.ziti. (46)
18:28:01.988725 lo    In  IP 127.1.2.3.53 > 127.0.0.1.51825: 44676 Refused- 0/0/0 (46)
18:28:01.988783 lo    In  IP 127.1.2.3.53 > 127.0.0.1.51825: 49846*- 1/0/0 A 100.64.0.5 (90)
18:28:01.989799 lo    In  IP 127.0.0.1.34308 > 127.1.2.3.53: 28123+ [1au] AAAA? client.sdn.my.org. (55)
18:28:01.989806 lo    In  IP 127.0.0.1.44416 > 127.1.2.3.53: 60272+ [1au] A? client.sdn.my.org. (55)
18:28:01.989917 lo    In  IP 127.1.2.3.53 > 127.0.0.1.44416: 60272 Refused- 0/0/0 (44)
18:28:01.989939 lo    In  IP 127.1.2.3.53 > 127.0.0.1.34308: 28123 Refused- 0/0/0 (44)
18:28:01.990057 eth0  Out IP 10.42.4.219.43862 > 10.43.0.10.53: 48218+ [1au] AAAA? client.sdn.my.org. (55)
18:28:01.990073 eth0  Out IP 10.42.4.219.51508 > 10.43.0.10.53: 36060+ [1au] A? client.sdn.my.org. (55)
18:28:02.004531 eth0  In  IP 10.43.0.10.53 > 10.42.4.219.43862: 48218 0/1/1 (149)
18:28:02.008892 eth0  In  IP 10.43.0.10.53 > 10.42.4.219.51508: 36060 1/0/1 A 214.123.321.111 (97)

We can see how nicely it works. Linux behaves as we expect: first it asks the first DNS server it knows (the tproxy's) for both A and AAAA records, gets a "Refused" for the AAAA and an address for the ziti service. Next the tunneler itself wants to talk to its public router, which causes another set of requests to the first DNS server entry. Since ziti-tunnel doesn't know how to answer these, it replies "Refused", so Linux falls back to the second DNS server entry, the IP of CoreDNS, which finally forwards the query to the ISP's DNS server or wherever.

Done! Nice!

Now let’s look at what happens if the Alpine container tries the same:

18:37:11.853899 lo    In  IP 127.0.0.1.55971 > 127.1.2.3.53: 9091+ A? demo.ziti. (46)
18:37:11.853933 eth0  Out IP 10.42.4.219.55971 > 10.43.0.10.53: 9091+ A? demo.ziti. (46)
18:37:11.854025 lo    In  IP 127.0.0.1.55971 > 127.1.2.3.53: 9388+ AAAA? demo.ziti. (46)
18:37:11.854035 eth0  Out IP 10.42.4.219.55971 > 10.43.0.10.53: 9388+ AAAA? demo.ziti. (46)
18:37:11.854103 lo    In  IP 127.1.2.3.53 > 127.0.0.1.55971: 9091*- 1/0/0 A 100.64.0.5 (90)
18:37:11.854167 lo    In  IP 127.1.2.3.53 > 127.0.0.1.55971: 9388 Refused- 0/0/0 (46)
18:37:11.886745 eth0  In  IP 10.43.0.10.53 > 10.42.4.219.55971: 9388 NXDomain 0/1/0 (140)
18:37:11.889368 eth0  In  IP 10.43.0.10.53 > 10.42.4.219.55971: 9091 NXDomain 0/1/0 (140)

And here we see what's going wrong. Same as before, our Alpine container asks the tproxy's DNS server for an entry for the ziti service, but it also asks the cluster's CoreDNS at the exact same time. CoreDNS cannot find an authoritative name server for this domain and returns NXDomain AFTER the tproxy's DNS server has already handed back the IP. Alpine takes the last answer as the result, and that's why we see "Could not resolve host".

Digging around on the web I found this issue: Only first response from DNS is used · Issue #118 · alpinelinux/docker-alpine · GitHub, which describes almost exactly the problem above, just the other way around. And indeed, changing the image of my test container from alpine:3.13 to alpine:3.12.2 makes my gethostbyname() work again:

/ # curl https://demo.ziti
<html><title>Test site</title><body><h1>This is a test site only</h1></body></html>
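
If you just need the demo working again, a quick way to pin the workload image back is something like this (the deployment and container names are placeholders for my demo setup):

# roll the workload container back to Alpine 3.12 until the resolver behavior is sorted out
kubectl set image deployment/ziti-tunnel-sidecar-demo testclient=alpine:3.12.2
kubectl rollout status deployment/ziti-tunnel-sidecar-demo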

So, in Alpine 3.13 something has changed in the libc resolver (musl, in Alpine's case) that makes DNS requests behave differently from what I believed. This breaks the idea of the tproxy sidecar as it was originally designed.

I can imagine workarounds like an additional sidecar container running a CoreDNS of its own, forwarding all ziti-relevant domain names to the DNS server provided by ziti-tunnel and everything else to the Kubernetes cluster's CoreDNS, but that's really not a nice solution. Another option could be to change the ziti-tunnel DNS server behavior so that it forwards queries it cannot answer itself to an upstream DNS server (the Kubernetes CoreDNS); then both the ziti-tunnel container and the workload container would only ever ask a single DNS server.
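
For the first option, a rough sketch of the Corefile such a forwarder sidecar could use might look like this (the ziti zone, the tunneler's resolver IP and the cluster DNS IP are just taken from my setup above, and the Corefile path is only an example):

# send the ziti intercept domain to the tunneler's resolver, everything else to cluster DNS
cat > /etc/coredns/Corefile <<'EOF'
ziti:53 {
    forward . 127.1.2.3
}
.:53 {
    forward . 10.43.0.10
}
EOF

The workload container's /etc/resolv.conf would then only point at this single forwarder, so the parallel-query behavior wouldn't matter anymore.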

What do you think or do you have an idea for a better workaround?

Cheers

Christian


Nice sleuthing there, @ChristianAnton.

I’ll see if I can find a way to make it work with Alpine 3.13. Appreciate you writing this up!

Have you been able to find any viable way to make the tproxy sidecar work with this new musl resolver behavior in Alpine so far?

I gathered from this article that the patches for Alpine to address this issue are expected within months. Meanwhile, consider switching from Alpine to Alpaquita Linux, which BellSoft created to address Alpine issues like parallel DNS resolution breaking nameserver precedence. They offer a container image with MUSL (like Alpine) and a glibc variant.

These are the most recent pinned release tags for each (amd64 only):

I found this article helpful for additional context.


MUSL released a flurry of DNS-related improvements that are now available in Alpine 3.18. I’m optimistic this will solve our problem with TPROXY!
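
A quick way to verify that, once the images pick up the new musl, could be to bump the workload container and repeat the original curl test (deployment and container names are placeholders for the demo setup):

# bump the workload container to Alpine 3.18 and re-run the curl test from above
kubectl set image deployment/ziti-tunnel-sidecar-demo testclient=alpine:3.18
kubectl rollout status deployment/ziti-tunnel-sidecar-demo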