I am sure I am doing something wrong. When I add the ziti-tunnel sidecar my other container in the pod can’t see one of my other microservices in the same namespace. Here is my sidecar definition with the dnsConfig.
- image: netfoundry/ziti-tunnel:latest
  name: ziti-tunnel
  env:
  - name: NF_REG_NAME
    value: tunnel-sidecar
  volumeMounts:
  - name: tunnel-sidecar-jwt
    mountPath: "/netfoundry/tunnel-sidecar.jwt"
    subPath: tunnel-sidecar.jwt
    readOnly: true
  - name: ziti-tunnel-persistent-storage
    mountPath: /netfoundry
  resources:
    requests:
      memory: 512M
      cpu: ".5"
    limits:
      memory: 1Gi
      cpu: 1
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
dnsPolicy: "None"
dnsConfig:
  nameservers:
  - 127.0.0.1
  - 8.8.8.8
Do I need to add my local k8s DNS to this list too?
Yes, that sounds right. You could make cluster DNS the secondary nameserver. Ziti DNS still needs to be first, and it will fail fast if the query doesn’t match an authorized service intercept. What IP is cluster DNS for you?
Is cluster DNS recursive? If not, then you may still need a tertiary nameserver for non-cluster queries.
dnsPolicy: "None"
dnsConfig:
  nameservers:
  - 127.0.0.1
  - 172.20.0.10
  - 8.8.8.8
So, I added the ClusterIP of my kube-dns service, but I still can't resolve my service.
Let me see if I can reproduce that. Meanwhile, please see if you can verify it's a DNS problem. You might have a log message from the workload container that eliminates all doubt, or you might need to kubectl exec into that container, install a DNS resolver, and query the domain name of your other microservice in the same namespace.
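For example, you could assemble an exec command along these lines. The pod name, container name, and target service here are hypothetical placeholders, and the apt-get step assumes a Debian-based image; the sketch below only prints the command so you can adapt it before running it against your cluster.

```shell
# Hypothetical names -- substitute your own pod, container, and service.
POD="my-app-pod"
CONTAINER="my-app"
TARGET="hello-netfoundry.default.svc.cluster.local"

# dnsutils provides dig/nslookup on Debian-based images. The command is
# printed rather than executed so you can review and adjust it first.
echo "kubectl exec -it ${POD} -c ${CONTAINER} -- sh -c 'apt-get update && apt-get install -y dnsutils && dig +short ${TARGET}'"
```

If dig returns no answer (or times out) from inside the workload container while the same query succeeds elsewhere, that would confirm the DNS theory.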
First, let's verify you can see both service/kube-dns and your other microservice in the list of services. Assuming the other microservice is in the default namespace, you should see an item here. If hello-netfoundry were your other microservice, then you could try resolving hello-netfoundry.default.svc in cluster DNS, and you should get 10.21.83.136. Sorry if this is painfully elementary; I just want to make sure the bases are covered.
❯ kubectl --namespace=default --kubeconfig=${HOME}/Downloads/ZitiTest4.yaml get services
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
hello-netfoundry   ClusterIP   10.21.83.136   <none>        80/TCP    14d
kubernetes         ClusterIP   10.21.0.1      <none>        443/TCP   14d
Now for cluster DNS itself. This should reveal the current cluster IP.
❯ kubectl --namespace=kube-system --kubeconfig=${HOME}/Downloads/ZitiTest4.yaml get service/kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.21.0.10   <none>        53/UDP,53/TCP,9153/TCP   14d
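To take the sidecar out of the picture entirely, you can also query cluster DNS from a throwaway pod. A sketch, assuming the kube-dns ClusterIP shown above and the hypothetical hello-netfoundry service; the script only prints the command so you can adjust the names first.

```shell
# 10.21.0.10 is the kube-dns ClusterIP from the output above.
DNS_IP="10.21.0.10"
NAME="hello-netfoundry.default.svc.cluster.local"

# busybox:1.28 ships an nslookup applet; --rm deletes the pod afterward.
echo "kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup ${NAME} ${DNS_IP}"
```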
Yes, if I create my service without the ziti-tunnel sidecar, then I can use dig to get the IP of my other service.
# dig zdp-gateway.arena.svc.cluster.local
; <<>> DiG 9.11.5-P4-5.1+deb10u5-Debian <<>> zdp-gateway.arena.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17557
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: dcb4b83448bbad17 (echoed)
;; QUESTION SECTION:
;zdp-gateway.arena.svc.cluster.local. IN A
;; ANSWER SECTION:
zdp-gateway.arena.svc.cluster.local. 5 IN A 172.20.178.136
;; Query time: 1 msec
;; SERVER: 172.20.0.10#53(172.20.0.10)
;; WHEN: Thu Jul 15 15:06:13 UTC 2021
;; MSG SIZE rcvd: 127
I can also curl the other service and get the expected response when I deploy my service without the ziti-tunnel sidecar.
Hi Matt,
Is your cluster using kube-dns or coredns? It's probably kube-dns unless you've migrated to coredns.
One theory is that dnsConfig isn't being inherited in the expected way. I applied the sidecar demo deployment and found that /etc/resolv.conf only received the first nameserver from dnsConfig with values like these.
dnsPolicy: "None"
dnsConfig:
  nameservers:
  - 127.0.0.1   # Ziti DNS
  - 10.21.0.10  # Cluster DNS
  - 8.8.8.8     # Recursive DNS
root@ziti-tunnel-sidecar-demo-8b5976c7b-5ffks:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.20.55.25 ziti-tunnel-sidecar-demo-8b5976c7b-5ffks # pod IP
I’ll keep investigating, and please let me know if you figure it out.
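One quick way to check that theory is to compare the nameservers the container actually received against the list in dnsConfig. A sketch, with a sample resolv.conf inlined so it runs standalone; inside the pod you would read the real /etc/resolv.conf instead.

```shell
# Sample of what a pod might receive; inside the pod, read /etc/resolv.conf.
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 127.0.0.1
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF

# Print the nameservers in order. With dnsPolicy: "None" there should be
# one line per entry in dnsConfig.nameservers; a single line would confirm
# that only the first nameserver was inherited.
awk '/^nameserver/ { print $2 }' /tmp/resolv.conf.sample
```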
Hi @modonnell, just a thought: can you exec into any other pod running on the cluster network and see whether its /etc/resolv.conf entries match the values you gave in the nameservers list? Ideally, we should be able to resolve any service if we give the correct nameserver IP in the list.
@modonnell Did you find a working DNS configuration for tproxy mode? One alternative approach is proxy mode; here's an example deployment using it. The idea would be to tightly couple the ziti-tunnel proxy configuration with the app's requirements in the same pod manifest. This is more rigid than tproxy but doesn't depend on DNS, because the application always connects to a bound TCP port on the loopback interface.
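For concreteness, here is a minimal sketch of what a proxy-mode sidecar could look like, modeled on the tproxy sidecar definition earlier in the thread. The service name hello-netfoundry and port 8080 are assumptions, and the exact args form depends on your ziti-tunnel version, so check its help output for the proxy run mode before relying on this.

```yaml
# Hypothetical proxy-mode sidecar: the app connects to 127.0.0.1:8080,
# so no dnsConfig (and no NET_ADMIN capability) is needed.
- image: netfoundry/ziti-tunnel:latest
  name: ziti-tunnel
  args: ["proxy", "hello-netfoundry:8080"]  # <ziti-service>:<local-port>
  env:
  - name: NF_REG_NAME
    value: tunnel-sidecar
  volumeMounts:
  - name: tunnel-sidecar-jwt
    mountPath: "/netfoundry/tunnel-sidecar.jwt"
    subPath: tunnel-sidecar.jwt
    readOnly: true
  - name: ziti-tunnel-persistent-storage
    mountPath: /netfoundry
```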