Ziti-tunnel-sidecar logging ethzero.ziti.ui

I just set up a ziti-tunnel sidecar and it is spewing out the following logs:

ziti-tunnel-sidecar-demo-7b8896cc94-sk6m8 testclient curl: (6) Could not resolve host: ethzero.ziti.ui
ziti-tunnel-sidecar-demo-7b8896cc94-sk6m8 testclient + curl -sSLf ethzero.ziti.ui

Is this an error I need to be concerned with?

Over and over again? That seems strange. It’s as though some script is trying to access that Ziti service. It’s not “concerning” per se, but it makes me think there’s some sort of configuration issue at play.

How’d you end up setting up the sidecar - are you following Kubernetes Sidecar Client | Ziti?

ah yeah - I do see that command in tunnel-sidecar-demo.yaml:

command: ["sh","-c","while true; set -x; do curl -sSLf ethzero.ziti.ui 2>&1; set +x; sleep 5; done"]

so I am expecting that’s what you’re following along there. I can take a look to see if there’s some doc that needs to get updated.

Hi Matt! What is the content of the testclient container’s /etc/resolv.conf? It should have the override values from the manifest in dnsConfig.nameservers, and by installing a DNS client like dig you may be able to diagnose the resolution failure from the perspective of cURL.

# e.g.
❯ kubectl exec -ti ziti-tunnel-sidecar-demo-749cf55866-dsl8n --container testclient -- cat /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8
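
For reference, those nameserver values come from the pod spec in the demo manifest. The override should look roughly like this (a sketch; confirm the exact values against the tunnel-sidecar-demo.yaml you deployed):

# pod spec excerpt (assumed shape, based on the resolv.conf output above)
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 127.0.0.1   # expected to be the ziti-tunnel resolver inside the pod
    - 8.8.8.8     # fallback for non-Ziti names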

Matt,
Please see if you get the same result after these changes to the manifest:

  1. verify that the Ziti service’s intercept domain name exactly matches the domain name used in the curl command in container “testclient”
  2. change the container image for the “ziti-tunnel” container to netfoundry/ziti-tunnel:latest (see the snippet below)
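
In the manifest that image change lands in the pod template’s containers list, something like this (a sketch; only the image line differs from the demo manifest, and the other container fields are omitted here):

containers:
  - name: ziti-tunnel
    image: netfoundry/ziti-tunnel:latest   # updated image per item 2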

After changing the manifest YAML file, you can re-deploy by first deleting the prior deployment:

❯ kubectl delete deployments.apps ziti-tunnel-sidecar-demo
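
Then apply the updated manifest, e.g.:

❯ kubectl apply -f tunnel-sidecar-demo.yaml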

We’ll get the Ziti quickstart updated to reflect the same.

Thanks,
Ken

The problem was also a doc issue. I’ve updated the doc, but it probably won’t push out until later tonight. What @qrkourier is mentioning should be published soon.

One other small change is to make sure the service referenced is the same as the one created in the quickstart. That doc change did publish earlier today.

The doc update just published. The guide should now be accurate. Please try again. Thanks for the question/feedback!

Yes, so I used that link. I see there is another Kubernetes guide that uses Helm.

I have 9 services running in my EKS cluster. I have an on-prem SQL Server that a couple of my services need to be able to reach. Which is the better approach for my case? Do I need to install a sidecar for each of the services, or use the Helm chart?

Hi Matt! The chart deploys a hosting-only, non-intercepting pod, so you would use it to expose cluster workloads (e.g. the k8s API, service domain names, etc.) to Ziti clients as an alternative to using a k8s LB, ingress controller, or node IP. The sidecar is the proven method for intercepting pod egress, and it is also able to host services for which the server is reachable from the pod.

So, you have a couple of options here. You will need to run a sidecar for each pod that needs to reach the SQL Server, and you could also host a service for those pods with the sidecar. If the services running in EKS span multiple namespaces or network security zones, you could install the hosting pod with Helm once for each, with an endpoint identity for each. If all the services are in the same namespace or network security policy, then you could host them all with a single pod deployed with Helm.
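
If you go that hosted route, it’s one Helm release per namespace or zone. As a rough sketch only (the repo URL, chart name, and value key here are assumptions/placeholders; use whatever the Helm guide specifies, along with an enrolled identity for that endpoint):

❯ helm repo add openziti https://openziti.io/helm-charts/   # repo URL is an assumption
❯ helm install ziti-host openziti/ziti-host --set-file zitiIdentity=./hosting-identity.json   # chart and value names are assumptions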

Let me know if I can clarify these better! Here’s a quick take video I’m working on about the hosting use case using the Helm chart. Would love your feedback on the docs or video if you’ve got it.

Hey Ken! Long time!

Yeah, all 9 of the services are in the same namespace. I will definitely take a look at the video and give some feedback.


OK, so I was able to make a connection from a k8s pod to an on-premise SQL Server, so that is awesome.

The question I have now is: how should I handle scaling our gateway service pods up and down when they have a sidecar?


Good stuff. Can you say more about your gateway service pods and the scaling problem you have in mind? After defining a multi-container deployment, i.e. a workload container with a supporting ziti-tunnel container, you may scale out the replica count like this:

❯ kubectl scale deployment ziti-tunnel-sidecar-demo --replicas=2 
deployment.apps/ziti-tunnel-sidecar-demo scaled

The result of this command is two identical pods, each with its own supporting “sidecar” ziti-tunnel container. This approach uses the same Ziti identity for each horizontally-scaled replica of the pod, and so each replica will have identical Ziti permissions.

@qrkourier If I scale out a deployment using “kubectl scale”, will the new ziti-tunnel sidecar in the new pod need to be enrolled in the nfconsole? Currently I have a secret named tunnel-sidecar.jwt that I used for my demo pod. Can I use the same JWT for a whole deployment?

No, you won’t need to enroll each one for this particular approach. Here there’s one Kubernetes “deployment” per Ziti identity, so scaling out a deployment’s replicas will necessarily create multiple instances of ziti-tunnel that all share the same Ziti identity saved in the persistent volume claim that was made with the deployment. Ziti will function normally as an intercepting client tunneler with such a shared identity.
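
To make that concrete, the shared identity storage can look roughly like this in the manifest (a sketch; the claim name, mount path, and size are illustrative, not the demo’s exact values):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ziti-tunnel-identity        # illustrative name
spec:
  accessModes:
    - ReadWriteMany                 # lets replicas scheduled on different nodes mount the same claim
  resources:
    requests:
      storage: 50Mi

# in the deployment's pod template, the ziti-tunnel container mounts that claim:
volumeMounts:
  - name: ziti-tunnel-identity
    mountPath: /netfoundry          # illustrative path where the enrolled identity is kept
volumes:
  - name: ziti-tunnel-identity
    persistentVolumeClaim:
      claimName: ziti-tunnel-identity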

You still have the option to manage the hosting side of the Ziti service separately from the intercepting side. For example, you could create a k8s internal LB fronting an array of server listeners and then describe the LB as a Ziti service that’s hosted by a dedicated pod. This would avoid any confusion that might arise from trying to also host a service in the scaled deployment, where the Ziti identity is shared by replicas.
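
For example, the internal LB could simply be a ClusterIP Service in front of the server pods, and that Service’s cluster DNS name is what the dedicated hosting pod would forward Ziti traffic to (names and ports below are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: gateway-internal            # illustrative name
spec:
  type: ClusterIP
  selector:
    app: gateway                    # matches the server pods' labels
  ports:
    - port: 8443                    # illustrative port
      targetPort: 8443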

So, the trick is that we need the different ziti-tunnel sidecars to share that same PVC. That way they can use the same identity. Is that correct, @qrkourier?

We are looking to move from EFS to rook/cephFS, so I will test that out tomorrow.

So, another question that just came to mind: will all the ziti-tunnels be seen in nfconsole as a single endpoint, or one per pod?

Endpoints: Correct. Each Ziti identity will appear in the NF platform as a single “endpoint”, so replicas sharing one identity will show up as one endpoint.

Sharing identities: It’s not the goal to share a Ziti identity across the replicas of a single deployment; that’s just the way it has to be because the replicas’ configuration is by definition identical. You really shouldn’t share a Ziti identity in other circumstances, e.g. across unrelated pods or separate deployments. One endpoint identity per pod is a good rule of thumb. :+1:

EFS vs Ceph: That should work just fine. I expect this won’t make any difference on the Ziti side of things, because the deployment manifest simply calls for a PVC, which will be satisfied by the default storage class in Kubernetes and fulfilled by whatever storage substrate is currently configured.
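
If you want to confirm which storage class ends up satisfying that claim after the move to Rook/CephFS, you can check the cluster’s default class and the bound claim, e.g.:

❯ kubectl get storageclass
❯ kubectl get pvc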