Discussions related to running Ziti on the Kubernetes platform
I am able to run ziti-tunnel in a sidecar and communicate over the ziti network. I want to communicate through the ziti network from many pods in the Kubernetes cluster. I think the only way this is possible as of now is to run a ziti-tunnel sidecar in each of the pods that wants to communicate through the ziti network. But I am looking for an option to run a single ziti-tunnel and have the traffic from a set of selected pods tunnelled through it. Is there a way to achieve this? If not, please suggest a way to implement it.
Did you try Ziti for Kubernetes?
Please share the organisation that you are part of, your job role, and your location for our reference.
I tried the same image and was able to communicate over the ziti network. I am from in2tive tech, Bangalore. My job role is Backend Lead Engineer. Do you have any suggestions regarding the question I asked?
One option for accessing ziti services from multiple pods is to run ziti-tunnel in a DaemonSet with the pod's hostNetwork field set to true, so each ziti-tunnel in the DaemonSet intercepts connections from all pods on the same node. This is the configuration we use when deploying our GKE marketplace application. Here’s the manifest template if it helps: ziti-gcp-marketplace/manifests.yaml at master · netfoundry/ziti-gcp-marketplace · GitHub
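For reference, a minimal sketch of such a DaemonSet follows. This is an illustration of the hostNetwork approach described above, not the exact GKE marketplace manifest; the image reference, secret name, and mount path are assumptions you would replace with your own:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ziti-tunnel
spec:
  selector:
    matchLabels:
      app: ziti-tunnel
  template:
    metadata:
      labels:
        app: ziti-tunnel
    spec:
      hostNetwork: true                # run in the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: ziti-tunnel
          image: netfoundry/ziti-tunnel:latest   # illustrative image reference
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]       # required to manage intercept rules
          volumeMounts:
            - name: identity
              mountPath: /etc/ziti     # assumed location of the enrolled identity
      volumes:
        - name: identity
          secret:
            secretName: ziti-identity  # assumed secret holding the identity JSON
```

One ziti-tunnel pod is scheduled per node, so every node in the cluster gets its own tunneler without per-pod changes.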
@moteesh, the attached drawing gives a good explanation of what was suggested above, with the side note that the drawing shows a pod+sidecar deployment, not a DaemonSet. The difference is that the node itself is ziti-enabled, not the pod. In other words, the ziti-enablement is extended to any pod running on the nodes where the DaemonSet is deployed.
@PhilipGriffiths I deployed ziti-tunnel as a DaemonSet on the host network as suggested. All the pods running on the host network are able to communicate through the tunnel. But it does not solve our use case, where pods running on the cluster network should be able to communicate through the tunnel. For now, I am using ziti-tunnel as a sidecar to solve the use case. Is there any other way to solve this problem? It is not feasible to add a ziti sidecar to each of the pods in our deployment.
Hi Moteesh. Let’s try to understand what’s happening here. Are you accessing your ziti services by hostname or IP address? Can you share the manifest that you used to deploy your DaemonSet?
Has a solution been figured out for the above query from Moteesh?
We have our applications running in Docker containers too, with the ziti tunneler running on the host. Things work only if we set the app container's network to the host network. If it is set to a private Docker network, the ziti tunneler running on the host is not able to capture the app packets and route them outside.
One option, as you suggested above, is to run ziti inside each app container in the background. We shall try that tomorrow.
But we don’t want to do that as the larger solution. We want only one instance of ziti running on the system. All app containers should be able to route their packets to the outside world through that single instance of ziti. And most importantly, we don’t want the app containers to be on the host network.
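One pattern that may fit this Docker requirement is to have each app container join the tunneler container's network namespace via `network_mode`, so a single ziti-tunnel serves several apps without any of them using the host network. A hedged docker-compose sketch, where the service names, image references, and identity path are all assumptions:

```yaml
# docker-compose.yml sketch: app containers share the ziti-tunnel container's
# network stack, so one tunneler intercepts traffic for all of them.
version: "3"
services:
  ziti-tunnel:
    image: netfoundry/ziti-tunnel:latest   # illustrative image reference
    cap_add:
      - NET_ADMIN                          # needed to manage intercept rules
    volumes:
      - ./ziti-identity:/etc/ziti          # assumed enrolled identity location
  app:
    image: my-app:latest                   # placeholder for your application image
    network_mode: "service:ziti-tunnel"    # join the tunneler's network namespace
    depends_on:
      - ziti-tunnel
```

The trade-off is that all apps sharing the namespace also share one IP and port space, so their listening ports must not collide.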
Hi there Moteesh, A clarification, please. What is “cluster network”? You might mean that you wish to intercept pod egress from the Docker network or you could have meant you wish to intercept traffic originating in the cluster’s parent subnet and received by a node IP, load balancer, or ingress controller.
Hi everyone. Sorry for my late reply; I have been busy with product launch deadlines and could not find the time.
The problem is that if I run ziti-tunnel as a DaemonSet, it intercepts traffic only from the pods running on the host network. For intercepting traffic from the cluster network, I had to run ziti-tunnel as a sidecar wherever necessary.
Here is the yaml that I used for deploying ziti-tunnel as a daemonset: ziti-tunnel daemonset configuration · GitHub
I understand that you would like to avoid using a sidecar container for intercepting pod egress, and that you are simultaneously experimenting with running ziti-tunnel as a background process inside the same container that is also running the workload application. I understand that you would prefer to have a shared intercepting tunneler for multiple pods. Do I understand correctly that the sidecar approach works in manual testing, but is not preferred for a larger-scale deployment?
The sidecar approach is the most proven way to intercept pod egress today, and I may be able to assist with easing the burden of including a sidecar configuration in the pod deployments. Could this be done in your environment with Helm or another template-based approach to rendering the deployable manifests?
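If the manifests are rendered with Helm, one way to ease that burden is to factor the sidecar into a shared named template so each chart includes it with a single line. This is only a sketch under assumed names (the template name, values keys, and image reference are all illustrative, not an existing chart):

```yaml
# _helpers.tpl sketch: a reusable sidecar definition
{{- define "ziti.sidecar" -}}
- name: ziti-tunnel
  image: {{ .Values.ziti.image | default "netfoundry/ziti-tunnel:latest" }}
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]          # required for the iptables-based intercept
  volumeMounts:
    - name: ziti-identity
      mountPath: /etc/ziti        # assumed location of the enrolled identity
{{- end }}
```

Each Deployment's pod template would then pull it in with something like `{{ include "ziti.sidecar" . | nindent 8 }}` in its containers list, keeping the sidecar definition in one place.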
I noticed that you are mounting the tun device in the container “ziti-tunnel”. That device is not used by the ziti-tunnel executable at all, because only iptables rules are needed to intercept egress, not the tun device.
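In other words, the sidecar needs the NET_ADMIN capability to install those iptables rules, but not the /dev/net/tun hostPath mount. A sketch of the relevant container fragment, with the image reference as an illustrative assumption:

```yaml
# Sidecar container fragment: NET_ADMIN enables the iptables-based intercept;
# the tun device volumeMount can simply be removed.
- name: ziti-tunnel
  image: netfoundry/ziti-tunnel:latest   # illustrative image reference
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]
  # no /dev/net/tun mount needed for iptables interception
```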
Also, I am actively developing an alternative to the sidecar interceptor that, I hope, will enable an egress controller for the cluster. Would you be interested in early testing of this or collaborating to clarify the use case?
We are actually shipping the ziti-tunnel sidecar along with some Helm charts such as prometheus and fluent-bit. If there is a simple way to use a single tunneler, that would be great. I am more than happy to help you with early testing and collaboration. Sorry for the late reply; I have been busy and did not see the emails for quite some time. I will respond promptly from now on. Thanks for offering your help.