K8S multiple tunnelers load balancing

Hey,

I have a controller and a public edge router running in the cloud, and a Talos cluster running in my homelab.

To access services in my homelab remotely, I've deployed a tunneler Pod in my Talos cluster following this doc: Deploy a Hosting Tunneler in Kubernetes | OpenZiti

I made an HTTP echo server pod and service on my K8s cluster, configured the necessary service/config/policies to access it, and it's working great.

Now I'm wondering how I can load balance the traffic so it doesn't all go through this single tunneler to reach my K8s cluster. Are there any best practices?

Yes! This style of load balancing is built into Ziti. Just grant multiple hosting tunnelers permission to host the service, and they'll share the load according to the default "smartrouting" terminator strategy.
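As a sketch of what "grant multiple hosting tunnelers permission" looks like, assuming a service named "echo" and a role attribute "k8s-hosts" (both names are illustrative), you could tag each tunneler identity with the role and grant Bind permission with a single service policy:

```shell
# Create two tunneler identities sharing the role attribute "k8s-hosts"
ziti edge create identity tunneler-a -a k8s-hosts -o tunneler-a.jwt
ziti edge create identity tunneler-b -a k8s-hosts -o tunneler-b.jwt

# One Bind service policy lets every identity with that role host the service
ziti edge create service-policy echo-bind Bind \
  --identity-roles '#k8s-hosts' --service-roles '@echo'
```

With both tunnelers online, each creates its own terminator for the service, and the terminator strategy spreads new connections across them.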

At the most basic level, it's like a fault-tolerant array of reverse proxies. It's been a while, but we had two episodes of Ziti TV where we talked about using Ziti terminators to manage load balancing and failover of Ziti services.

My takeaways from re-watching the second video include:

  • The CLI has changed: now you can run `ziti edge VERB terminators`, where VERB is list, update, etc.
  • Authorize multiple identities to host the services, not multiple copies of the same identity, which is confusing to manage.
  • Adjust terminator strategy, cost, and precedence to express a preference for failover or scaling behavior (link to doc about extensible terminators).
  • You can define terminator actions, e.g., "mark unhealthy", triggered by health checks in host.v1 configs.
  • Terminators are automatically created by hosting SDKs on three routers, creating multiple potential paths to the hosting SDK in the Ziti network. Their dynamic costs are factored into the default "smartrouting" strategy.
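To make the terminator CLI concrete, here's a sketch of listing and tuning terminators; the service name and terminator ID are placeholders, and the cost value is only an example:

```shell
# List the terminators backing a service
ziti edge list terminators 'service.name="echo"'

# Raise the static cost of one terminator so it is less preferred
ziti edge update terminator <terminator-id> --cost 100

# Demote a terminator so it only receives traffic when others fail
ziti edge update terminator <terminator-id> --precedence failed
```

Precedence (required/default/failed) expresses failover ordering, while cost biases the load distribution among terminators at the same precedence.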

Thanks! That's very helpful.

One last question: if I have 3 identical web applications running (they all use the same backend API / persistence storage), and I configure 3 services with 3 host.v1 configs, one for each application, plus the same intercept config, does OpenZiti automatically load balance the traffic across the 3?

I'll paraphrase your question to clarify the facts. You're asking how to distribute the request load across three stateless web application servers with Ziti, and whether you should use three Ziti services to accomplish this. You should not: multiple Ziti services with overlapping intercept addresses create domain name resolution ambiguity on the client side, leading to unpredictable behavior.

With the default terminator strategy, smartrouting, each unique Ziti identity that has permission to host your single Ziti service will share the request load. This approach requires that each hosting identity use the same target address to reach its respective upstream web application server. For example, in a server environment, each web application server's host may run a Ziti tunneler hosting the service with a host config targeting 127.0.0.1:8080. In a cloud environment, each tunneler could instead use a host config whose target address leverages environment-specific DNS discovery, like my-web-app.internal, which resolves to the correct address in each environment where the stateless web application servers are deployed.
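A minimal host.v1 sketch tying this together, assuming each hosting identity reaches its local web app at 127.0.0.1:8080 (address, port, and check intervals are illustrative); the port check's actions drive the "mark unhealthy" behavior mentioned above:

```json
{
  "protocol": "tcp",
  "address": "127.0.0.1",
  "port": 8080,
  "portChecks": [
    {
      "address": "127.0.0.1:8080",
      "interval": "5s",
      "timeout": "500ms",
      "actions": [
        { "trigger": "fail", "action": "mark unhealthy" },
        { "trigger": "pass", "action": "mark healthy" }
      ]
    }
  ]
}
```

Because every hosting identity uses the same config, a failed health check on one host pulls only that host's terminator out of rotation, and the remaining terminators keep serving traffic.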