xyz is an application that provides featured operations on client data. Its infrastructure is managed via k8s clusters, and every piece of data must be kept secure.
Every client has a dashboard where they can create multiple clusters across tenants and add plugins to drive use cases on their data.
Should we consider installing a separate controller for every client rather than keeping it centralized? If a JWT gets exposed in one of the client clusters, it might create an attack surface.
Or should we go with the multiple-controllers (one per cluster) approach? Can one tunnel manage multiple controllers, or can a tunnel not be used to connect to multiple clusters?
What would the overall architecture look like?
It would be really great if you could draw and send a picture of how we should assemble the components.
I don't think we would make direct recommendations like this.
Should we consider installing a separate controller for every client rather than keeping it centralized? If a JWT gets exposed in one of the client clusters, it might create an attack surface.
I wouldn't. I always recommend having a single overlay network and using policies to segment the network. OpenZiti very nicely supports this sort of deployment/multi-tenant model imo. A rogue JWT doesn't expose the cluster at all and isn't a worry. Those are individual identities and they can't do anything without authorization.
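For illustration only (the identity, attribute, and policy names below are made up, and flags can differ between ziti CLI versions), per-tenant segmentation on a single overlay could look roughly like this:
# one enrollable identity per client cluster, tagged with a tenant attribute
ziti edge create identity client-a-cluster -a tenant-a -o client-a-cluster.jwt
# a dial policy that only authorizes tenant-a identities to reach tenant-a services
ziti edge create service-policy tenant-a-dial Dial --identity-roles '#tenant-a' --service-roles '#tenant-a-services'
An exposed enrollment JWT or identity only carries whatever these policies grant it, which is why it doesn't open up the rest of the overlay.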
Same answer. I would just have a single overlay network, a single controller that all the clusters can access. You can do it this way, but it seems like it'd be a burden to maintain. Hard to know without understanding all the details.
I think you're asking if one tunnel can have identities from multiple overlay networks -- yes. All the tunnelers can have numerous identities from numerous overlay networks without too much of a problem but BE CAREFUL... If all the networks have the same intercept, then you'll definitely have problems. All the intercepts would need to be unique... That's the only thing to worry about in my opinion...
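To make the intercept point concrete, here's a hedged sketch (service and address names are invented): if both overlay networks defined an intercept for the same address, the tunneler would end up with conflicting intercepts, so give each network its own address:
# on overlay network 1
ziti edge create config kubeapi-net1-intercept intercept.v1 '{"protocols":["tcp"],"addresses":["kubeapi.net1.ziti"],"portRanges":[{"low":443,"high":443}]}'
# on overlay network 2
ziti edge create config kubeapi-net2-intercept intercept.v1 '{"protocols":["tcp"],"addresses":["kubeapi.net2.ziti"],"portRanges":[{"low":443,"high":443}]}'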
Thank you @TheLumberjack. Should I configure the tunnel via the SDK in my application code so that it can talk to the other cluster?
I think you're asking about using an OpenZiti SDK and embedding it into your code? If so, YES!!! That's undoubtedly the best approach. That makes your code incredibly flexible. You can run it from anywhere, dev machine, docker, kubernetes, etc. It won't matter where or how you deploy your application then. That's a huuuge benefit of using an OpenZiti SDK.
Thanks @TheLumberjack. I'll look over that for sure.
I have an issue here. I followed this guide: Deploy OpenZiti in Kubernetes with Ease Using k3d. The difference is that I installed the controller on an AWS VM, with NODE_IP set to the public IP of that machine.
Then I installed the router in EKS, which allows public traffic. Here is the command:
helm upgrade --install "ziti-router" openziti/ziti-router
--namespace ziti
--set-file enrollmentJwt=./router.jwt
--set edge.advertisedHost=router.edge.ziti.eks_node_ip.sslip.io
--set edge.advertisedPort=3022
--set edge.service.type=LoadBalancer
--set linkListeners.transport.advertisedHost=router.link.ziti.eks_node_ip.sslip.io
--set linkListeners.transport.advertisedPort=10080
--set linkListeners.transport.service.type=LoadBalancer
--set ctrl.endpoint="${Controller_NODE_IP}:6262"
--set nodeSelector."kubernetes.io/hostname"="eks_node_hostname"
The error from the router side is
[ 0.421] ERROR ziti/router/env.(*networkControllers).connectToControllerWithBackoff.func2: {endpoint=[tls:controller_ip:6262] error=[error connecting ctrl (tls: failed to verify certificate: x509: certificate is valid for 127.0.0.1, not controller_ip)]} unable to connect controller
[ 92.826] ERROR transport/v2/tls.(*sharedListener).processConn [tls:0.0.0.0:10080]: {remote=[eks_node_ip:54051] error=[EOF]} handshake failed
The value of the router's ctrl.endpoint must exactly match the controller's ctrlPlane.advertisedHost.
Thanks, I thought I had done that, but when I retried it worked.
Next I'm going to install a router in a private k8s cluster. Hope it works.
Ideally, a normal router installation like the one shown in that guide should be enough? I saw some diagrams of public/private routers, so I'm confused about that.
Great! Glad to hear that.
The controller chart has two similarly named input values. The one that needs to match the router input is ctrlPlane.advertisedHost:
- clientApi.advertisedHost - the DNS name of the controller's client API, which must be reachable by all Ziti identities and Ziti routers
- ctrlPlane.advertisedHost - the DNS name of the controller's control plane listener, which must be reachable by all Ziti routers
Security note: While it's possible to use magic wildcard DNS from sslip.io with public IP addresses in this way, using a DNS zone you control is essential for production.
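As a hedged illustration (the hostnames here are made up, and other required chart values are omitted for brevity), a controller installed with
helm upgrade --install ziti-controller openziti/ziti-controller \
--namespace ziti \
--set clientApi.advertisedHost=ctrl-client.ziti.example.com \
--set ctrlPlane.advertisedHost=ctrl-plane.ziti.example.com
would pair with a router installed with
--set ctrl.endpoint=ctrl-plane.ziti.example.com:6262
so that the router's ctrl.endpoint matches the controller's ctrlPlane.advertisedHost.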
You can configure a router as public or private. It's not a precise way of describing the router configuration, but it usually means the router has a private or public IP. If you are hosting the router on a public IP, it's generally because you want it to provide a public link listener for the other routers to call.
Cool, thanks @qrkourier.
So how would we deploy routers in a private EKS cluster to talk to the controller on a public address in this scenario?
Here's an example of deploying a router in a cluster where the router does not publish services outside the cluster; it only provides an in-cluster edge listener.
Configuring a router with a link listener is unusual if it's not reachable on a public IP.
helm upgrade --install "private-router123" openziti/ziti-router \
--namespace ziti \
--set-file enrollmentJwt=./router1.jwt \
--set edge.advertisedHost=private-router123-edge.ziti.svc.cluster.local \
--set linkListeners.transport.service.enabled=false \
--set ctrl.endpoint="{{ ctrlPlane.advertisedHost }}:6262"
It worked, thanks a lot. Can I interact with the private cluster's API server and run kubectl commands from another VPC using OpenZiti? Maybe by exposing the API server URL; I'm not sure how it would work.
Yep. The Kube API server is published by default as the ClusterIP service https://kubernetes.default.svc.cluster.local. You can create a Ziti service that targets kubernetes.default.svc.cluster.local on port 443, and it becomes available to the devices with dial permission from a Ziti service policy.
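A rough sketch of that Ziti service with the CLI (the config, service, and role names are hypothetical, and it assumes a tunneler or tunneler-enabled router running inside the private cluster to host the service):
# where traffic exits: the in-cluster Kube API
ziti edge create config kubeapi-host host.v1 '{"protocol":"tcp","address":"kubernetes.default.svc.cluster.local","port":443}'
# what clients intercept: an address that only exists on the overlay
ziti edge create config kubeapi-intercept intercept.v1 '{"protocols":["tcp"],"addresses":["kubeapi.ziti"],"portRanges":[{"low":443,"high":443}]}'
ziti edge create service kubeapi --configs kubeapi-host,kubeapi-intercept
# who may host it, and who may dial it
ziti edge create service-policy kubeapi-bind Bind --service-roles '@kubeapi' --identity-roles '#kubeapi-hosts'
ziti edge create service-policy kubeapi-dial Dial --service-roles '@kubeapi' --identity-roles '#kubeapi-clients'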
Relevant resource: kubeztl is a fork of kubectl with the Ziti library, so it doesn't need a proxy to use your Ziti service for the Kube API: GitHub - openziti-test-kitchen/kubeztl: A zitified kubernetes client
@qrkourier thanks.
Let's say two services under two different private clusters want to talk to each other.
How would that be achieved and implemented?
I think you want to access the Kube API of multiple clusters from inside each cluster. Although each cluster's Kube API has a predictable, standard DNS name, kubernetes.default.svc, I would create a new DNS name for each Kube API to use with Ziti to avoid confusion.
You must follow your Kubernetes distribution's advice for adding a DNS name to the Kube API server certificate. For example, with k3s, this is done by adding a --tls-san cluster1.ziti.example.com option for the new DNS name (see the sketch after this explanation).
With these new DNS names (subject alternative names [SAN]) added to each cluster's API server cert, you can create a Ziti service for each name.
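For example (the name is hypothetical, this assumes k3s, and the mechanism differs by distribution), adding the SAN on a k3s server might look like:
# append the new SAN to the k3s server config, then restart so the API server cert picks it up
cat >>/etc/rancher/k3s/config.yaml <<'EOF'
tls-san:
  - cluster1.ziti.example.com
EOF
systemctl restart k3s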
Assuming you have two clusters, you must configure a reverse proxy for each Kube API and a proxy for each client pod.
I recommend at least glancing at this overview of K8s tunneling options (Ziti proxy and reverse proxy alternatives for K8s).
I would probably use the node proxy daemonset for both proxy and reverse proxy if all pods in each cluster should have the same level of access. If pods need access to specific, different Ziti services then you must use the pod sidecar proxy as a Ziti client.
@qrkourier for the same case, what would the bootstrap.env be? The router is deployed as a service on an Ubuntu machine in a private subnet and needs to be reachable by tunnelers in other VPCs.
The main environment variables for generating a Linux router (same for Docker router) configuration are the controller's address, the router's address, and the router's enrollment token.
Here's the "bootstrapping" section from the Linux guide which mentions the most relevant variables that you should set before starting the Linux service.
At a minimum, you must set these to opt-in to bootstrapping.
- In /opt/openziti/etc/router/service.env (the answer file for the service)
  - Set ZITI_BOOTSTRAP=true
- In /opt/openziti/etc/router/bootstrap.env (the answer file for bootstrapping the router)
  - Set ZITI_CTRL_ADVERTISED_ADDRESS to the FQDN of the controller
  - Set ZITI_ENROLL_TOKEN to the resulting token (JWT)
Additionally, you probably want to set these:
- In /opt/openziti/etc/router/bootstrap.env (the answer file for bootstrapping the router)
  - Set ZITI_ROUTER_ADVERTISED_ADDRESS to the permanent FQDN of the router (default: localhost). This value can not be changed after enrollment.
  - Set ZITI_ROUTER_MODE (default: none) if this router's built-in tunneler will provide a proxy for services. Mode tproxy requires additional kernel capabilities in /etc/systemd/system/ziti-router.service.d/override.conf and DNS resolver configuration for the host.
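Putting those together, a sample of the two answer files might look like this (the controller FQDN, router FQDN, and token are placeholders):
# /opt/openziti/etc/router/service.env
ZITI_BOOTSTRAP=true
# /opt/openziti/etc/router/bootstrap.env
ZITI_CTRL_ADVERTISED_ADDRESS=ctrl-plane.ziti.example.com
ZITI_ENROLL_TOKEN=<the router's enrollment JWT>
ZITI_ROUTER_ADVERTISED_ADDRESS=router1.ziti.internal.example.com
ZITI_ROUTER_MODE=none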
I set ZITI_CTRL_ADVERTISED_ADDRESS to exactly match the controller's ctrl.endpoint="{{ ctrlPlane.advertisedHost }}:6262", and I set the router's address using its internal IP: ZITI_ROUTER_ADVERTISED_ADDRESS=router1.10.0.11.171
But it's not working; the requests from the tunneler are not able to reach the router:
ERROR ziti-sdk:channel.c:903 on_channel_connect_internal() ch[0] failed to connect to ER[router1] [-30
And then I tried to access it from a machine outside of this VPC.
@qrkourier to add context, this time I have only one private router.