Sure. Well, to be honest I'm just trying to understand how things work and what the implications are, to see whether that would be compatible with the real use cases.
To use a concrete example: if the interceptor takes precedence and I were to intercept and forward *.default.svc (or even just kubernetes.default.svc, for that matter), then pods would suddenly lose the ability to connect to their local Kubernetes API service (and being able to reach the other cluster's Kubernetes API service is a real use case).
That's why it would be beneficial to have some sort of mapping/indirection between names as they appear on one side and the real names on the other side.
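Incidentally, the intercept.v1/host.v1 split itself seems to allow exactly this kind of indirection: the address intercepted on one side doesn't have to match the address the tunneler dials on the other. A rough sketch of what I have in mind (kubernetes.edge1.ziti is just a name I made up for illustration, and I've merged the two configs into one object for readability; in practice they're attached separately to the same Ziti service):

```json
{
  "intercept.v1": {
    "protocols": ["tcp"],
    "addresses": ["kubernetes.edge1.ziti"],
    "portRanges": [{ "low": 443, "high": 443 }]
  },
  "host.v1": {
    "protocol": "tcp",
    "address": "kubernetes.default.svc",
    "port": 443
  }
}
```

That way pods keep resolving kubernetes.default.svc to their local API server as usual, while kubernetes.edge1.ziti is forwarded to the other cluster's API server by the tunneler on the far side.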
Generally speaking, the use case is being able to access ClusterIP services on the other cluster without having to expose them via Ingresses, NodePort or LoadBalancer (think collecting Prometheus metrics, accessing the Kubernetes API server, or reaching dashboards and other internal HTTP services). Why? Because sometimes you don't have the admin rights needed to make changes on the edge; even if you do, the edge might not be externally reachable (outbound connectivity only); or you simply don't want to go through those steps for a variety of reasons (e.g. the services might only be needed for a limited time).
So the idea is that you own the "master" cluster (full control) and want to manage (or "do something with") edge clusters from it. As mentioned, you might not own an edge cluster, and you might not have privileged access to it (e.g. you might only have access to one or a few namespaces) because some customer manages it; or you might have privileged access after all; all cases can happen.
In any case, you install some unprivileged "agent" (requiring no elevated or cluster-wide permissions, so no L3 networking, no CRDs, etc.) on the edge that gives you a protected entry point into the cluster. Once you have that, depending on the access level, you might want to work directly against its Kubernetes API (e.g. deploy applications/Helm charts) or "simply" access remote services, depending on the actual scenario. You might also need to make some master services visible on the edge. In all cases, it's always a matter of being able to see and reach (Kubernetes) services that exist on the other side. I know this all sounds vague, but that's because it's meant to be a very generic mechanism for more complicated logic to be built on top.
One tool that seems to offer similar characteristics and capabilities is Skupper (https://skupper.io), which I'm also evaluating, although unlike OpenZiti it is not a zero-trust solution.
Thanks to your suggestions I can proceed further and configure the host.v1 services. I'll let you know how it works out. Thanks!