Sockets encapsulation

I manage a large number of standard servers and Kubernetes clusters, hosting numerous independent projects. Each project consists of applications and their associated services, such as databases, message queues, file storage systems, and more.

My goal is to enable developers from different teams to access only the services of the projects they are working on.

For a project running on a single server, the setup is straightforward: a service (e.g., project-some) is created with a configuration that forwards relevant ports like 5432 (PostgreSQL), 5672 (RabbitMQ), 15672 (RabbitMQ Management), 8080, and 9000 (ClickHouse). In this case, one project corresponds to one OpenZiti service, which I make accessible to its developers.
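
In host.v1 terms, that single-server setup boils down to one config roughly like the following, where the address is just a placeholder for the project's server and the ports match the list above:

{
  "address": "10.0.0.10",
  "protocol": "tcp",
  "forwardPort": true,
  "allowedPortRanges": [
    { "low": 5432, "high": 5432 },
    { "low": 5672, "high": 5672 },
    { "low": 15672, "high": 15672 },
    { "low": 8080, "high": 8080 },
    { "low": 9000, "high": 9000 }
  ]
}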

However, managing projects on Kubernetes clusters introduces challenges. Since a single service cannot have multiple host.v1 addresses, and host.v2 seems designed for other use cases (failing when multiple IPs and ports are specified in its terminators), I’m forced to create a separate service for each container in the cluster. For example:

  1. project-some-postgresql with a host.v1 address of postgresql.project-some.svc.cluster.local on port 5432.
  2. project-some-rabbitmq with a host.v1 address of rabbitmq.project-some.svc.cluster.local on ports 5672 and 15672.
  3. project-some-clickhouse with a host.v1 address of clickhouse.project-some.svc.cluster.local on ports 8080 and 9000.
  4. And so forth (one such host.v1 config is sketched below).
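
For illustration, the host.v1 for the first of these would look roughly like the following; this is only a sketch of the config's shape, not copied from my actual setup:

{
  "address": "postgresql.project-some.svc.cluster.local",
  "port": 5432,
  "protocol": "tcp"
}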

While this setup provides fine-grained access control, as developers can only access the specific ports they need, it also significantly increases complexity. Managing thousands of services and twice as many configurations becomes overwhelming.

An ideal solution would allow a single service to support multiple host.v1 and intercept.v1 configurations. Developers would connect to one address and see only the ports they are authorized to access via intercept.v1. Meanwhile, the server would route connections dynamically: for instance, connecting to postgresql.project-some.svc.cluster.local for traffic on port 5432 and to clickhouse.project-some.svc.cluster.local for traffic on port 8080.

The ability to use multiple host.v1 configurations would also be highly beneficial for large-scale scenarios like this because it would allow the creation of several standard host.v1 configurations, such as:

  • service-openssh
  • service-rabbitmq
  • service-rabbitmq-management
  • service-clickhouse-http
  • service-clickhouse-data
  • And so on.

These pre-defined configurations could then be assigned to the necessary services as needed, drastically reducing the total number of services and configurations required. This modular approach would make managing large infrastructures significantly more efficient.


Hi @opcode, welcome to the community and to OpenZiti (and zrok/BrowZer)!

Although it's true that if you set the address you can only have one, I'm not sure that's relevant, because OpenZiti has a different feature: intercept forwarding. The key is to "forward" the intercepted address, which makes it quite easy to accomplish what you're looking to do. You can't "NAT" the addresses, but I don't think that will be an issue.

Basically, whatever address and port is intercepted at the client can be "forwarded" once inside the cluster.

I think this one feature will help you accomplish your goal. Hope that helps, let us know. Cheers

Yeah, as @TheLumberjack pointed out, you could try creating a single host.v1 config with forwarding to allow devs to connect to the pre-allowed ports and addresses.

I guess the following should do the trick. More on forwarding here[1].
[1] The host.v1 Config Type | OpenZiti
Example:
host.v1.json

{
  "forwardAddress": true,
  "allowedAddresses": [
    "postgresql.project-some.svc.cluster.local",
    "rabbitmq.project-some.svc.cluster.local",
    "clickhouse.project-some.svc.cluster.local"
  ],
  "forwardPort": true,
  "allowedPortRanges": [
    {
      "low": 5432,
      "high": 5432
    },
    {
      "low": 5672,
      "high": 5672
    },
    {
      "low": 15672,
      "high": 15672
    },
    {
      "low": 8080,
      "high": 8080
    },
    {
      "low": 9000,
      "high": 9000
    }
  ],
  "forwardProtocol": true,
  "allowedProtocols": [
    "tcp"
  ]
}

intercept.v1.json

{
  "protocols": [
    "tcp"
  ],
  "addresses": [
    "postgresql.project-some.svc.cluster.local",
    "rabbitmq.project-some.svc.cluster.local",
    "clickhouse.project-some.svc.cluster.local"
  ],
  "portRanges": [
    {
      "low": 5432,
      "high": 5432
    },
    {
      "low": 5672,
      "high": 5672
    },
    {
      "low": 15672,
      "high": 15672
    },
    {
      "low": 8080,
      "high": 8080
    },
    {
      "low": 9000,
      "high": 9000
    }
  ]
}

Another approach is to dedicate a Kubernetes Namespace to each Ziti service. This allows you to reduce the complexity of the Ziti service to a wildcard forwarding rule like *.project-some.svc.cluster.local (or a list of wildcard rules representing a set of namespaces in the same cluster) instead of explicitly allowing each service name.
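
For example, a host.v1 config along these lines; the wildcard and the wide-open port range here are only illustrative, and in practice you would probably constrain the ports as in the earlier example:

{
  "forwardAddress": true,
  "allowedAddresses": [
    "*.project-some.svc.cluster.local"
  ],
  "forwardPort": true,
  "allowedPortRanges": [
    { "low": 1, "high": 65535 }
  ],
  "forwardProtocol": true,
  "allowedProtocols": [
    "tcp"
  ]
}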

This same approach will work with any DNS zone, including the cluster's zone (default cluster.local), which may be viable if you have one K8S cluster per Ziti service.

If you need more control over the DNS records, e.g., inventing a DNS zone for a Ziti service that grants access to a set of ClusterIP services, then you could express each DNS record and service mapping with a combination of CoreDNS rewrite rules and an internal Gateway. For example, a CoreDNS rule like this:

project-some.example.com:53 {
    rewrite name regex (.*)\.project-some\.example\.com internal-gateway.internal-routing.svc.cluster.local
}

This means all queries for *.project-some.example.com will resolve to the internal-gateway ClusterIP service's private IP, and so they will be handled by the internal-gateway's rules, e.g.,

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: internal-gateway
  namespace: internal-routing
spec:
  gatewayClassName: internal
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: internal-service-route
  namespace: foo
spec:
  parentRefs:
    - name: internal-gateway
      namespace: internal-routing
  hostnames:
    - "clickhouse.project-some.example.com"
  rules:
    - matches:
        - path:
            type: Prefix
            value: "/"
      backendRefs:
        - name: service-clickhouse-http
          port: 80