I manage a large fleet of standalone servers and Kubernetes clusters hosting numerous independent projects. Each project consists of applications and their associated services, such as databases, message queues, file storage systems, and more.
My goal is to enable developers from different teams to access only the services of the projects they are working on.
For a project running on a single server, the setup is straightforward: one service (e.g., project-some) is created with a host.v1 configuration that forwards the relevant ports, such as 5432 (PostgreSQL), 5672 (RabbitMQ), 15672 (RabbitMQ Management), and 8080 and 9000 (ClickHouse). In this case, one project corresponds to one OpenZiti service, which I make accessible to its developers.
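For reference, this is roughly how such a service is created with the `ziti` CLI (a minimal sketch; the address `10.0.0.10` and intercept hostname `project-some.ziti` are placeholders):

```bash
# host.v1: the hosting identity forwards whichever allowed port the
# client dialed to the project's server (placeholder address).
ziti edge create config project-some-host host.v1 '{
  "protocol": "tcp", "address": "10.0.0.10", "forwardPort": true,
  "allowedPortRanges": [
    {"low": 5432, "high": 5432}, {"low": 5672, "high": 5672},
    {"low": 15672, "high": 15672}, {"low": 8080, "high": 8080},
    {"low": 9000, "high": 9000}
  ]
}'

# intercept.v1: developers dial one hostname and see only these ports.
ziti edge create config project-some-intercept intercept.v1 '{
  "protocols": ["tcp"], "addresses": ["project-some.ziti"],
  "portRanges": [
    {"low": 5432, "high": 5432}, {"low": 5672, "high": 5672},
    {"low": 15672, "high": 15672}, {"low": 8080, "high": 8080},
    {"low": 9000, "high": 9000}
  ]
}'

# One project == one service carrying both configs.
ziti edge create service project-some \
  --configs project-some-host,project-some-intercept
```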
However, managing projects on Kubernetes clusters introduces challenges. Since a single service cannot have multiple host.v1 addresses, and host.v2 seems designed for other use cases (it fails when multiple IPs and ports are specified across its terminators), I'm forced to create a separate service for each container in the cluster. For example (a CLI sketch follows the list):
- project-some-postgresql with a host.v1 address of postgresql.project-some.svc.cluster.local on port 5432.
- project-some-rabbitmq with a host.v1 address of rabbitmq.project-some.svc.cluster.local on ports 5672 and 15672.
- project-some-clickhouse with a host.v1 address of clickhouse.project-some.svc.cluster.local on ports 8080 and 9000.
- And so forth.
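Concretely, every entry above means another config pair plus a service, e.g. for PostgreSQL (a sketch; the intercept hostname is a placeholder):

```bash
ziti edge create config project-some-postgresql-host host.v1 \
  '{"protocol":"tcp","address":"postgresql.project-some.svc.cluster.local","port":5432}'
ziti edge create config project-some-postgresql-intercept intercept.v1 \
  '{"protocols":["tcp"],"addresses":["postgresql.project-some.ziti"],"portRanges":[{"low":5432,"high":5432}]}'
ziti edge create service project-some-postgresql \
  --configs project-some-postgresql-host,project-some-postgresql-intercept
# ...and the same again for rabbitmq, clickhouse, and every other container.
```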
While this setup provides fine-grained access control, since developers can only reach the specific ports they need, it also significantly increases complexity: each service needs both a host.v1 and an intercept.v1 config, so thousands of services mean twice as many configurations to manage, which quickly becomes overwhelming.
An ideal solution would allow a single service to support multiple host.v1 and intercept.v1 configurations. Developers would connect to one address and see only the ports they are authorized to access via intercept.v1, while the hosting side would route connections dynamically: for instance, to postgresql.project-some.svc.cluster.local for traffic on port 5432 and to clickhouse.project-some.svc.cluster.local for traffic on port 8080.
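To make the request explicit, here is a purely hypothetical config shape (no such schema exists in OpenZiti today) in which one service carries several host.v1-style bindings and the forward address is chosen by the intercepted port:

```json
{
  "hosts": [
    { "protocol": "tcp",
      "address": "postgresql.project-some.svc.cluster.local",
      "forwardPort": true,
      "allowedPortRanges": [ {"low": 5432, "high": 5432} ] },
    { "protocol": "tcp",
      "address": "clickhouse.project-some.svc.cluster.local",
      "forwardPort": true,
      "allowedPortRanges": [ {"low": 8080, "high": 8080},
                             {"low": 9000, "high": 9000} ] }
  ]
}
```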
The ability to use multiple host.v1 configurations per service would also be highly beneficial at this scale, because it would allow a set of standard, reusable host.v1 configurations, such as:
- service-openssh
- service-rabbitmq
- service-rabbitmq-management
- service-clickhouse-http
- service-clickhouse-data
- And so on.
These predefined configurations could then be attached to whichever services need them, drastically reducing the total number of services and configurations required (see the sketch below). This modular approach would make managing a large infrastructure significantly more efficient.
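A rough sketch of that reuse model (example names; it assumes developers intercept the in-cluster DNS names directly, so forwardAddress can pass them through). The final command is the wished-for part, since today a service accepts only one host.v1 config:

```bash
# Project-agnostic host.v1 configs, one per protocol/port.
# forwardAddress forwards the intercepted hostname, so the same config
# would serve any project whose cluster DNS names are intercepted.
ziti edge create config service-rabbitmq host.v1 \
  '{"protocol":"tcp","forwardAddress":true,"allowedAddresses":["*.svc.cluster.local"],"port":5672}'
ziti edge create config service-clickhouse-http host.v1 \
  '{"protocol":"tcp","forwardAddress":true,"allowedAddresses":["*.svc.cluster.local"],"port":8080}'

# The wished-for step: attaching SEVERAL host.v1 configs to one service.
# Today this is rejected, which is the point of this post.
ziti edge create service project-some \
  --configs service-rabbitmq,service-clickhouse-http
```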