OpenZiti in a fully local (docker-compose) setup

Hello,
I'm trying to figure out whether OpenZiti is the best solution for my use case. I have looked into service meshes like Linkerd and Istio (my thanks to this comparison), but as far as I can tell they require Kubernetes, and I would prefer to avoid adding that much complexity if I can.

I have several services that I'm running as containers via Docker Compose. They are behind nginx as follows. Everything is on the same device.
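To illustrate, the layout is roughly this (a sketch only; service and image names here are placeholders, since the real compose file is more involved):

```yaml
# Hypothetical compose file illustrating the setup described above.
services:
  nginx:
    image: nginx
    ports:
      - "443:443"              # only nginx is published on the host's public IP
  service-a:
    image: example/service-a   # placeholder images for the internal services
  service-b:
    image: example/service-b
  service-c:
    image: example/service-c
# all services currently share the default bridge network,
# so any container can reach any other directly
```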

The things I'm looking for are mutual authentication (mTLS), policy routing (i.e., being able to decide who is able to talk to whom), and having it be transparent for an external user. In short, I'm wondering what OpenZiti-fying such a setup would look like. I know I would need to add a controller and a router, but I'm less sure about the rest. My biggest questions are:

  1. Can I force a container to be accessible only through OpenZiti (so nginx can't talk to service C, or service C to service A, for example)?
  2. How would I make some containers still be able to talk with external services?
  3. Similarly, would anyone connecting to nginx need to register with OpenZiti, or should I put the network behind nginx?
  4. Would service A and service C be both identities and services (as defined by OpenZiti), as they both need to access a service but also provide one?
  5. Is there something else better suited to the situation? Most of the setups I have seen with OpenZiti involve multiple devices across different networks.

I'm sorry if there are basic things I misunderstand about OpenZiti.
Thank you for your help!


Welcome to the party, @Snowflake8. It sounds to me like your case is a good match for OpenZiti's capabilities.

Let me make sure I understood you correctly: you are committed to doing this with Docker Compose (as opposed to Linux services or Kubernetes), and you want to control which services can talk to each other, including in which direction (who initiates the connection).

Can you say a little more about how you want to use mTLS? Ziti internally uses mTLS, but you might be thinking about using certificate authentication with the service you're publishing with Ziti too. For example, if you publish a web application with Ziti, your server could require clients' web browsers to present a valid user certificate. That would work with Ziti just fine because Ziti is application-neutral. It encrypts and delivers the whole application packet on a TCP or UDP socket.

One more clarification: when you say "fully local" I think you mean that the servers you are publishing with Ziti are not listening on any IP addresses except the loopback address, so they're not reachable without Ziti outside the device. Is that accurate?


This is a good example of what Ziti does best: total isolation at the connectivity layer, allowing only authorized connections via Ziti services.

Like Service B, right? Ziti's "tunnelers" don't prevent the application from connecting to non-Ziti destinations; they have an additive, split-tunnel mode of operation. A client application that attempts to resolve a DNS name like "myapp.ziti.internal" will first check Ziti DNS, which is always the primary nameserver. If that fails, it should ask the next nameserver. If you can only configure one nameserver, you can tell the Ziti tunneler to recurse instead of failing the query.

Let's distinguish between publishing the Ziti control and data planes vs. publishing your applications with Ziti or Nginx or both.

Control and data planes: at least one Ziti controller TCP port and one router TCP port must be reachable from anywhere.
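As a sketch, that could look like the following in Compose (a hedged example only; 1280 and 3022 are conventional defaults, not requirements — adjust to match your controller and router configs):

```yaml
# Hypothetical port mappings for the Ziti control and data planes.
services:
  ziti-controller:
    image: openziti/ziti-controller
    ports:
      - "1280:1280"   # controller client/management API
  ziti-router:
    image: openziti/ziti-router
    ports:
      - "3022:3022"   # edge listener for SDKs and tunnelers
```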

Your applications: you wanted at least some of the applications to be "fully local," so I don't think you want to publish them with Nginx, but only publish them with Ziti. However, suppose you want to publish a web application with Ziti, and Nginx is providing TLS termination for the web server. Let me know if you have such a need. I have some ideas.

Yes, you'd normally create a Ziti identity for each thing so you can give it a friendly name and a role. The role lets it be authorized for Ziti services in one or both directions, i.e., clients "dial" and servers "bind"/"host" Ziti services.

Ziti can connect different processes on the same host, other hosts in the same network, or span various networks. As far as Ziti is concerned, your Docker bridge networks are just another subnet. You can isolate your apps with a straightforward firewall rule like "allow outbound only," place your Ziti routers so they're reachable from anywhere, like a public IP, or at least reachable from the Ziti identities' subnets, then write Ziti service policies to give permission for each direction (dial or bind) and each application (Ziti service).
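For the "allow outbound only" idea, one possible sketch at the Docker host level (hedged: the subnet is an example, and DOCKER-USER is the chain Docker reserves for user-supplied rules; note this also blocks any `-p` published ports into that subnet):

```shell
# Example only: restrict a Docker bridge subnet (here 172.18.0.0/16)
# to outbound-initiated traffic. Adjust the subnet to your bridge network.
# Insert DROP first; the ACCEPT inserted afterward lands above it.
iptables -I DOCKER-USER -d 172.18.0.0/16 -j DROP
iptables -I DOCKER-USER -d 172.18.0.0/16 \
  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```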

I'm committed to using containers (understand that the full picture is a bit more complicated than what I presented), if only for maintainability. I will use Kubernetes if it becomes necessary, but since that's a whole beast of its own, it would be a last resort.

Correct.
I want mTLS as a mitigation: prevent someone who can snoop on the network from getting information (although you could get this from TLS alone), prevent someone from impersonating a service to communicate with a legitimate one (MitM, for example), and finally (this is less about mTLS), if one of my services is compromised, I don't want it to be able to communicate with all my other services. You would be right to say I'm over-engineering this, but I also want to use this to learn how to design/set up such a system. I'm also considering that if one of my services moves to another machine in the future, I can simply rely on the already existing OpenZiti network.

This is something I want to avoid; I want OpenZiti to not exist from the user's point of view, and to only use it "internally".

Not exactly, but this is on me (I wasn't sure of the best way to put it). What I meant was that all services are on the same machine. The machine is accessible through a public IP, but only for nginx, so the services themselves shouldn't be directly accessible outside the device, with or without Ziti (unless I messed up somewhere ^^")

I'm not sure I understand what you are saying. I think, by your definition, I want to publish my application with nginx, so the latter (?).

The source of most of my confusion, I think, and the big unknown for me: how to integrate Ziti with existing services (when you aren't able to use the SDK, of course).

  1. If communication has to go through a tunneler (I'm basing this on this in your documentation), then a flow like service A --- tunneler --- tunneler --- service B would be rejected whenever OpenZiti doesn't allow it, but one could simply do service A --- service B directly, bypassing the OpenZiti network. I could use separate networks in the Docker Compose file (like one network per service/tunneler pair) to prevent that, but maybe there's a more Ziti-esque solution (or another, more elegant one that doesn't require as many tunnelers).
  2. In the case above, what would the Ziti identities be? Since we can't necessarily modify the services, would it be the tunnelers that take care of it, or another layer between them?
  3. My other question was partially answered by the split-tunnel mode. If I have all the services in my network, would I still be able to expose nginx (and only nginx) to any user, without the need for enrollment or authentication, even though nginx would be in the Ziti network?
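To make the network-per-pair idea from question 1 concrete, the compose layout I have in mind would look something like this (all names made up, and I'm not sure this is idiomatic for Ziti):

```yaml
# Hypothetical: each service shares a private network only with its tunneler,
# so service-a cannot reach service-b directly.
services:
  service-a:
    image: example/service-a
    networks: [a-net]
  tunneler-a:
    image: openziti/ziti-host
    environment:
      - ZITI_ENROLL_TOKEN=...   # enrollment token for this identity
    networks: [a-net, ziti-net]
  service-b:
    image: example/service-b
    networks: [b-net]
  tunneler-b:
    image: openziti/ziti-host
    environment:
      - ZITI_ENROLL_TOKEN=...
    networks: [b-net, ziti-net]
networks:
  a-net:
    internal: true   # no route out except via the tunneler
  b-net:
    internal: true
  ziti-net: {}       # shared with the Ziti router
```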

PS

Are you talking about a normal firewall or something in OpenZiti? (Probably a dumb question.)

Thank you for all the help


Thanks for clarifying! These are good questions, and it's challenging to answer directly without fully understanding the goal, so I've filled in some details that may not be directly relevant. Don't hesitate to ask for clarification.

You can combine tactics depending on your security tolerances, optimizing for fewer tunnelers or less network surface area. We advocate for the "zero-trust" approach where the application is not listening on any address other than localhost, or no address at all if importing the Ziti edge SDK, as you noted.

An example of this zero-trust approach in your Docker environment is configuring application servers to bind/listen on their container loopback instead of 0.0.0.0, effectively isolating them from their peers inside the Docker bridge.

Then, use a Ziti tunneler as a "sidecar," meaning the tunneler and your application share a container network namespace and nameserver configuration. This way, the tunneler's identity is granted permission to bind/host the Ziti service targeting the loopback listener in their shared network namespace, e.g., 127.0.0.1:8000.

There are two approaches to this sidecar solution: uni-directional for only hosting Ziti services, and bi-directional for also dialing Ziti services. The bi-directional approach provides Ziti DNS to the application.

Here's an example of using the openziti/ziti-router container image this way: as a Docker container sidecar (container network mode) providing bi-directional access to Ziti services (including Ziti DNS) to the application with which it is sharing a container network interface.
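A hedged sketch of that sidecar wiring (image names, ports, and the upstream resolver are illustrative, not a definitive configuration):

```yaml
# Hypothetical sidecar: the app and its tunneler share one network namespace.
services:
  ziti-sidecar:
    image: openziti/ziti-router
    environment:
      - ZITI_ENROLL_TOKEN=...   # router enrollment token
    cap_add:
      - NET_ADMIN               # required for the intercept/DNS features
    dns:
      - 127.0.0.1               # Ziti DNS first
      - 1.1.1.1                 # example upstream resolver
  myapp:
    image: example/myapp        # placeholder application image
    network_mode: service:ziti-sidecar    # share the sidecar's netns
    # app binds loopback only, so it's reachable solely via the sidecar
    command: ["myapp", "--listen", "127.0.0.1:8000"]
```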

If Ziti services are only hosted in the sidecar, it's not strictly necessary to configure the Ziti router for bi-directional Ziti services (dial and bind). You could use the openziti/ziti-host image, which requires only the ZITI_ENROLL_TOKEN env var (Containers | OpenZiti) and no additional kernel capabilities. Or, if you wish to continue providing a Ziti fabric point of presence inside the Docker bridge network, you could keep using the Ziti router image, drop the additional capabilities, privileged run-as, and DNS configs, and set the mode to host-only (uni-directional) with ZITI_ROUTER_MODE=host.

You'd have a Ziti identity each for your application's client and server with respective Ziti identity roles like #clients and #servers. You'd administratively create a Ziti service with tunneling "configs" expressing the address clients should use to access the service (the "intercept" config) and the target address to which servers should forward traffic exiting the Ziti service (the "host" config), ideally the loopback address of the hosting identity's tunneler. Finally, you need a Ziti Dial service policy authorizing #clients and a Ziti Bind service policy authorizing #servers.
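In CLI terms, that administrative setup looks roughly like this (a sketch: names, addresses, and ports are examples, and flags can vary between versions, so check `ziti edge create --help` for yours):

```shell
# Identities with role attributes (enrollment tokens written to .jwt files)
ziti edge create identity myapp-client --role-attributes clients -o myapp-client.jwt
ziti edge create identity myapp-server --role-attributes servers -o myapp-server.jwt

# Tunneling configs: where clients connect (intercept) and where the
# hosting tunneler forwards traffic exiting the service (host)
ziti edge create config myapp-intercept intercept.v1 \
  '{"protocols":["tcp"],"addresses":["myapp.ziti.internal"],"portRanges":[{"low":8000,"high":8000}]}'
ziti edge create config myapp-host host.v1 \
  '{"protocol":"tcp","address":"127.0.0.1","port":8000}'

# The service itself, plus one policy per direction
ziti edge create service myapp --configs myapp-intercept,myapp-host
ziti edge create service-policy myapp-dial Dial --service-roles '@myapp' --identity-roles '#clients'
ziti edge create service-policy myapp-bind Bind --service-roles '@myapp' --identity-roles '#servers'
```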

I don't understand this scenario perfectly. You want to publish the Nginx server port(s) with Ziti while they remain published to the internet, right?

Put another way, you could publish the same Nginx server ports with Ziti and concurrently directly to the internet. If you only publish them with Ziti, then the only way to access Nginx will be via Ziti. In either scenario, you need a Ziti service with a host config targeting the Nginx server ports.

I need to go and tinker/try out all the new information and ideas given. I'm quite relieved to know that what I'm trying to do with ziti is not completely outlandish. I'm motivated to give it a go.
I won't hesitate to ask a question if I encounter any issues I can't find a solution to.
Thank you so much for all the insights @qrkourier


I thought of a simpler approach that may be acceptable if you don't need isolation of the containers sharing a subnet like a Docker bridge network. This alternative is simpler because you need fewer Ziti tunnelers, i.e., one per network instead of one "sidecar" per server container.

For example, suppose you have one published server container named "myserver" that listens on 8000/TCP and shares a subnet with supporting applications like SQL and NoSQL database servers. You might decide to accept the network exposure of the application server inside the bridge by configuring your published server applications to bind all interfaces (0.0.0.0) rather than only the loopback, thereby exposing port 8000 to the subnet.

In this scenario, assuming you are only publishing "myserver" with Ziti, you could have a very simple Ziti deployment of the openziti/ziti-host image inside the same Docker bridge network/subnet, which requires only the ZITI_ENROLL_TOKEN variable. Then, you would build a Ziti service with a host config targeting your published server's container name and port, i.e., myserver:8000, and the tunneler would resolve that domain name with Docker DNS when forwarding the packets on the last mile as they exit the Ziti tunnel.
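A hedged sketch of that simpler layout (names invented; the database image is just an example of a supporting service):

```yaml
# Hypothetical: one ziti-host tunneler per bridge network
# instead of one sidecar per server container.
services:
  myserver:
    image: example/myserver   # listens on 0.0.0.0:8000 inside the bridge
  sql-db:
    image: postgres           # example supporting service, not published
  ziti-host:
    image: openziti/ziti-host
    environment:
      - ZITI_ENROLL_TOKEN=... # the only required variable
# all services share the default bridge network; the Ziti service's
# host config targets myserver:8000, which the tunneler resolves
# via Docker DNS when forwarding traffic out of the tunnel
```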