Why use an overlay network when you still need an open port in a service mapping

I was chatting with a friend today about OpenZiti to better understand other people’s viewpoints.

One comment that caught me by surprise was the following:

Why is an open port on an overlay network better than an underlay network? (p.s. this has been paraphrased to make it easier to share)

For instance, consider the private Postgres database example, where you can close all of the ports on the Docker container to make it completely locked down. With Ziti, you can deploy a local edge router, create a host config on the Postgres instance, open a “ziti port” by configuring a service, and then use the overlay network to execute the same SQL statement.

From the observation of an outsider with a basic technical understanding, you still need to have a port open… so why bother?

So… I thought I would put a post up about this to see how others would respond in a non-technical way.

For instance, I have a few reasons… but they are a bit too technical for the audience I want to connect with.

  • Mutual TLS
  • Strong identities
  • Micro network segmentation
  • Eliminate side attacks
  • Continuous authentication
  • etc
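To make the first bullet concrete for readers who do want the technical version: mutual TLS means the server refuses any peer that cannot present a certificate signed by a trusted CA, rather than accepting anyone who can reach the port. A minimal sketch of that idea using only Python's standard library (the file names in the comments are placeholders, and this is the general mTLS pattern, not OpenZiti's internal implementation):

```python
import ssl

# A plain TLS server authenticates itself to clients but accepts any
# client. An mTLS server additionally REQUIRES the client to present a
# certificate signed by a CA the server trusts — this is what turns a
# "port" into something only strong identities can use.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED          # client MUST present a valid cert
# ctx.load_cert_chain("server.pem")          # server identity (placeholder path)
# ctx.load_verify_locations("clients-ca.pem")  # CA that signs client identities

# Any handshake from a client without a cert from that CA now fails
# before a single byte of application data is exchanged.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```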

Please add other reasons that would be important to highlight.

I think in this specific case, an analogy would be more appropriate than a technical response… to emphasise how the future of the internet experience is changing due to the increasing prevalence of cyber attacks.

For instance, there is a great metaphor that Kenneth made in the following video… about how to protect your home from uninvited guests, e.g. the idea of a secret map. Is this what you find to be the best way to communicate the message to non-technical people? I am curious to learn more about how others communicate the message.

‘Why is an open port on an overlay network better than an underlay network?’ Two things come to mind:

  • (1) There is no open ‘inbound’ port on the overlay. This is a very important nuance.
  • (2) We are talking about completely different things in a modern, virtualised infrastructure.

Let’s use an example to explore both of the above. I build my private Postgres database in a cloud (e.g., AWS or Oracle).

  • With OpenZiti
    • (1) In my VNet/VPC, I set it up to have only a NAT gateway with outbound-only access, or even a firewall with deny-all inbound. There is no way for external malicious actors to reach the DB. You have to authenticate and be authorised on the overlay and for the service/app first.
    • (2) While my service/VPC does not need a public IP, the cloud data centre still has public IPs. But those are not associated with my VPC/app/service at all. The maximal attack vector, DDoS, would be to take down the whole AWS data centre: a missile-to-crack-a-nut scenario. Further, from the OpenZiti perspective, if architected correctly, such an attack would cause smart routing to avoid that location, thus negating its impact.
  • Without OpenZiti
    • (1) In my VNet/VPC, I either need to expose public IPs for the DB or set up a VPN/bastion etc. While the DB may not have a public IP, my environment needs one, plus an inbound port.
    • (2) My VPC/VPN/bastion has a public IP and inbound port. Many external network attacks, incl. DDoS, port scans, CVE exploits, and brute force, can take place against my single tenancy and services… potentially even the application.
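The "no open inbound port" point can be hard to picture, so here is a toy sketch of the pattern (this is an illustration of the outbound-only rendezvous idea, not OpenZiti's actual protocol): the protected service makes only an *outbound* connection to a relay, the way a Ziti-hosted service dials out to an edge router, so the service itself never listens for inbound traffic. The relay, ports, and "SQL" here are all invented for the demo.

```python
import socket
import threading

def run_relay(info):
    # Toy "edge router": one listener for the service's outbound dial,
    # one for the client, then it shuttles a single request/response.
    srv_side, cli_side = socket.socket(), socket.socket()
    for sock in (srv_side, cli_side):
        sock.bind(("127.0.0.1", 0))   # OS-assigned ports
        sock.listen(1)
    info["service_port"] = srv_side.getsockname()[1]
    info["client_port"] = cli_side.getsockname()[1]
    info["ready"].set()
    s, _ = srv_side.accept()          # the service dialled OUT to us
    c, _ = cli_side.accept()          # the client dialled in to us
    s.sendall(c.recv(1024))           # forward the client's query
    c.sendall(s.recv(1024))           # forward the service's reply
    for sock in (s, c, srv_side, cli_side):
        sock.close()

def run_service(relay_port):
    # The "database": no listening socket at all — it only dials out.
    conn = socket.create_connection(("127.0.0.1", relay_port))
    if conn.recv(1024) == b"SELECT 1;":
        conn.sendall(b"1")
    conn.close()

info = {"ready": threading.Event()}
threading.Thread(target=run_relay, args=(info,), daemon=True).start()
info["ready"].wait()
threading.Thread(target=run_service, args=(info["service_port"],), daemon=True).start()

# The client reaches the service only via the relay; a port scan of the
# service's host would find nothing listening.
client = socket.create_connection(("127.0.0.1", info["client_port"]))
client.sendall(b"SELECT 1;")
reply = client.recv(1024)
client.close()
print(reply.decode())  # prints "1"
```

The design point the sketch captures: the service's environment can sit behind deny-all-inbound, because every packet it exchanges rides a connection *it* initiated.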

In terms of a simple analogy, I wrote this piece earlier this year… it potentially requires some minimal understanding of Harry Potter 🙂 Demystifying magic of Zero Trust with my daughter & opensource

Make sense??


This is very helpful. I will need to digest this… and provide more comments later tomorrow.

It definitely helps with building the right way to position it against other alternatives…

Thanks for highlighting this point. It really emphasises the way the Controller CA and Edge Controller CA work together to provide a secure overlay network layer. It’s very subtle but very important.

This is a very interesting point to further expand on… as it highlights the challenges with the current infrastructure… i.e. that the network layer is still constrained, as the logic cannot be decoupled from a hardware device… unless, of course, you use Ziti.

Now that I reflect on this… I believe it makes a lot of sense to lead with the virtualisation story. This is where the “current state” problem is… then you can lead them on a journey to the “future state” that emphasises the first point: “no open ‘inbound’ port on the overlay”.

One more point I also feel needs to be emphasised is that current vendor solutions lock clients into a specific vendor’s product road map. While some clients like this… because it delegates the decision and responsibility… it’s not something that every client wants…

Ziti breaks this cycle by offering a way to pursue a more open and independent pathway… one that can accelerate your Zero Trust journey.

I am going to test this out a bit with a few people over the coming weeks to see if this changes how the conversation progresses.


Where the ‘open port’ lives depends upon where the Ziti component is installed. If using ZTAA (i.e. at the application level), then there is no port. With ZTNA, sure, there is an open port between the router and the service (even if on the local machine), Postgres in this case. However, good cyber hygiene means only having ports open to the services that need them, so port scanning won’t see what you are running on the port. I.e., if using Postgres for a web server, only allow connections from the web server. So, in this case, you would only allow connections from the edge router, and it cannot be found by port scanning unless you are on the edge router.
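The port-scanning point above can be demonstrated with a toy scanner (a sketch for illustration only, not a real security tool): a TCP scan simply tries to connect to each port, and a port that is firewalled or has nothing listening gives the scanner nothing to find.

```python
import socket

def scan(host, ports, timeout=0.2):
    """Toy TCP 'connect' scan: report which ports accept a connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused or filtered — invisible to the scanner
    return open_ports

# Open one deliberate listener, then scan a small range around it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))    # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
found = scan("127.0.0.1", range(port - 2, port + 3))
print(port in found)  # True — only a port that actually accepts shows up
srv.close()
```

This is also why restricting a port to a single allowed source (the web server, or the edge router) makes it effectively invisible to a scan from anywhere else: the scanner's connection attempt is simply refused or dropped.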

Why is it better…I see two reasons

  1. Because you are lifting the Postgres server onto the fabric and have the ability to make it available to anyone on the fabric — be it Azure, GCP, AWS… whatever — in a controlled, non-VPN way.
  2. If you connect the web server to the DB over Ziti (and I’m not saying you should or would, as they would generally be local), then there is a layer of abstraction, as each does not know where the other is.

Great point… which reminds me of another point @TheLumberjack and @qrkourier have made in the past re portability.

I can see how important portability is when seeking to keep your infrastructure aligned with the latest virtualisation capabilities.