Using OpenZiti and zrok to Support "On-Prem" Services

I had a really interesting meeting today which involved describing how we're able to deploy our services and solution stack in our clients' environments and then be able to manage and support those deployments securely with a closed firewall and no need for a VPN, thanks to zrok. There was some very satisfying stunned silence! There is very little burden on our clients' IT departments and the solution is more secure - WIN-WIN.

But there is a lot that I don't know and can't explain (my background is in building applications and not in networking or security) and a lot more we could be doing with a zero-trust network in place.

Generally speaking, we need to be able to do the following in a client's environment,

  • deploy (install and configure our services)
    • requires SSH and root privileges on the provisioned VPSs
  • maintain
    • upgrades
      • OS
      • our services
  • support
    • access to a web portal, possibly several
    • access to cached files and logs

In the past we would ask for VPN + open ports (22, 3000), but now we can use zrok.
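
To make that concrete, the kind of thing we can do now instead of opening port 22 looks roughly like this (a sketch only; the exact flags and the local port zrok binds depend on the zrok version, and the hostnames/users here are made up):

# on the client's VPS (share side): expose sshd privately over zrok, no inbound port needed
zrok share private --backend-mode tcpTunnel 127.0.0.1:22

# on our side (access side): bind the private share locally using the token it printed
zrok access private <share-token>
ssh -p 9191 support@127.0.0.1   # 9191 is zrok's usual local bind; yours may differ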

It would be really useful to,

  • as a non-security/networking expert, be able to explain what OpenZiti and zrok are
  • have additional ideas for how a zero-trust network could be used to do this kind of SA job

I was watching this video earlier today. I would love to see one which aims to demystify zero-trust networks for application developers and SAs, which would be useful for me and for giving our clients confidence in how we're delivering our services, particularly from a security perspective. Maybe something for ZitiTV. I'm happy to work on defining the topic more.


demystify zero-trust

Thanks for that idea, I really like it. I'll try to queue that up for a Ziti TV very soon! If you have specifics about what you might want to hear demystified, I'm sure one of us can help you here in the meantime.

I am obviously biased, but I like the content I produce :sweat_smile: (@TheLumberjack's is great too :wink:).

Here is a blog I wrote, quite literally with demystifying ZT in the title, great if you like a little Harry Potter - Demystifying the magic of Zero Trust with my daughter and opensource - NetFoundry.

I also do a lot of podcasts and webinars; here is the most recent one I gave at the CSA last week - Philip Griffiths on LinkedIn: Google Calendar - Easier Time Management, Appointments & Scheduling

I would also note that your use case is exactly the one we aim to enable with OpenZiti/NetFoundry, as it's a truly 10x outcome. To quote someone else, "I can now get access to my customer's environment without them needing to give me a 'backdoor'." I would be happy to have a direct chat and help fill in your knowledge. Feel free to DM me.

Edit: this attached picture is good, too. It portrays how we can give app-level, not network-level, access. While it shows this from an app-embedded standpoint, logically you can do something similar without being app-embedded.

@TheLumberjack @PhilipGriffiths Thanks for the replies, gents. I will give this topic some more thought and maybe come up with something snappy and a bit more specific which would lend itself well to a ZitiTV type discussion.

I will also read those blogs, @PhilipGriffiths. Thanks for the links.

In my specific case I'm managing several deployments of a custom app which is run using docker compose. Each of the deployments "calls home" with operational logs, which allows me to do monitoring and alerting, i.e. provide the app as a managed service. As part of the managed service, I need to support those deployments, which includes support at the host OS level. I don't know how typical this model is (it seemed obvious to me, but I don't know of anyone else doing it this way).

I'm currently public sharing (with SSO) the web servers (port 3000 exposed at the host level) using a self-hosted zrok. This works really nicely! But I still need to be able to get to the host (to, say, upgrade the app or update the OS) and "into" the docker compose services to inspect logs, traces, etc.
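
For reference, the public share itself is roughly this (a sketch; I've left out my actual SSO/OAuth flags because they depend on the zrok version and frontend configuration):

# reserve a stable public proxy share for the app's web portal on port 3000
zrok reserve public --backend-mode proxy http://127.0.0.1:3000
# keep the reserved share running (using the token printed by the reserve step)
zrok share reserved <share-token>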

  • I could run zrok vpn at the host level.
  • I could put zrok into the docker compose stack (web, drive) to share those log or trace files (sketched after this list).
  • Any number of other things, I'm sure.
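
As a sketch of that second option (the path and share token are made up, and the zrok environment would need to be enabled inside the sidecar container):

# inside a small zrok sidecar container in the compose stack:
# share the app's log/trace directory as a zrok drive
zrok share private --backend-mode drive /mnt/app-logs

# from my workstation: bind the drive share locally (zrok drives are WebDAV-based)
zrok access private <share-token>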

I know I'm only scratching the surface with my current zrok usage. But given I'm currently hosting my own zrok and I have complete flexibility, I feel like I can "10x my experience" and end up with a completely standard way of supporting my deployments, regardless of where they are deployed, but with zero-trust and very minimal interaction with IT departments.

Thing is, I don't know what I don't know, so it's hard to be more specific about what I want to do. Maybe this is what I mean about "demystifying" ZT networks, i.e. help the uninitiated to open their minds to the practical possibilities.

I use zrok and OpenZiti in tandem to diversify the means of remote access. Here's my preferred recipe. Both are solid, so I don't think of one as a front door vs. a back door.

  1. zrok-share.service systemd unit running a VPN or proxy private share backend. For example, a private proxy share to Cockpit on 127.0.0.1:9090 or a subnet allocation w/ zrok vpn.
  2. ziti-edge-tunnel.service systemd unit with permission to host a Ziti service with wildcard intercept like *.node22.ziti.example.com that forwards ports 1-65535/tcp,udp to the node's loopback address.
  3. For extra safety, an openziti/ziti-host container running with host network mode and permission to bind the same node-local Ziti service provides two paths to the loopback (systemd vs. dockerd).
  4. Disable external access to the OpenSSH server port. I always retain the ability to re-publish the port through a cloud provider or remote hands, but I haven't had to do that yet.

Crucially, these remote access methods rely on independent infrastructure, not the same OpenZiti network or node service type (systemd vs. dockerd). This gives me an acceptable blend of flexibility and reliability for privileged node access.
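
With that in place, and a Ziti tunneler enrolled on my workstation with dial permission for the service, day-to-day access looks something like this (illustrative names):

# SSH to the node over the Ziti intercept; the local tunneler handles DNS and routing
ssh admin@node22.ziti.example.com

# any loopback-only port on the node is reachable the same way, e.g. Cockpit on 9090
curl -k https://node22.ziti.example.com:9090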

Here are examples of OpenZiti service configs that I use for node access.

intercept.v1 config

{
  "name": "node22-local",
  "configTypeId": "g7cIWbcGg",
  "data": {
    "addresses": [
      "node22.ziti.example.com",
      "*.node22.ziti.example.com"
    ],
    "portRanges": [
      {
        "high": 65535,
        "low": 1
      }
    ],
    "protocols": [
      "tcp",
      "udp"
    ]
  }
}

host.v1 config

{
  "name": "node22-local",
  "configTypeId": "NH5p4FpGR",
  "data": {
    "address": "127.0.0.1",
    "allowedPortRanges": [
      {
        "high": 65535,
        "low": 1
      }
    ],
    "allowedProtocols": [
      "udp",
      "tcp"
    ],
    "forwardPort": true,
    "forwardProtocol": true
  }
}
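
For completeness, a sketch of how these configs could be wired into a dialable service with the ziti CLI (names and role attributes are illustrative; check the current CLI help for exact arguments):

# create the two configs; the create command takes just the "data" portion of the JSON above,
# saved here to local files
ziti edge create config node22-local-intercept.v1 intercept.v1 "$(cat intercept.json)"
ziti edge create config node22-local-host.v1 host.v1 "$(cat host.json)"

# create the service referencing both configs
ziti edge create service node22-local --configs node22-local-intercept.v1,node22-local-host.v1

# allow the node's identity to bind (host) the service and admin identities to dial it
ziti edge create service-policy node22-local-bind Bind --service-roles '@node22-local' --identity-roles '@node22'
ziti edge create service-policy node22-local-dial Dial --service-roles '@node22-local' --identity-roles '#node-admins'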

Want trusted certs for the node's web apps too? That's an easy thing to add on if you choose a domain name you control for your Ziti service intercept address, e.g., example.com in these examples.

I like to use Caddy running in a container because I can drop my DNS provider API token in Caddy's env and it will continually renew trusted server certs for any number of web apps served by that node. This does not require Caddy to be published for the CA to verify because it solves a DNS challenge with ACMEv2.

For each app, I add a stanza in the Caddyfile. You could do this with one Compose project containing Caddy and everything else on that node, or use ports published on the node's loopback.
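
Roughly, each per-app stanza could look like this (a sketch; the DNS directive syntax depends on the provider plugin built into Caddy, and the hostname, port, and Cloudflare provider here are only examples):

# append a stanza to the Caddyfile for one of the node's web apps
cat >> Caddyfile <<'EOF'
app.node22.ziti.example.com {
    tls {
        # DNS-01 challenge via the provider plugin; token comes from the container env
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:3000
}
EOF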


Thanks @qrkourier. Your reply is hugely appreciated.

I'm leaning towards keeping SSH via zrok available as the front door for each of my deployments, and then running zrok web proxy and drive shares alongside my docker compose "stack" for each deployment.

I have a concern which has been bugging me for a while: each of my app deployments will have the same zrok account secret token available somewhere on the deployment's host machine in plain text. And typically the host machine is a VPS in our clients' infrastructure, i.e. our clients have root access to those host machines and could sniff out the token if they knew where to look. That would mean they'd be able to retrieve the keys to the kingdom (access to the zrok shares for all my clients). Of course, I could use a different account for each of my deployments, but that would make it harder for me to manage on my side.

This shares some similarity (sort of) with this post: Compromised tokens.

I guess I'm asking for general views on this concern and the way that I'm using (or thinking about using) zrok. Maybe there's a way to "flash" the token into and then out of the host machine, so that it doesn't sit there. Any other ideas?

Your concern is legitimate. If you re-use a zrok account for each deployment, and you're forced to rotate that token due to a compromise, then all environments enabled with that account token will stop working.

The best fit for zrok is to provision a zrok account for each deployment. This can be done with the API, the CLI, or the email invitation workflow, and I'm guessing the CLI is the simplest way.

  1. Run a zrok instance: Limitless zrok with Docker

  2. Provision accounts at will:

    zrok admin create account /etc/zrok-controller/config.yml ${ZROK_USER_EMAIL} ${ZROK_USER_PWD}
    

Each account has a unique email, but it's unnecessary to configure the zrok controller to send emails if you're managing everything with CLI or API. Email sending credentials (SMTP) are required only for zrok user account self-service.
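
Each deployment then enables its own zrok environment with its own account token, something like this (the variable name is just for illustration):

# on one deployment's host, using only that deployment's account token
zrok enable ${ZROK_PER_DEPLOYMENT_TOKEN}

If one host is compromised, only that account's token needs rotating.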


If you'd prefer not to run your own zrok instance, then it's a good time to consider using OpenZiti directly, because the tunneler daemon I mentioned using for node access will have a unique identity for each of your customers and so does not present the same risk of a compromised node affecting other sites/customers.
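
If you went that route, provisioning per-customer access would look roughly like this (a sketch; names, role attributes, and file paths are made up, and exact arguments vary by ziti version):

# one Ziti identity per customer node; the JWT is enrolled by the tunneler on that node
ziti edge create identity customer-acme-node --role-attributes customer-nodes -o customer-acme-node.jwt
ziti-edge-tunnel enroll --jwt customer-acme-node.jwt --identity customer-acme-node.json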


@qrkourier I've noticed a couple of things which are out-of-date at Self-hosting guide for Docker,

  • caddy doesn't appear to have a list-certificates command
    • see here
    • I'm using v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk= built with xcaddy to include the caddy-dns desec plugin - perhaps the problem is how my caddy was built
  • creating an account now takes a config file
    • see here
    • this command
      docker compose exec zrok-controller bash -xc 'zrok admin create account ${ZROK_USER_EMAIL} ${ZROK_USER_PWD}'
    • should rather read
      docker compose exec zrok-controller bash -xc 'zrok admin create account /etc/zrok-controller/config.yaml ${ZROK_USER_EMAIL} ${ZROK_USER_PWD}'

Good catch. I sent a PR to prune the incorrect troubleshooting advice from the Docker instance guide.

The create account problem you encountered is now reconciled. The documentation was published a few hours before v0.4.39 was released, which changed the parameters to no longer require the path to the config. Instead, this version uses the controller API to create accounts, so it requires the ZROK_ADMIN_TOKEN and ZROK_API_ENDPOINT variables instead of the config file.
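
So with v0.4.39 and later, the equivalent of the command above would presumably be (assuming ZROK_ADMIN_TOKEN and ZROK_API_ENDPOINT are already set in the controller container's environment):

docker compose exec zrok-controller bash -xc 'zrok admin create account ${ZROK_USER_EMAIL} ${ZROK_USER_PWD}'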