System Requirements for self-hosted OpenZiti

What are the system requirements for OpenZiti? I am going to spin up a cloud instance and want to right-size a Linode VM. I can’t find any details in the docs or the forum… but I might be searching for the wrong thing.

Do I need a dedicated CPU instance? I am leaning more toward a shared CPU instance since the load will be very sporadic I would assume.

How about RAM, what’s a good amount?

How about storage?

I am kicking the tires on this… so I don’t see this as a production instance.

Not familiar with Linode, but we typically get by with something like a t2/t3.micro AWS instance for most non-production things. So, that would be 1-2 VCPU and 1GB RAM. It really depends on how much throughput you are hoping to push through the network, and that is typically CPU bound. Hope this helps.


Yeah, the t2/3.micro helps, thanks. I have been using linode for a bit, I used to be all-in with AWS but have been using linode for little POCs, then build something in CloudFormation/terraform when I am ready to make it production.

Hi @jptechnical, while we don’t have an official minimum requirements list, I can provide some info on what I’ve used in various cloud computing scenarios.

I have an Oracle VPS that I use for my Home Assistant, security camera system, a simple website with low activity, and my NAS device. The specs on this are very minimal, but Oracle has a block volume set up for storage; all instances share a 200GB volume. Other than storage, I have the following and, while I don’t have much traffic going over the network, I have not had any issues, including transmitting video streams.

  • 1 vCPU
  • 1 GB of RAM
  • 0.48 Gbps network bandwidth

Alternatively, on AWS I’ve used a t2.micro for just messing around and testing things, which has

  • 1 vCPU
  • 1 GB of RAM
  • 8 GiB of storage

I have run into issues with the 8 GiB of storage when I want to install other programs or download repos; that space gets used up fast. In those cases I’ve used a t3.medium with 15 GiB of storage, and that has been more than enough for anything I’ve done.

As @padibona mentioned, it all comes down to how much throughput you plan to pass. Of course, I have to plug the OpenZiti Quickstart, which will make it easy to create a network on a new system if you don’t want to bother migrating your existing network.

I am following that, yes. I do wonder whether this is just a quickstart, or whether it is also the recommended method for a production install. If I can offer feedback: it's not clear whether this install method is meant only for evaluation or is also recommended as suitable for production.

The “quickstart” aspect of the quickstarts is simply that most of the work is done for you; there is even a PKI generated if you don’t have one already.

You can certainly override aspects of the quickstart. However, with most of the quickstarts, the intent is that you only need to execute a single command and possibly answer a question or two, and you’ve got a fully functional network ready for use.
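For reference, the “host it yourself” express install flow boils down to sourcing a set of helper functions and running one of them. The sketch below is illustrative; the script URL and function name are from the quickstart docs of this era and may have changed, so treat the current docs as authoritative:

```shell
# Illustrative sketch of the express install flow -- check the current
# quickstart docs, as the script URL and function name may have changed.
source /dev/stdin <<< "$(wget -qO- https://get.openziti.io/ziti-cli-functions.sh)"

# Generates a PKI, a controller config, and an edge router config,
# then starts both components.
expressInstall
```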

I think that, in most cases, the quickstart is used to mess around and get to understand OpenZiti, but I do know that a lot of users have used the quickstart for their own personal networks. In fact, I use a network set up with the quickstart on the Oracle VPS I mentioned above. I’ve been running that for almost a year now with no issues. So, at least personally, I feel the quickstart can be used for a production environment, at the very least on a small scale.

Ultimately, it would depend on the specifics of your network. To be honest, I’m not entirely familiar with the production side of a Ziti network, so I’m hoping someone more familiar will chime in here. In the meantime, you can check out the deployments section of the docs to get insight into what is involved there.


For a production system, there are a couple of things I would do. If the system is relatively small (fewer than 50-100 users, with normal types of usage rather than automated, very high-speed dial rates), we usually use a 2 CPU, 4GB RAM system for the controller: dedicated, not burstable. In AWS we mostly use c5.large instances and scale up as necessary. We also use 100GB of storage by default so we have enough room for logs, which can get quite large depending on what masking you are using. If you have a special setup, it may need more.

The quickstart doesn’t deploy a systemd-managed service, at least not the last time I looked, so you would want to build a service file so systemd can restart it if the node bounces, or if there is any issue with the software.

In production, I would also think about how you are monitoring. The logs can become very cumbersome if you aren’t using tooling; whatever you might use for other systems (Elastic, Datadog, etc.) works, as long as you have some search capabilities. Not a must-have, but you will appreciate having it when you are troubleshooting something. Metrics and events are important as well: there is a Prometheus endpoint for the metrics, while events you have to ship to your logging facility or something similar.
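To make the service-file idea concrete, a minimal unit for the controller might look like the sketch below. The binary path, config path, and user are assumptions based on a typical quickstart layout, not the official layout; the deployment docs provide ready-made units you should prefer:

```ini
# /etc/systemd/system/ziti-controller.service -- minimal sketch.
# Paths and the service user are assumptions; adjust to your install.
[Unit]
Description=OpenZiti Controller
After=network-online.target
Wants=network-online.target

[Service]
User=ziti
ExecStart=/opt/openziti/ziti controller run /opt/openziti/controller.yaml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload && systemctl enable --now ziti-controller` would start it and have systemd restart it on failure or reboot.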

The software is the same, quickstart or no, so that’s not the issue, it is the environment, as above, that really changes when you move from “kick the tires” to production operations. The main infrastructure is usually Linux nodes, so whatever processes you use there for hardening, monitoring, etc. should all work fine.

That helped a lot. And that makes a lot of sense, regarding vertical scaling.

One more question. Is there a compelling reason to use a locally installed edition of OpenZiti vs a docker-compose install on a public cloud server? I ask because I have run a LOT of production systems as containers, and compose is a fine (simple) orchestrator. I have gotten out of treating servers as pets... and running a fully locally installed application feels like an antipattern to me. Short of a K8s setup, is there any objection to running this in docker-compose on a right-sized host (assuming the engineer is familiar with containers)?

It does now, there are copy/paste commands in the step-by-step doc to create the units and start the service(s).

I am not a very container savvy person, so I can’t answer that question well, though perhaps @KENCHLCS can chime in. Thanx for the note on the service file, if it was in the instructions, there’s a good chance it was there before and I just didn’t follow them :slight_smile:

@jptechnical Like you, I’ve found Docker Compose simple and useful for managing containerized server apps. No, there’s no objection to using the Compose quickstart if it meets your prod requirements.
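For a feel of what that looks like, here is a heavily abbreviated compose sketch. The image name, ports, and environment variables are assumptions for illustration only; use the compose file shipped with the official Docker quickstart rather than this:

```yaml
# docker-compose.yml -- illustrative sketch, NOT the official quickstart file.
# Image name, port, variable names, and volume layout are assumptions.
services:
  ziti-controller:
    image: openziti/quickstart:latest
    environment:
      - ZITI_CTRL_ADVERTISED_ADDRESS=ziti.example.com
    ports:
      - "1280:1280"
    volumes:
      - ziti-fs:/persistent
    restart: unless-stopped

volumes:
  ziti-fs:
```

The `restart: unless-stopped` policy gives you roughly what the systemd unit gives a local install: the controller comes back if the container or host bounces.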

Alternatives to local install include deploying to K8s with Terraform (e.g., Linode Kubernetes Engine). I’m working on a Terraform Cloud gitops example. Let me know if you’re interested in that. I’m not aware of any Ansible+Helm examples shared with the community yet, but contributions are welcome in our Ansible Collection.

The Kubernetes quickstart includes a BASH script miniziti.bash and step-by-step instructions detailing the actions performed by the script for deploying a Ziti controller and router in K8s with helm CLI. If you don’t use Terraform, you could reference the K8s quickstart to deploy to any K8s cluster.
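As a rough sketch of the helm CLI steps the script automates: the chart repo URL is the published OpenZiti one, but the specific values keys below are assumptions; miniziti.bash and the charts' READMEs are authoritative:

```shell
# Illustrative sketch of deploying the controller and router charts.
# Values keys (advertisedHost, enrollmentJwt) are assumptions -- check
# the chart READMEs before using.
helm repo add openziti https://openziti.io/helm-charts/
helm repo update

helm install ziti-controller openziti/ziti-controller \
  --namespace ziti --create-namespace \
  --set clientApi.advertisedHost=ziti.example.com

helm install ziti-router openziti/ziti-router \
  --namespace ziti \
  --set-file enrollmentJwt=./router.jwt \
  --set edge.advertisedHost=router.example.com
```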
