Hi everyone,
I've been testing a couple of things in my lab, but I've been hitting a wall.
In this test I have:
2 x Controllers
1 x Load Balancer
What I want to achieve is an on-premise deployment with the cluster behind load balancers, reachable by cloud routers (multi-cloud).
In my primary controller's config I have this:
raft:
  bootstrapMembers:
    - tls:xxx.xxx.xxx.xxx:8440
    - tls:xxx.xxx.xxx.xxx:8440
..........
ctrl:
  options:
    advertiseAddress: tls:xxx.xxx.xxx.xxx:8440
........
The error shown when trying to start the controller with this config:
FATAL fabric/controller/raft.(*Controller).validateCert: {error=[invalid controller certificate, no controller SPIFFE ID in cert]} controller cert must have Subject Alternative Name URI of form spiffe://<trust domain>/controller/<controller id>
I used the "Host it anywhere" quickstart.
Which address should go into advertiseAddress, the public IP or the hostname?
Is it possible to use load balancers other than nginx?
Hi @MoonCraver, welcome to the community and to OpenZiti!
I have no seat time with HA configurations yet, so sadly I can't help out here. @plorenz or @andrew.martinez, I think one of you is going to need to help out here?
Hi @MoonCraver,
HA isn't feature complete yet. You should be able to get a cluster up and running; the data model is distributed, and distributed routing should be working. However, distributing sessions and posture check data is still in progress, so edge SDK connectivity will either not work at all, or only work sometimes. For now, I'd recommend starting off with a single controller.
If you want to experiment with what's working so far for HA, you'll need to generate your controller certs with a trust domain and SPIFFE IDs. There are some HA docs here. Take a look at the create-pki.sh script.
Your trust root should have a trust domain and your server cert needs a SPIFFE ID. That ID is how controllers identify each other.
# Create the trust root, a self-signed CA
ziti pki create ca --trust-domain ha.test --pki-root ./pki --ca-file ca --ca-name 'HA Example Trust Root'
# Create the controller 1 intermediate/signing cert
ziti pki create intermediate --pki-root ./pki --ca-name ca --intermediate-file ctrl1 --intermediate-name 'Controller One Signing Cert'
# Create the controller 1 server cert
ziti pki create server --pki-root ./pki --ca-name ctrl1 --dns localhost --ip 127.0.0.1 --server-name ctrl1 --spiffe-id 'controller/ctrl1'
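If you want to double-check the result, you can inspect the generated server cert and confirm the SPIFFE ID landed in the SAN URIs. The cert path below is just an assumption about where your PKI root puts the server cert; adjust it to your layout.
# Print the SANs; the URI entry should look like spiffe://ha.test/controller/ctrl1
# (the cert path below is hypothetical)
openssl x509 -in ./pki/ctrl1/certs/ctrl1-server.cert -noout -text | grep -A1 'Subject Alternative Name'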
Let me know if that helps or if I can clarify things some more.
I'll give it a try and ask for detailed help if I missed something.
It would be great to have shared sessions and posture checks working, but as a backup in case one of the servers fails, this is enough for me at the moment.
I've been testing the setup you pointed me to in the docs.
There are some issues that are breaking cluster formation:
1.- When deploying all the controllers, they can't communicate between instances. I also tried with a custom YAML config, but it was impossible to make connections between them with the SDK.
2.- When the raft module loads on the primary instance, it's not forced to become the leader, leaving orphaned cluster members. All of them become voters, but none assumes the leadership role.
3.- When forming the cluster, the SDK forces you to become a voter by default; the "--voter false" flag fails.
Re 1: Can you expand a little on your test setup? It looked like you were setting up a two node cluster. Are you using the config file to add the bootstrap members on one node or both? Or are you using ziti agent cluster add? Do you have raft: { minClusterSize: 2 } set in the config file? I'd like to be able to reproduce what you're seeing.
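For reference, this is roughly the shape of config I mean for a two node bootstrap (the addresses below are placeholders for illustration, not values from your setup):
# hypothetical raft section on the first controller
raft:
  minClusterSize: 2
  bootstrapMembers:
    - tls:ctrl1.example.com:8440
    - tls:ctrl2.example.com:8440
ctrl:
  options:
    advertiseAddress: tls:ctrl1.example.com:8440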
Re 2: In Raft there's no dedicated leader. All voting members of the cluster are eligible to become the leader. There is a ziti agent cluster transfer-leadership command, but that is intended primarily for graceful upgrades, etc.
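For completeness, that command is run against the agent of a controller that's currently in the cluster, along these lines:
# hand leadership off to another voting member (mainly useful before upgrades)
ziti agent cluster transfer-leadership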
Re 3: Sounds like a bug, I'll investigate.
Thanks for sharing your test results. I'm not confident your setup will work for everything you want to do, but at a bare minimum we should at least be able to get the cluster up and running, and admin functionality should work.
Doing some testing, it looks like the ALPN work that went in with 0.30.0 (allowing consolidation to a single listening server port) has broken HA cluster setup. Because HA isn't feature complete yet, HA smoketests aren't running for releases yet. I'll let you know when a fix is available.
Hi @MoonCraver, a fix for the cluster connections issue has been released in v0.30.3. Let me know if you get a chance to try it, and if so, how it goes.
I may also send you some improvements for the manual, updating the deployment and controller configuration docs.
After running some performance and debugging tests, I'll test it as part of a pool behind a software load balancer, with ZAC running in Docker and certificates (self-signed and/or public).
As a milestone, once router enrollment is done, the router should be able to recognize the controllers in the HA cluster and balance traffic according to load.
Thanks, @MoonCraver, let us know how your testing goes. If you've got doc tweaks, I'll be happy to incorporate them. I still need to move the docs from ziti/doc/ha into the ziti-doc repo and do some reorganization, clean-up and expansion. As I mentioned earlier, I'll be surprised if things work for you if you're using SDK clients. If you're using non-edge based ingress into the fabric, using something like xgress_proxy, that should work. If you're using tunneler-enabled routers, that may work; I haven't thought it through to see if it would be ok. Either way, I'm curious to hear how it goes.
I would like to ask about the possibility of scaling out the OpenZiti controller.
As I understand it so far, HA means there will be a failover function to keep the service available.
But under heavy load, isn't it better to apply a scalable (scale-out) deployment for the OpenZiti controller?
Let me run through the various kinds of availability and scale you can expect. I'll use HA to mean high availability, meaning we can withstand losing one or more servers. I'll use HS for horizontally scalable, meaning that the load can be spread across multiple servers.
Services can be terminated on multiple routers. Based on how it's configured, this can be done to provide just HA or HA and HS. See the guide for more details.
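As a rough sketch of what that looks like from the CLI, here a service gets a terminator on each of two routers; the service's terminator strategy then controls how traffic is distributed across them. The service and router names below are made up for illustration.
# hypothetical service with one terminator per edge router
# smartrouting is the default strategy; see the guide for the other strategies
ziti edge create service demo-svc --terminator-strategy smartrouting
ziti edge create terminator demo-svc router1 tcp:10.0.0.10:8080
ziti edge create terminator demo-svc router2 tcp:10.0.0.11:8080
Depending on the strategy chosen, losing one router leaves the other terminator available (HA), and traffic can also be spread across both (HS).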
If the controller is running in a cluster, then circuits can be routed on any controller so once a service is configured, routing can be load-balanced across the controllers and is HS. The only caveat here is that whichever controller routes a circuit also owns that circuit and other controllers do not know about the circuit and cannot take it over if needed. If that controller goes down, the circuit will continue to work, but re-routes won't work. Improving the routing model will be a focus of post-HA development.
For model updates (creating/updating/deleting services, identities, configs, policies, etc.), the controllers use Raft. So model updates are HA, not HS. This is because for model updates we prioritized consistency over availability. There are some use cases where this might not be the right trade-off. For those, we're thinking about what makes sense. Initial thoughts have been around multi-tenant routers, so that routers can be shared by independent controllers, or some version of controller federation. This is an area where we don't have a clear plan yet.