Hey @sam-ulrich1 I'm looking at this now. I don't yet know whether you're hitting the same problem another user reported a couple of days ago (link), but it sounds suspiciously similar. Both cases involve pieces of the router's identity in Kubernetes. I suspect the other issue was caused by skipping Helm chart versions during an upgrade, and it didn't recur with a fresh install.
Are you seeing this on a fresh install?
Let's check for obvious problems with kubectl get events --all-namespaces --watch (or narrow to Ziti namespace) while creating the Helm release for the Ziti Router, and kubectl describe pods --selector app.kubernetes.io/component=ziti-router after creating the release.
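For example (assuming your Ziti release lives in a namespace named ziti; adjust to taste):

```bash
# Watch cluster events while the router chart installs
kubectl get events --all-namespaces --watch
# or narrow the watch to the Ziti namespace
kubectl --namespace ziti get events --watch

# After creating the release, inspect the router pod(s)
kubectl --namespace ziti describe pods --selector app.kubernetes.io/component=ziti-router
```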
Where was that MountVolume.SetUp error emitted?
Does that expected K8s Secret exist in any namespace?
I assume <TOKEN> is a redaction, and the token was valid. I often check tokens with https://jwt.io if not sensitive, or this Python script if I need to keep the token offline.
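If you want a quick offline peek without that script, something like this also works (a sketch; it assumes the JWT is in $TOKEN and jq is installed, and it only decodes the claims, e.g. to check exp, without verifying the signature):

```bash
# Decode the JWT's payload (base64url, unpadded) and pretty-print the claims.
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
printf '%s' "$payload" | base64 -d | jq .
```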
This is deployed on an RKE2 cluster with project network isolation enabled, but the ziti namespace has an allow-all policy on both egress and ingress.
Controller is successfully deployed with cert-manager, trust-manager, and console in the same namespace.
The token is redacted, and it appears to be valid. I retrieved it from the console after creating the router there.
This is a fresh install via Helm with the latest chart version. I've done a fresh install a couple of times and hit the same issue each time.
The Secret is nowhere to be found in any namespace.
The log comes from the Events tab for the pod in Rancher, but I assume a kubectl describe on the pod would show the same thing, since that's where Rancher gets those events.
I'll perform a fresh install tomorrow and get the raw k8s logs from the install. I'm tied up for the rest of the day. Thanks so much for the help!
You bet! The chart has lifecycle hooks for installation and upgrade, and they both do the same thing: check whether the Secret exists and, if not, enroll the router and create the Secret.
The app pod waits for that Secret to appear, but something must've gone wrong with the hook Job.
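For context, the hook's logic is roughly this (a sketch, not the chart's actual script; the namespace, file paths, and Secret name are placeholders):

```bash
# If the identity Secret is missing, enroll the router and create the Secret.
if ! kubectl --namespace ziti get secret my-router-identity >/dev/null 2>&1; then
  # enroll with the one-time JWT, which writes the identity cert/key/CA files
  ziti router enroll /etc/ziti/config/ziti-router.yaml --jwt /tmp/enrollment.jwt
  # stash the resulting identity files in the Secret the router pod mounts
  kubectl --namespace ziti create secret generic my-router-identity \
    --from-file=/etc/ziti/config/certs/
fi
```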
While creating the Helm release for the Router, you can watch for the hook Job to appear and check its logs if it throws an error. I've just released Router chart 0.9.0 with a delayed exit on installation and upgrade lifecycle hook errors, so a failed hook Job sticks around long enough for you to read its logs.
Be sure to update:
helm repo update openziti
Check the hook Job's logs if one is still running because of a hook error (adjust the name to match your release). The Job exits and vanishes immediately if it succeeds.
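Something like this (the Job name below is a guess modeled on the pre-delete Job's naming; substitute whatever Job actually appears for your release):

```bash
# Watch for the hook Job's pod while the release installs or upgrades
kubectl --namespace ziti get jobs,pods --watch

# Then read the hook Job's logs
kubectl --namespace ziti logs job/ziti-router-gigo-internal-global1-post-install-job
```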
From what I can tell, the post-install hook never runs. Not sure what's happening.
Events
LAST SEEN TYPE REASON OBJECT MESSAGE
31m Normal Pulling pod/ziti-console-88d848c54-shk2g Pulling image "openziti/zac:2.9.0"
31m Normal Created pod/ziti-console-88d848c54-shk2g Created container ziti-console
31m Normal Started pod/ziti-console-88d848c54-shk2g Started container ziti-console
4m49s Warning Unhealthy pod/ziti-console-88d848c54-shk2g Liveness probe failed: Get "http://10.142.60.80:1408/login": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
31m Normal Killing pod/ziti-console-88d848c54-shk2g Container ziti-console failed liveness probe, will be restarted
31m Normal Pulled pod/ziti-console-88d848c54-shk2g Successfully pulled image "openziti/zac:2.9.0" in 760.177134ms (760.209154ms including waiting)
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
3m55s Normal Sync ingress/ziti-console Scheduled for sync
35s Normal Sync ingress/ziti-console Scheduled for sync
4m28s Warning NodeNotReady pod/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7 Node is not ready
94s Normal Pulled pod/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7 Container image "quay.io/jetstack/cert-manager-cainjector:v1.11.5" already present on machine
94s Normal Created pod/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7 Created container cert-manager-cainjector
94s Normal Started pod/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7 Started container cert-manager-cainjector
88s Normal TaintManagerEviction pod/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7 Cancelling deletion of Pod ziti/ziti-controller-cert-manager-cainjector-6d69b886dc-cdsk7
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
3m55s Normal Sync ingress/ziti-controller-client Scheduled for sync
36s Normal Sync ingress/ziti-controller-client Scheduled for sync
41m Warning FailedMount pod/ziti-router-gigo-internal-global1-5dbb5f7988-s9m2f MountVolume.SetUp failed for volume "ziti-router-identity" : secret "ziti-router-gigo-internal-global1-identity" not found
35m Warning FailedMount pod/ziti-router-gigo-internal-global1-5dbb5f7988-s9m2f Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[config-data ziti-router-config kube-api-access-ph6ff ziti-router-identity]: timed out waiting for the condition
51m Warning FailedMount pod/ziti-router-gigo-internal-global1-5dbb5f7988-s9m2f Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[ziti-router-identity config-data ziti-router-config kube-api-access-ph6ff]: timed out waiting for the condition
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
33m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
28m Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
36s Normal Sync ingress/ziti-router-gigo-internal-global1-edge Scheduled for sync
2m23s Normal Scheduled pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv Successfully assigned ziti/ziti-router-gigo-internal-global1-f9b44cd85-47lqv to gigo-worker-ab40529a-w9vjd
16s Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv MountVolume.SetUp failed for volume "ziti-router-identity" : secret "ziti-router-gigo-internal-global1-identity" not found
2m13s Warning FailedAttachVolume pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv AttachVolume.Attach failed for volume "pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c" : rpc error: code = Aborted desc = The PVC pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c in phase Pending is not ready to be attached
2m5s Normal SuccessfulAttachVolume pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv AttachVolume.Attach succeeded for volume "pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c"
2m2s Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv MountVolume.SetUp failed for volume "pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c" : rpc error: code = Aborted desc = The hot-plug volume pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c is not ready
21s Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-47lqv Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[ziti-router-identity config-data ziti-router-config kube-api-access-5xjpt]: timed out waiting for the condition
33m Warning FailedScheduling pod/ziti-router-gigo-internal-global1-f9b44cd85-dn99c 0/21 nodes are available: persistentvolumeclaim "ziti-router-gigo-internal-global1" is being deleted. preemption: 0/21 nodes are available: 21 No preemption victims found for incoming pod..
33m Warning FailedScheduling pod/ziti-router-gigo-internal-global1-f9b44cd85-dn99c 0/21 nodes are available: persistentvolumeclaim "ziti-router-gigo-internal-global1" not found. preemption: 0/21 nodes are available: 21 No preemption victims found for incoming pod..
33m Warning FailedScheduling pod/ziti-router-gigo-internal-global1-f9b44cd85-dn99c 0/21 nodes are available: persistentvolumeclaim "ziti-router-gigo-internal-global1" not found. preemption: 0/21 nodes are available: 21 No preemption victims found for incoming pod..
32m Warning FailedScheduling pod/ziti-router-gigo-internal-global1-f9b44cd85-dn99c skip schedule deleting pod: ziti/ziti-router-gigo-internal-global1-f9b44cd85-dn99c
29m Normal Scheduled pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 Successfully assigned ziti/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 to gigo-worker-ab40529a-hfqt2
12m Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 MountVolume.SetUp failed for volume "ziti-router-identity" : secret "ziti-router-gigo-internal-global1-identity" not found
28m Warning FailedAttachVolume pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 AttachVolume.Attach failed for volume "pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9" : rpc error: code = Aborted desc = The PVC pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9 in phase Pending is not ready to be attached
28m Normal SuccessfulAttachVolume pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 AttachVolume.Attach succeeded for volume "pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9"
28m Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 MountVolume.SetUp failed for volume "pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9" : rpc error: code = Aborted desc = The hot-plug volume pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9 is not ready
9m1s Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[ziti-router-identity config-data ziti-router-config kube-api-access-h8qvf]: timed out waiting for the condition
24m Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[config-data ziti-router-config kube-api-access-h8qvf ziti-router-identity]: timed out waiting for the condition
17m Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[ziti-router-config kube-api-access-h8qvf ziti-router-identity config-data]: timed out waiting for the condition
15m Warning FailedMount pod/ziti-router-gigo-internal-global1-f9b44cd85-gx6k2 Unable to attach or mount volumes: unmounted volumes=[ziti-router-identity], unattached volumes=[kube-api-access-h8qvf ziti-router-identity config-data ziti-router-config]: timed out waiting for the condition
33m Normal SuccessfulCreate replicaset/ziti-router-gigo-internal-global1-f9b44cd85 Created pod: ziti-router-gigo-internal-global1-f9b44cd85-dn99c
29m Normal SuccessfulCreate replicaset/ziti-router-gigo-internal-global1-f9b44cd85 Created pod: ziti-router-gigo-internal-global1-f9b44cd85-gx6k2
2m24s Normal SuccessfulCreate replicaset/ziti-router-gigo-internal-global1-f9b44cd85 Created pod: ziti-router-gigo-internal-global1-f9b44cd85-47lqv
35m Normal Scheduled pod/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk Successfully assigned ziti/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk to gigo-worker-ab40529a-hfqt2
35m Normal Pulling pod/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk Pulling image "docker.io/openziti/ziti-router:0.30.4"
35m Normal Pulled pod/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk Successfully pulled image "docker.io/openziti/ziti-router:0.30.4" in 9.642524779s (9.642548165s including waiting)
35m Normal Created pod/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk Created container pre-install-job
35m Normal Started pod/ziti-router-gigo-internal-global1-pre-delete-job-jqvtk Started container pre-install-job
32m Normal Scheduled pod/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl Successfully assigned ziti/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl to gigo-worker-ab40529a-w9vjd
32m Normal Pulling pod/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl Pulling image "docker.io/openziti/ziti-router:0.31.0"
32m Normal Pulled pod/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl Successfully pulled image "docker.io/openziti/ziti-router:0.31.0" in 8.396640338s (8.396659618s including waiting)
32m Normal Created pod/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl Created container pre-install-job
32m Normal Started pod/ziti-router-gigo-internal-global1-pre-delete-job-mq9jl Started container pre-install-job
5m34s Normal Scheduled pod/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 Successfully assigned ziti/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 to gigo-worker-ab40529a-hfqt2
5m34s Normal Pulling pod/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 Pulling image "docker.io/openziti/ziti-router:0.31.0"
5m26s Normal Pulled pod/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 Successfully pulled image "docker.io/openziti/ziti-router:0.31.0" in 8.063878514s (8.063903882s including waiting)
5m26s Normal Created pod/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 Created container pre-install-job
5m26s Normal Started pod/ziti-router-gigo-internal-global1-pre-delete-job-rwdt4 Started container pre-install-job
35m Normal SuccessfulCreate job/ziti-router-gigo-internal-global1-pre-delete-job Created pod: ziti-router-gigo-internal-global1-pre-delete-job-jqvtk
35m Normal Completed job/ziti-router-gigo-internal-global1-pre-delete-job Job completed
32m Normal SuccessfulCreate job/ziti-router-gigo-internal-global1-pre-delete-job Created pod: ziti-router-gigo-internal-global1-pre-delete-job-mq9jl
32m Normal Completed job/ziti-router-gigo-internal-global1-pre-delete-job Job completed
5m35s Normal SuccessfulCreate job/ziti-router-gigo-internal-global1-pre-delete-job Created pod: ziti-router-gigo-internal-global1-pre-delete-job-rwdt4
5m14s Normal Completed job/ziti-router-gigo-internal-global1-pre-delete-job Job completed
33m Normal ScalingReplicaSet deployment/ziti-router-gigo-internal-global1 Scaled up replica set ziti-router-gigo-internal-global1-f9b44cd85 to 1
29m Normal ExternalProvisioning persistentvolumeclaim/ziti-router-gigo-internal-global1 waiting for a volume to be created, either by external provisioner "driver.harvesterhci.io" or manually created by system administrator
29m Normal Provisioning persistentvolumeclaim/ziti-router-gigo-internal-global1 External provisioner is provisioning volume for claim "ziti/ziti-router-gigo-internal-global1"
29m Normal ProvisioningSucceeded persistentvolumeclaim/ziti-router-gigo-internal-global1 Successfully provisioned volume pvc-a9bcfb66-0df1-47bd-a7f9-f240559cbcd9
29m Normal ScalingReplicaSet deployment/ziti-router-gigo-internal-global1 Scaled up replica set ziti-router-gigo-internal-global1-f9b44cd85 to 1
2m24s Normal ExternalProvisioning persistentvolumeclaim/ziti-router-gigo-internal-global1 waiting for a volume to be created, either by external provisioner "driver.harvesterhci.io" or manually created by system administrator
2m24s Normal Provisioning persistentvolumeclaim/ziti-router-gigo-internal-global1 External provisioner is provisioning volume for claim "ziti/ziti-router-gigo-internal-global1"
2m24s Normal ProvisioningSucceeded persistentvolumeclaim/ziti-router-gigo-internal-global1 Successfully provisioned volume pvc-bbf4f27f-3bbf-4fdf-a685-0819cbc7ce6c
2m24s Normal ScalingReplicaSet deployment/ziti-router-gigo-internal-global1 Scaled up replica set ziti-router-gigo-internal-global1-f9b44cd85 to 1
Are you perchance wrapping the helm CLI with something like an Ansible module or a Terraform provider? I've noticed that some wrappers don't fire the hooks unless wait=false, which isn't intuitive, and I don't fully understand why it's necessary. There's a long-running GitHub issue about it.
The workaround is to unset wait and atomic when creating or upgrading the Helm release. The helm CLI's defaults work fine, but when wrapping Helm I've noticed the hooks don't fire without that adjustment. Let me know if you have any ideas for making this more robust.
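For comparison, installing with the plain helm CLI and its defaults (wait=false, atomic=false) lets the post-install hook Job run and create the identity Secret; a sketch (release name, namespace, and values file are placeholders):

```bash
helm upgrade --install my-router openziti/ziti-router \
  --namespace ziti \
  --values router-values.yaml
```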
EDIT: It's starting to make sense why we're seeing what we're seeing. The Helm CLI default is wait=false, and the post-install and post-upgrade hooks never get to run when wait=true: Helm only runs those hook Jobs after the main app pod is "ready," and the pod can't become ready until the hook has run and created the identity Secret, so it's a pickle. Maybe those hooks need to be pre-install and pre-upgrade. I don't recall what forced me to use post-* hooks at the time, but the constraint may have only applied to certain charts, not all. I'll investigate.
I'm running it via Rancher, so I'm sure wait is set to true. I'll try overriding it or running helm manually. I never would have guessed that, because the controller and console installed just fine with Rancher's Helm wrapper.
Still working on it, but as an aside, one of my issues was the controller ingress config. The SSL passthrough needed one more annotation to use the secure backend: nginx.ingress.kubernetes.io/backend-protocol=https
The Controller chart has a sample values.yaml for Nginx Ingress that worked with some version, so it would be great for those samples to be more robust.
The particular annotation you added configures Nginx to use TLS for the reverse-proxy backend, no? If so, I doubt it will work in the end, because the traffic must pass through as TLS (an L4 TCP proxy).
Interesting, does the current configuration set up an L4 proxy? On my RKE2 cluster it doesn't. I can configure it; I just didn't want to because it's a more involved configuration.
Yes, the sample values for Nginx Ingress represent a combination of annotations that I found necessary to configure ingress-nginx for passthrough TLS, i.e. L4 TCP proxy.
I've definitely used those annotations on some version of RKE2 that I provisioned with Rancher. I was using the Harvester cloud provider/node driver to provide the LB from an IP pool, but alas, some other version of RKE2 or provider may not work the same way. It's a weakness of Ingress that L4 TCP proxy is not a primitive, and all hail our savior, Gateway API.
The reason it needs passthrough TLS, not backend TLS, is that at the last stage of your deployment your Ziti Edge Identities will try to authenticate with the Controller's Client API TLS Service using mTLS by presenting the client certificate they obtained during "edge enrollment."
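For reference, passthrough TLS with ingress-nginx needs the controller to run with --enable-ssl-passthrough and the Ingress to carry the passthrough annotation, rather than backend-protocol=HTTPS (which terminates TLS at Nginx and re-encrypts to the backend). A sketch, using the Ingress name from your events:

```bash
# requires the ingress-nginx controller to be started with --enable-ssl-passthrough
kubectl --namespace ziti annotate ingress ziti-controller-client \
  nginx.ingress.kubernetes.io/ssl-passthrough=true --overwrite
```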
I'm not entirely sure how the nginx config is working here, but I can say that I am getting the controller's cert from the config. To the best of my knowledge, the only way to do SSL passthrough is L4, so maybe it worked.
Cool. Let's get it working, and if the best way to get L4 passthrough proves to be the nginx.ingress.kubernetes.io/backend-protocol=https annotation, then I'll update the Nginx sample values.
Since you're using Rancher, would you be likely to use a Rancher app catalog launcher for Ziti Controller or Ziti Router (or both), or would you prefer less magic? I've only placed a simple Ziti reverse proxy in the Rancher catalog so far, but an opinionated launcher is probably feasible if we assume RKE2 was provisioned by Rancher with the default ingress-nginx enabled. I'm unsure how much flexibility the catalog gives me for pivoting the values based on the target cluster's parameters.