Deploying router on k8s via helm gives identity secret mount error

When the post job runs, I get the following error from this command:

ziti router enroll /etc/ziti/config/ziti-router.yaml --jwt /etc/ziti/config/enrollment.jwt --verbose

FATAL ziti/router/enroll.(*RestEnroller).Enroll: {cause=[token signature is invalid: crypto/rsa: verification error]} failed to parse JWT

I've logged into the pod and I see that the enrollment.jwt file is the correct token I got from the console UI. Am I supposed to use a different token than the one I retrieved from this page?

Yes, the enrollment JWT from the Ziti Router detail page pictured is the correct token for the Router chart's values.enrollmentJwt.
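For reference, here's a sketch of passing that token at install time with --set-file (the release name, namespace, and ctrl.endpoint value are illustrative, not from this thread):

# a sketch, assuming the openziti Helm repo is already added;
# ctrl.endpoint must point at your controller's ctrl plane address
helm upgrade --install ziti-router openziti/ziti-router \
    --namespace ziti \
    --set-file enrollmentJwt=./enrollment.jwt \
    --set ctrl.endpoint=ziti-controller-ctrl.ziti.svc:443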

The error points to a JWT signature verification failure, which is what I'd expect if Nginx is terminating TLS instead of passing it through to the Controller's Client API.

The token contains an iss (issuer) claim whose value is the Controller's advertised Client API URL. During enrollment, the router fetches the Client API's server TLS pubkey, which corresponds to the private key that signed the token.

When Nginx terminates TLS, the pubkey the router "sees" is Nginx's TLS server pubkey, not the Client API's, so signature verification always fails in this case.

Here's an example of what the router's enrollment token contains.

    "header": {
        "alg": "RS256",
        "kid": "f6b615a0004ec8a20792566822586d7dd5a912ff",
        "typ": "JWT"
    },
    "payload": {
        "iss": "https://miniziti-controller.192.168.49.2.sslip.io:443",
        "sub": "fJg4Q4662",
        "aud": [
            ""
        ],
        "exp": 1700355837,
        "jti": "d8d548e4-e33a-4f3a-bde1-aeffe6f6f1c3",
        "em": "erott"
    },

link to ziti-jwt.py script
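If you want to confirm who is terminating TLS, you can probe the iss address and inspect the presented server certificate. A quick sketch with openssl (hostname taken from the example token above; substitute your controller's advertised address):

# fetch and summarize the server certificate presented at the Client API address
openssl s_client -connect miniziti-controller.192.168.49.2.sslip.io:443 \
    -servername miniziti-controller.192.168.49.2.sslip.io </dev/null 2>/dev/null \
| openssl x509 -noout -subject -issuer

If the subject and issuer belong to an ingress certificate rather than the Ziti Controller's own PKI, then Nginx is terminating TLS.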

Okay, back to the drawing board... Would it be a terrible idea to just host the controller on a VM provisioned from our Harvester cluster and then configure the routers inside k8s?
It would give me a lot more flexibility with the networking.

I know what you mean about Harvester networking. It's complicated.

I just remembered what I had to do with RKE2. When I provisioned RKE2 nodes with Rancher on Harvester, I had to modify the built-in Nginx Ingress values.

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      extraArgs:
        enable-ssl-passthrough: true
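With the controller flag enabled, each Ingress that should bypass termination also needs the passthrough annotation. Here's a sketch for the Controller's Client API (the host, names, and namespace are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ziti-controller-client
  namespace: ziti
  annotations:
    # pass TLS through to the backend; routing is then done by SNI
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: miniziti-controller.192.168.49.2.sslip.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: ziti-controller-client
                port:
                  number: 443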

Then I deployed a LoadBalancer Service to handle Ingresses with IngressClass nginx:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-loadbalancer
  namespace: kube-system
  annotations: {}
spec:
  selector:
    app.kubernetes.io/name: rke2-ingress-nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
    - protocol: TCP
      port: 443
      targetPort: 443
      name: https
  type: LoadBalancer

In Harvester, I configured an LB pool so that Harvester itself would provide the LB VIP from that IP address pool.
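To confirm a VIP was actually assigned from that pool, check the Service's external IP:

# the EXTERNAL-IP column should show a VIP from the Harvester LB pool
kubectl get service ingress-nginx-loadbalancer --namespace kube-system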

IIRC, the deployment is in the kube-system namespace for RKE2.

Inspect the ingress-nginx controller's command args to verify TLS passthrough is enabled:

kubectl get deployment "ingress-nginx-controller" \
    --namespace kube-system \
    --output go-template='{{ (index .spec.template.spec.containers 0).args }}'
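If passthrough is on, the printed args should include --enable-ssl-passthrough; that's how the extraArgs entry from the HelmChartConfig above is rendered.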

I've given up trying to expose the controller to the outside network and have accepted that I'll run entirely in k8s. Maybe I'll change it one day, but I need to get moving. So now I'm trying to deploy the controller, and I'm stuck on a problem I had before that kind of just magically fixed itself.

Trust Manager stalls on deployment with this error:

MountVolume.SetUp failed for volume "tls" : secret "trust-manager-tls" not found

Self-hosting Ziti Controller and Ziti Router in K8s without exposing them at the edge will work as long as all the clients/users are also in the cluster.

Ensure Trust Manager's "trust namespace" is the cluster namespace where you installed the Ziti Controller.

If you are installing Trust Manager as a sub-chart, set values.app.trust.namespace to the Ziti Controller's namespace if it is not the default, ziti.
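As a sketch, in Trust Manager's own values that looks like the following (when set through a parent chart, the same keys nest under the dependency's name):

app:
  trust:
    # must match the namespace where the Ziti Controller is installed
    namespace: ziti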

If you are installing Trust Manager in advance of the Ziti Controller, not as a sub-chart, then ensure its --trust-namespace argument matches the Ziti Controller's namespace:

--trust-namespace=ziti
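Here's a sketch of that standalone case, setting the argument via the chart value and then verifying it on the running deployment (the repo, release name, and namespace are assumptions):

# install Trust Manager with its trust namespace pointed at the controller's
helm upgrade --install trust-manager jetstack/trust-manager \
    --namespace cert-manager --create-namespace \
    --set app.trust.namespace=ziti

# verify the rendered argument, mirroring the args check used above
kubectl get deployment trust-manager --namespace cert-manager \
    --output go-template='{{ (index .spec.template.spec.containers 0).args }}'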

I'm dumb... I read the first line, which was an error trap, not an actual error.

Was it Trust Manager's "trust namespace" or something else?

Hi,
thanks for your brilliant remark!
I guess that's what causes the exact same symptom on my side when installing this Helm chart via GitOps (ArgoCD). Let's see if I'm able to "convince" ArgoCD to perform this hook; otherwise I'll need to switch to the CLI and create the Argo app afterwards.