Helm v4 Incompatibility, Ziti-Router Unresponsive

I upgraded networking equipment, and both ziti-controller and ziti-router (running in Kubernetes) threw a fit.

I found that there is a bug/incompatibility with Helm version 4.

When installing the helm chart, debugging shows this:

level=DEBUG msg="Error creating resource via patch" namespace=ziti name=ziti-controller gvk="apps/v1, Kind=Deployment" error="failed to create typed patch object (ziti/ziti-controller; apps/v1, Kind=Deployment): .spec.template.spec.containers[name=\"ziti-controller\"].ports: duplicate entries for key [containerPort=1280,protocol=\"TCP\"]"
level=WARN msg="upgrade failed" name=ziti-controller error="failed to create resource: failed to create typed patch object (ziti/ziti-controller; apps/v1, Kind=Deployment): .spec.template.spec.containers[name=\"ziti-controller\"].ports: duplicate entries for key [containerPort=1280,protocol=\"TCP\"]"
Error: UPGRADE FAILED: failed to create resource: failed to create typed patch object (ziti/ziti-controller; apps/v1, Kind=Deployment): .spec.template.spec.containers[name="ziti-controller"].ports: duplicate entries for key [containerPort=1280,protocol="TCP"]
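Before retrying an upgrade, the rendered manifest can be checked for the duplicate port declarations that Helm v4 rejects. This is only a sketch: the inlined manifest fragment below is hypothetical, and in practice you would pipe the output of `helm template <release> <chart> --values <file>` in its place. The grep is deliberately naive (it would also flag the same port on two different containers, which is legal), so treat any hit as a prompt to inspect the manifest, not as proof of a bug.

```shell
# Sketch: surface repeated containerPort lines in a rendered manifest.
# Replace the heredoc with real output from:
#   helm template <release> <chart> --values <file>
cat <<'EOF' | grep 'containerPort:' | sort | uniq -d
    ports:
      - containerPort: 1280
        protocol: TCP
      - containerPort: 1280
        protocol: TCP
EOF
```

Any line printed by `uniq -d` is a port declared more than once; here the duplicated `containerPort: 1280` line is printed.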

I downgraded to Helm version 3.19.2, which allowed the controller chart to install.
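Until the chart is fixed for Helm v4, a small guard in an upgrade script can refuse to run under the wrong major version. A minimal sketch; the version string is hard-coded here for illustration, but in a real script it would come from `helm version --template '{{.Version}}'`:

```shell
# Minimal sketch of a Helm major-version guard for an upgrade script.
# Real usage: ver=$(helm version --template '{{.Version}}')
ver="v3.19.2"  # hard-coded stand-in for illustration
case "$ver" in
  v3.*) echo "Helm 3 detected: $ver" ;;
  *)    echo "Refusing to run with $ver; pin Helm 3.x" >&2; exit 1 ;;
esac
```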

I re-enrolled my controller, and installed the chart with this command:

helm upgrade --install openziti/ziti-router --namespace ziti --create-namespace --debug --version 2.1.0 --values values-router-2.1.0.yaml --set-file enrollmentJwt=/tmp/main-router.jwt

The pod starts, but only shows this in the logs:

INFO: config file exists in /etc/ziti/config/ziti-router.yaml
{"file":"github.com/openziti/ziti/router/env/config_edge.go:398","func":"github.com/openziti/ziti/router/env.(*EdgeConfig).loadCsr","level":"info","msg":"loaded csr info from configuration file at path [edge.csr]","time":"2026-03-19T03:43:05.673Z"}

After a short while, the liveness probe fails, and the pod restarts.

I see nothing in the controller logs from the router trying to connect.

Hey @thedarkula! I recognize that error. There was a bug in some versions of the ziti-controller chart that caused the pod spec to have an illegal, redundant declaration of the same server port on the container; Kubernetes allows only one per containerPort/protocol pair.

If you're running the ctrlPlane and clientApi with the same advertisedAddress and advertisedPort (that's been the default for some time: usually you set only clientApi and let ctrlPlane piggy-back on clientApi's cluster service, ingress, etc.), then you can work around it by disabling the redundant ClusterIP service with the input value ctrlPlane.service.enabled=false.
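Expressed as a values-file fragment rather than a `--set` flag (the key path is taken from the workaround above, so verify it against your chart version's values schema):

```yaml
# Disable the redundant ctrlPlane ClusterIP service when ctrlPlane
# piggy-backs on clientApi's advertisedAddress/advertisedPort.
ctrlPlane:
  service:
    enabled: false
```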

I'd be very interested to know if the latest, stable Helm Chart still has this issue.

Additionally, Ziti 2.0.0 prereleases are available in the Helm repo, so you can get a jump on that upgrade step. The main difference in the new controller chart, major version 3 (with Ziti v2), is the requirement to specify your intent for controller clustering, e.g., cluster.mode=standalone if you do not wish to migrate the controller datastore to HA cluster mode.
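As a values-file fragment (the key path follows the cluster.mode=standalone setting mentioned above; check it against the v3 chart's values):

```yaml
# Controller chart v3 (Ziti v2) requires declaring clustering intent.
# 'standalone' opts out of migrating the datastore to HA cluster mode.
cluster:
  mode: standalone
```

Equivalently, pass `--set cluster.mode=standalone` on the helm command line; note that prerelease chart versions are only found with `--devel` or an explicit `--version`.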

❯ helm repo update openziti
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "openziti" chart repository
Update Complete. ⎈Happy Helming!⎈

❯ helm search repo "openziti/ziti-controller"
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
openziti/ziti-controller        3.1.1           1.7.2           Host an OpenZiti controller in Kubernetes

❯ helm search repo "openziti/ziti-controller" --devel
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
openziti/ziti-controller        3.2.0-pre4      2.0.0-pre5      Host an OpenZiti controller in Kubernetes

Wonderful! I am working on the upgrades now.

I did see this note in the new controller values:

# DEPRECATION NOTICE: The separate web-binding root CA is deprecated.
# Web certificates are now issued from the edge-signer-issuer (unified PKI).
# The webBindingPki.alternativeIssuer option is preserved for backward compatibility
# but the web-root-cert and web-root-issuer resources are orphaned.
webBindingPki:

I do have this section from the 2.x.x chart version:

webBindingPki:
  # -- generate a separate PKI root of trust for web bindings, i.e., client,
  # management, and prometheus APIs
  enabled: true
  altServerCerts:
    zac:
      # -- request an alternative server certificate from a cert-manager issuer
      mode: certManager
      # -- request a certificate for these alternative names, distinct from
      #    advertisedHost and the dnsNames of the clientApi, managementApi, and ctrlPlane
      altDnsNames:
        - zac.domain.com
      # -- name of the tls secret for cert-manager to create and manage the
      #    server certificate and private key
      secretName: ziti-controller-zac-alt-server-cert
      # -- issuer ref to use when requesting the alternative server certificate
      issuerRef:
        group: cert-manager.io
        kind: ClusterIssuer
        name: letsencrypt-production
      # -- where to mount the tls secret on the pod - must not collide with another mountpoint
      mountPath: /etc/ziti/alt-server-cert-zac
    mgmt:
      # -- request an alternative server certificate from a cert-manager issuer
      mode: certManager
      # -- request a certificate for these alternative names, distinct from
      #    advertisedHost and the dnsNames of the clientApi, managementApi, and ctrlPlane
      altDnsNames:
        - ziti-mgmt.domain.com
      # -- name of the tls secret for cert-manager to create and manage the
      #    server certificate and private key
      secretName: ziti-controller-mgmt-alt-server-cert
      # -- issuer ref to use when requesting the alternative server certificate
      issuerRef:
        group: cert-manager.io
        kind: ClusterIssuer
        name: letsencrypt-production
      # -- where to mount the tls secret on the pod - must not collide with another mountpoint
      mountPath: /etc/ziti/alt-server-cert-mgmt

What is the recommended way to incorporate this in version 3.x.x?

For anyone running into the conflicting ports issue, my fix to keep everything separate is this:

ctrlPlane:
  # -- cluster service target port on the container
  # containerPort: "{{ .Values.clientApi.containerPort }}"
  containerPort: 1283

Using the new 3.0.0-pre4 router chart, in the values file I changed only this:

edge:
  advertisedHost: ziti-router.domain.com

I re-enrolled the router in the controller.

When I install the router chart, the pod logs only show this:

INFO: config file exists in /etc/ziti/config/ziti-router.yaml
{"file":"github.com/openziti/ziti/v2/router/env/config_edge.go:398","func":"github.com/openziti/ziti/v2/router/env.(*EdgeConfig).loadCsr","level":"info","msg":"loaded csr info from configuration file at path [edge.csr]","time":"2026-03-19T17:20:11.928Z"}
{"file":"github.com/openziti/ziti/v2/router/env/config_edge.go:329","func":"github.com/openziti/ziti/v2/router/env.parseEdgeListenerOptions","level":"info","msg":"advertised port [0] in [listeners[443].options.advertise] does not match the listening port [0] in [listeners[3022].address].","time":"2026-03-19T17:20:11.928Z"}

Describing the router pod shows these failures:

  Warning  Unhealthy  6m9s                    kubelet            spec.containers{ziti-router}: Readiness probe failed:
  Warning  Unhealthy  5m19s (x10 over 7m19s)  kubelet            spec.containers{ziti-router}: Liveness probe failed: Error: no processes found matching filter, use 'ziti agent list' to list candidates

I do not see any attempt to connect from the router in the controller logs.