I’m currently running the Ziti Controller (v1.1.15) on Kubernetes, deployed via the official Helm chart (ziti-controller-1.1.5).
Prometheus and Grafana are already set up and working fine in my environment.
I’d like to enable Prometheus metrics exposure from the controller so I can scrape and visualize Ziti metrics in my existing Prometheus stack.
What I’ve Tried
I attempted to enable metrics by adding the following section to my controller.yaml values file:
```yaml
prometheus:
  service:
    enabled: true
    type: ClusterIP
    labels:
      app: "prometheus"
      component: "ziti-controller-rd"
  serviceMonitor:
    enabled: false

additionalConfigs:
  web:
    - name: prometheus-metrics
      bindPoints:
        - interface: 0.0.0.0:9090
          address: ziti-controller-rd-prometheus.ziti-console.svc.cluster.local:443
      identity:
        cert: /etc/ziti/web-client-identity/tls.crt
        key: /etc/ziti/web-client-identity/tls.key
        server_cert: /etc/ziti/web-identity/tls.crt
        server_key: /etc/ziti/web-identity/tls.key
        ca: /etc/ziti/web-identity/ca.crt
      apis:
        - binding: metrics
          options: {}
        - binding: health-checks
          options: {}
```
Then I ran:

```shell
helm upgrade ziti-controller-rd openziti/ziti-controller \
  --namespace ziti-console \
  --version 1.1.5 \
  --values controller.yaml
```
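To see what the chart actually renders from these values before anything reaches the cluster, I rendered it locally and searched for the extra web listener (the grep target is just the listener name from my values above):

```shell
# Render the chart locally with the same values to inspect the generated
# controller config (no changes are applied to the cluster).
helm template ziti-controller-rd openziti/ziti-controller \
  --namespace ziti-console \
  --version 1.1.5 \
  --values controller.yaml > rendered.yaml

# Look for the extra web listener and its identity block in the rendered output.
grep -n -B2 -A12 "prometheus-metrics" rendered.yaml
```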
Observed Behavior
After enabling the Prometheus configuration, the Ziti Controller pod fails to start, showing the error below (the exact output varies slightly depending on the configuration attempt).
Error Case 1 – Nil pointer dereference
When Prometheus was enabled via Helm values:
```text
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x196baba]
github.com/openziti/identity.(*Config).ValidateWithPathContext(0x0, ...)
github.com/openziti/xweb/v2.parseIdentityConfig(...)
...
github.com/openziti/ziti/controller.(*Controller).Run(...)
```

This occurred right after:

```text
{"level":"info","msg":"starting edge"}
{"level":"info","msg":"edge initialized"}
```
Removing the Prometheus section entirely allows the controller to start normally again.
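For completeness, this is roughly how I captured the panic and checked whether the cert paths referenced in the identity block are actually mounted in the pod (the deployment name is assumed to match the release name):

```shell
# Grab the crash log from the previous (crashed) container instance.
kubectl -n ziti-console logs deployment/ziti-controller-rd --previous | tail -n 40

# Check whether the files referenced under identity: exist inside the pod.
kubectl -n ziti-console exec deployment/ziti-controller-rd -- \
  ls -l /etc/ziti/web-client-identity /etc/ziti/web-identity
```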
Expected Behavior
The controller should start successfully and expose a Prometheus metrics endpoint on port 9090 (or similar), so that it can be scraped by my existing Prometheus deployment — without requiring a separate Prometheus subchart installation.
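For context, once the metrics listener works, this is the kind of ServiceMonitor I would expect to point my existing Prometheus Operator at it. It is only a sketch: the service name, named port, labels, and the insecureSkipVerify choice are assumptions based on my values above, not anything the chart documents as far as I know.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ziti-controller-rd-metrics
  namespace: ziti-console
  labels:
    release: prometheus            # assumed: whatever label my Prometheus instance selects on
spec:
  selector:
    matchLabels:
      app: prometheus              # assumed: the labels set on the chart's prometheus service
      component: ziti-controller-rd
  namespaceSelector:
    matchNames:
      - ziti-console
  endpoints:
    - port: prometheus             # assumed: the named port on the rendered service
      scheme: https
      path: /metrics
      tlsConfig:
        insecureSkipVerify: true   # assumed: skip cert verification for a first test
```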
Environment
| Component | Version |
|---|---|
| Ziti Controller | 1.1.15 |
| Helm Chart | ziti-controller-1.1.5 |
| Kubernetes | 1.29.x (GKE) |
| Prometheus | existing in-cluster deployment (Prometheus Operator) |
Question
@qrkourier (not sure whom else to tag), could you help clarify:

- Whether Prometheus metrics exposure is officially supported in chart version 1.1.5,
- The correct Helm values or annotations required to enable metrics scraping from an existing Prometheus Operator setup, and
- Whether a dedicated Ziti identity or listener is required for the Prometheus endpoint (e.g., web-client-identity).
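For reference, this is roughly how I would verify the endpoint once the listener comes up. The service name and port come from the bindPoint address in my values; whether the metrics binding also requires a client certificate is exactly one of the things I am unsure about.

```shell
# Port-forward the prometheus service from my values and fetch /metrics.
kubectl -n ziti-console port-forward svc/ziti-controller-rd-prometheus 9090:443 &

# -k because the listener presents the controller's web cert; if the metrics
# binding requires a client cert, --cert/--key would be needed here as well.
curl -sk https://localhost:9090/metrics | head -n 20
```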