Hi community! I've been trying for a few days to set up zrok on my VPS, which is mostly IPv6-only. I need to proxy my domain through Cloudflare so it points to my public IPv6 address, and I'm really new to IPv6. Has anyone done a full configuration specifically for IPv6, and does it work end to end?
Hi @mrPatriko, welcome to the community and to zrok (and OpenZiti/BrowZer)!
Could you clarify: are you running zrok on the VPS, maybe exposing a port or two, or are you trying to fully self-host an environment out there?
I've not done much with IPv6 myself, so I can't say one way or the other, other than we'd "expect" things to work.
Let us know which route you're going there -- running zrok with the SaaS zrok.io, or self-hosting a whole install of your own?
I'm trying to run Zrok fully self-hosted using Docker Compose. I followed the guide for Docker, the containers start correctly, but there was a problem with resolving 127.* IP addresses and 0.0.0.0. I believe this is because I'm using only IPv6. So, I changed every instance of 0.0.0.0 to [::] and 127.0.0.1 to [::1] in the Zrok files. However, now the Zrok controller keeps restarting.
```
zrok-controller-1 | goroutine 1 [running]:
zrok-controller-1 | main.(*adminBootstrap).run(0xc001907080, 0xc00189fbb0?, {0xc0003448c0?, 0x0?, 0x0?})
zrok-controller-1 | /home/runner/work/zrok/zrok/cmd/zrok/adminBootstrap.go:36 +0x105
zrok-controller-1 | github.com/spf13/cobra.(*Command).execute(0xc00197e608, {0xc000344880, 0x2, 0x2})
zrok-controller-1 | /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xa91
zrok-controller-1 | github.com/spf13/cobra.(*Command).ExecuteC(0x629c3e0)
zrok-controller-1 | /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
zrok-controller-1 | github.com/spf13/cobra.(*Command).Execute(...)
zrok-controller-1 | /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
zrok-controller-1 | main.main()
zrok-controller-1 | /home/runner/work/zrok/zrok/cmd/zrok/main.go:108 +0x1a
zrok-controller-1 | + for arg in "${@}"
zrok-controller-1 | + [[ ! zrok =~ ^- ]]
zrok-controller-1 | + [[ -s zrok ]]
zrok-controller-1 | + for arg in "${@}"
zrok-controller-1 | + [[ ! controller =~ ^- ]]
zrok-controller-1 | + [[ -s controller ]]
zrok-controller-1 | + for arg in "${@}"
zrok-controller-1 | + [[ ! /etc/zrok-controller/config.yml =~ ^- ]]
zrok-controller-1 | + [[ -s /etc/zrok-controller/config.yml ]]
zrok-controller-1 | + CONFIG=/etc/zrok-controller/config.yml
zrok-controller-1 | + break
zrok-controller-1 | + [[ -z /etc/zrok-controller/config.yml ]]
zrok-controller-1 | + zrok admin bootstrap --skip-frontend /etc/zrok-controller/config.yml
zrok-controller-1 | panic: error loading controller config '/etc/zrok-controller/config.yml': error parsing yaml [/etc/zrok-controller/config.yml]: yaml: line 13: did not find expected node content
```

```
zrok-frontend-1 | curl: (6) Could not resolve host: zrok-controller
zrok-frontend-1 | DEBUG: waiting for zrok controller API version endpoint to respond
```
This actually makes me think you updated the YAML incorrectly. Can you share the actual YAML file, or at least a snippet of it?
That may be true, but I can't find what's wrong.
Here are my configs. Other files are untouched.
**cat compose.yml**

```yaml
services:
  ziti-quickstart:
    image: ${ZITI_CLI_IMAGE:-docker.io/openziti/ziti-cli}:${ZITI_CLI_TAG:-latest}
    restart: unless-stopped
    networks:
      zrok-instance:
        # this allows other containers to use the same external DNS name to reach the quickstart container from within the
        # Docker network that clients outside the Docker network use to reach the quickstart container via port forwarding
        aliases:
          - ziti.${ZROK_DNS_ZONE}
    entrypoint:
      - bash
      - -euc
      - |
        ZITI_CMD+=" --ctrl-address ziti.${ZROK_DNS_ZONE}"\
        " --ctrl-port ${ZITI_CTRL_ADVERTISED_PORT:-1280}"\
        " --router-address ziti.${ZROK_DNS_ZONE}"\
        " --router-port ${ZITI_ROUTER_PORT:-3022}"\
        " --password ${ZITI_PWD:-admin}"
        echo "DEBUG: run command is: ziti $${@} $${ZITI_CMD}"
        exec ziti "$${@}" $${ZITI_CMD}
    command: -- edge quickstart --home /home/ziggy/quickstart
    user: ${ZIGGY_UID:-1000}
    environment:
      HOME: /home/ziggy
      PFXLOG_NO_JSON: "${PFXLOG_NO_JSON:-true}"
      ZITI_ROUTER_NAME: ${ZITI_ROUTER_NAME:-quickstart-router}
    volumes:
      # store the quickstart state in a named volume "ziti_home" or store the quickstart state on the Docker host in a
      # directory, ZITI_HOME
      - ${ZITI_HOME:-ziti_home}:/home/ziggy
    ports:
      - ${ZITI_INTERFACE:-[::]}:${ZITI_CTRL_ADVERTISED_PORT:-1280}:${ZITI_CTRL_ADVERTISED_PORT:-1280}
      - ${ZITI_INTERFACE:-[::]}:${ZITI_ROUTER_PORT:-3022}:${ZITI_ROUTER_PORT:-3022}
    expose:
      - ${ZITI_CTRL_ADVERTISED_PORT:-1280}
      - ${ZITI_ROUTER_PORT:-3022}
    depends_on:
      ziti-quickstart-init:
        condition: service_completed_successfully
    healthcheck:
      test:
        - CMD
        - ziti
        - agent
        - stats
      interval: 3s
      timeout: 3s
      retries: 5
      start_period: 30s

  # this service is used to initialize the ziti_home volume by setting the owner to the UID of the user running the
  # quickstart container
  ziti-quickstart-init:
    image: busybox
    command: chown -Rc ${ZIGGY_UID:-1000} /home/ziggy
    user: root
    environment:
      HOME: /home/ziggy
    volumes:
      # store the quickstart state in a named volume "ziti_home" or store the quickstart state on the Docker host in a
      # directory, ZITI_HOME
      - ${ZITI_HOME:-ziti_home}:/home/ziggy

  # add a health check for the quickstart network
  ziti-quickstart-check:
    image: busybox
    command: echo "Ziti is cooking"
    depends_on:
      ziti-quickstart:
        condition: service_healthy

  zrok-permissions:
    image: busybox
    command:
      - /bin/sh
      - -euxc
      - |
        chown -Rc ${ZIGGY_UID:-2171} /var/lib/zrok-*;
        chmod -Rc ug=rwX,o-rwx /var/lib/zrok-*;
    volumes:
      - zrok_ctrl:/var/lib/zrok-controller
      - zrok_frontend:/var/lib/zrok-frontend

  zrok-controller:
    depends_on:
      zrok-permissions:
        condition: service_completed_successfully
    build:
      context: .
      dockerfile: ./zrok-controller.Dockerfile
      args:
        ZROK_CLI_IMAGE: ${ZROK_CLI_IMAGE:-openziti/zrok}
        ZROK_CLI_TAG: ${ZROK_CLI_TAG:-latest}
        ZROK_DNS_ZONE: ${ZROK_DNS_ZONE}
        ZITI_CTRL_ADVERTISED_PORT: ${ZITI_CTRL_ADVERTISED_PORT:-1280}
        ZROK_ADMIN_TOKEN: ${ZROK_ADMIN_TOKEN}  # zrok controller admin password
        ZROK_CTRL_PORT: ${ZROK_CTRL_PORT:-18080}
        ZITI_PWD: ${ZITI_PWD}  # ziti controller admin password
    user: ${ZIGGY_UID:-2171}
    command: zrok controller /etc/zrok-controller/config.yml --verbose
    volumes:
      - zrok_ctrl:/var/lib/zrok-controller
    networks:
      zrok-instance:
        aliases:
          - zrok.${ZROK_DNS_ZONE}
    restart: unless-stopped
    expose:
      - ${ZROK_CTRL_PORT:-18080}  # (not published)
    ports:
      - ${ZROK_INSECURE_INTERFACE:-[::1]}:${ZROK_CTRL_PORT:-18080}:${ZROK_CTRL_PORT:-18080}
    environment:
      ZROK_USER_PWD: ${ZROK_USER_PWD}  # admin account password (initial user account)
      ZROK_USER_EMAIL: ${ZROK_USER_EMAIL}  # login email address (initial user account)
      ZROK_ADMIN_TOKEN: ${ZROK_ADMIN_TOKEN}  # zrok controller admin password
      ZROK_API_ENDPOINT: http://zrok-controller:${ZROK_CTRL_PORT:-18080}  # bridge address of the zrok controller

  zrok-frontend:
    depends_on:
      zrok-permissions:
        condition: service_completed_successfully
    build:
      context: .
      dockerfile: zrok-frontend.Dockerfile
      args:
        ZROK_CLI_IMAGE: ${ZROK_CLI_IMAGE:-openziti/zrok}
        ZROK_CLI_TAG: ${ZROK_CLI_TAG:-latest}
        ZROK_DNS_ZONE: ${ZROK_DNS_ZONE}  # e.g., "example.com" or "127.0.0.1.sslip.io"
        ZROK_FRONTEND_PORT: ${ZROK_FRONTEND_PORT:-8080}
        ZROK_OAUTH_PORT: ${ZROK_OAUTH_PORT:-8081}
        ZROK_OAUTH_HASH_KEY: ${ZROK_OAUTH_HASH_KEY-noop}
        ZROK_OAUTH_GOOGLE_CLIENT_ID: ${ZROK_OAUTH_GOOGLE_CLIENT_ID:-noop}
        ZROK_OAUTH_GOOGLE_CLIENT_SECRET: ${ZROK_OAUTH_GOOGLE_CLIENT_SECRET:-noop}
        ZROK_OAUTH_GITHUB_CLIENT_ID: ${ZROK_OAUTH_GITHUB_CLIENT_ID:-noop}
        ZROK_OAUTH_GITHUB_CLIENT_SECRET: ${ZROK_OAUTH_GITHUB_CLIENT_SECRET:-noop}
    user: ${ZIGGY_UID:-2171}
    command: zrok access public /etc/zrok-frontend/config.yml --verbose
    volumes:
      - zrok_frontend:/var/lib/zrok-frontend
    networks:
      zrok-instance:
    restart: unless-stopped
    expose:
      - ${ZROK_FRONTEND_PORT:-8080}  # (not published)
      - ${ZROK_OAUTH_PORT:-8081}  # (not published)
    ports:
      - ${ZROK_INSECURE_INTERFACE:-[::1]}:${ZROK_FRONTEND_PORT:-8080}:${ZROK_FRONTEND_PORT:-8080}
      - ${ZROK_INSECURE_INTERFACE:-[::1]}:${ZROK_OAUTH_PORT:-8081}:${ZROK_OAUTH_PORT:-8081}
    environment:
      HOME: /var/lib/zrok-frontend
      ZROK_DNS_ZONE: ${ZROK_DNS_ZONE}
      ZROK_ADMIN_TOKEN: ${ZROK_ADMIN_TOKEN}  # zrok controller admin password
      ZROK_API_ENDPOINT: http://zrok-controller:${ZROK_CTRL_PORT:-18080}  # bridge address of the zrok controller
      ZROK_FRONTEND_SCHEME: http
      ZROK_FRONTEND_PORT: ${ZROK_FRONTEND_PORT:-8080}
      ZITI_CTRL_ADVERTISED_PORT: ${ZITI_CTRL_ADVERTISED_PORT:-1280}
      ZITI_PWD: ${ZITI_PWD}  # ziti controller admin password

volumes:
  ziti_home:  # this will not be used if you switch from named volume to bind mount volume
  zrok_ctrl:
  zrok_frontend:

# define a custom network so that we can also define DNS aliases
networks:
  zrok-instance:
    enable_ipv6: true
```
**cat compose.override.yml**

```yaml
# delete this file from your compose project if you do not want to use Caddy for TLS termination
services:
  caddy:
    build:
      context: .
      dockerfile: ./caddy.Dockerfile
      args:
        CADDY_DNS_PLUGIN: ${CADDY_DNS_PLUGIN}  # e.g., "digitalocean"
    restart: unless-stopped
    environment:
      CADDY_DNS_PLUGIN: ${CADDY_DNS_PLUGIN}  # e.g., "digitalocean"
      CADDY_DNS_PLUGIN_TOKEN: ${CADDY_DNS_PLUGIN_TOKEN}  # API token from DNS provider used by plugin to solve the ACME challenge
      ZROK_USER_EMAIL: ${ZROK_USER_EMAIL}  # email address sent to CA for ACME account and renewal notifications
      CADDY_ACME_API: ${CADDY_ACME_API:-https://acme-v02.api.letsencrypt.org/directory}  # ACME API endpoint
      ZROK_DNS_ZONE: ${ZROK_DNS_ZONE}  # e.g., "example.com" or "127.0.0.1.sslip.io"
      ZROK_CTRL_PORT: ${ZROK_CTRL_PORT:-18080}
      ZROK_FRONTEND_PORT: ${ZROK_FRONTEND_PORT:-8080}
      ZROK_OAUTH_PORT: ${ZROK_OAUTH_PORT:-8081}
    expose:
      - 80/tcp
      - 443/tcp
      - 443/udp  # Caddy's HTTP/3 (QUIC) (not published)
      - 2019/tcp  # Caddy's admin API (not published)
    ports:
      - ${CADDY_INTERFACE:-[::1]}:80:80
      - ${CADDY_INTERFACE:-[::1]}:443:443
      # - ${CADDY_INTERFACE:-0.0.0.0}:443:443/udp  # future: HTTP/3 (QUIC)
    volumes:
      - caddy_data:/data
      - caddy_config:/config
    networks:
      zrok-instance:

  zrok-frontend:
    environment:
      ZROK_FRONTEND_SCHEME: https
      ZROK_FRONTEND_PORT: 443

volumes:
  caddy_data:
  caddy_config:
```
**cat zrok-controller-config.yml.envsubst**

```yaml
#  _____ __ ___ | | __
# |_  / '__/ _ \| |/ /
#  / /| | | (_) |   <
# /___|_|  \___/|_|\_\
# controller configuration
v: 4
admin:
  # generate these admin tokens from a source of randomness, e.g.
  # LC_ALL=C tr -dc _A-Z-a-z-0-9 < /dev/urandom | head -c32
  secrets:
    - ${ZROK_ADMIN_TOKEN}
endpoint:
  host: [::]
  port: ${ZROK_CTRL_PORT}
invites:
  invites_open: true
  token_strategy: store
store:
  path: /var/lib/zrok-controller/sqlite3.db
  type: sqlite3
ziti:
  api_endpoint: https://ziti.${ZROK_DNS_ZONE}:${ZITI_CTRL_ADVERTISED_PORT}/edge/management/v1
  username: admin
  password: ${ZITI_PWD}
```
**cat zrok-frontend-config.yml.envsubst**

```yaml
v: 3
host_match: ${ZROK_DNS_ZONE}
address: [::]:${ZROK_FRONTEND_PORT}
# delete if not using oauth for public shares
oauth:
  bind_address: [::]:${ZROK_OAUTH_PORT}
  redirect_url: https://oauth.${ZROK_DNS_ZONE}
  cookie_domain: ${ZROK_DNS_ZONE}
  hash_key: ${ZROK_OAUTH_HASH_KEY}
  providers:
    - name: github
      client_id: ${ZROK_OAUTH_GITHUB_CLIENT_ID}
      client_secret: ${ZROK_OAUTH_GITHUB_CLIENT_SECRET}
    - name: google
      client_id: ${ZROK_OAUTH_GOOGLE_CLIENT_ID}
      client_secret: ${ZROK_OAUTH_GOOGLE_CLIENT_SECRET}
```
Maybe my way of thinking about this is incorrect. Perhaps I shouldn't change everything to IPv6, but instead look into something like translating IPv4 to IPv6 on the VPS. I haven't tried all the possibilities yet, but it's curious.
That YAML error means the rendered controller config is invalid. Most often one of the required env vars wasn't set when the template was rendered, but a substituted value can also break parsing: an unquoted `[::]` is read by YAML as the start of a flow sequence, and `endpoint.host` lands at line 13 of the rendered file, right where the parser complains. You can confirm what was actually rendered by printing the file.
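As a quick sketch of the env-var check (hypothetical: `/tmp/example.env` stands in for the compose project's real `.env`, and the variable list is taken from the compose file above):

```shell
# write a stand-in .env with only some of the variables defined
cat > /tmp/example.env <<'EOF'
ZROK_DNS_ZONE=share.example.com
ZROK_CTRL_PORT=18080
EOF

# report which template variables are present in the .env file
for var in ZROK_ADMIN_TOKEN ZROK_CTRL_PORT ZROK_DNS_ZONE ZITI_CTRL_ADVERTISED_PORT ZITI_PWD; do
  if grep -q "^${var}=" /tmp/example.env; then
    echo "${var} is set"
  else
    echo "${var} is MISSING"
  fi
done
```

Any variable reported missing would be substituted as an empty string, which can leave a YAML key with no value.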
```
docker compose run --rm --entrypoint= zrok-controller cat /etc/zrok-controller/config.yml
```

Example output:

```yaml
#  _____ __ ___ | | __
# |_  / '__/ _ \| |/ /
#  / /| | | (_) |   <
# /___|_|  \___/|_|\_\
# controller configuration
v: 4
admin:
  # generate these admin tokens from a source of randomness, e.g.
  # LC_ALL=C tr -dc _A-Z-a-z-0-9 < /dev/urandom | head -c32
  secrets:
    - zroktoken
endpoint:
  host: 0.0.0.0
  port: 18080
invites:
  invites_open: true
  token_strategy: store
store:
  path: /var/lib/zrok-controller/sqlite3.db
  type: sqlite3
ziti:
  api_endpoint: https://ziti.share.example.com:1280/edge/management/v1
  username: admin
  password: zitiadminpw
```
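One detail to watch for in that output: if you replaced `0.0.0.0` with `[::]`, YAML needs the value quoted, because an unquoted `[` begins a flow sequence, which produces exactly a "did not find expected node content" style of parse error. A sketch:

```yaml
endpoint:
  # host: [::]    # invalid: an unquoted "[" starts a YAML flow sequence
  host: "[::]"    # valid: quoting keeps the value a plain string
  port: 18080
```

Whether zrok expects `"[::]"` or `"::"` as the host value depends on how it joins host and port internally; if the bracketed form parses but fails to bind, try the bare `"::"`.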
Link to Docker self-hosting guide
Regarding IPv6: you can configure your Docker host to listen for ziti/zrok connections on IPv4, IPv6, or both. In any case, the Docker host must forward packets from your IP stack of choice to the containers, which you can configure in the `ports` section of each service in the Docker compose file. Additionally, any domain names you use as advertised addresses in ziti/zrok must have AAAA records resolving to the IPv6 address where your Docker host is listening.
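For example, a sketch of publishing the quickstart's ziti ports on both stacks explicitly (this assumes Docker's IPv6 port publishing, `ip6tables`, is enabled in `/etc/docker/daemon.json`; the ports are the quickstart defaults):

```yaml
ports:
  - "0.0.0.0:1280:1280"   # ziti controller, IPv4, all interfaces
  - "[::]:1280:1280"      # ziti controller, IPv6, all interfaces
  - "0.0.0.0:3022:3022"   # ziti router, IPv4
  - "[::]:3022:3022"      # ziti router, IPv6
```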
Thanks for your reply. At the moment, all containers are running and healthy. However, I still can't connect to the API endpoint at zrok.getitzen.com. How should I interpret these errors?
**ziti-cli**

```
ERROR ziti/tunnel/dns.NewDnsServer: system resolver test failed: failed to resolve ziti-tunnel.resolver.test: lookup ziti-tunnel.resolver.test on 127.0.0.11:53: no such host
WARNING ziti/tunnel/dns.flushDnsCaches: {error=[exec: "resolvectl": executable file not found in $PATH]} unable to find systemd-resolve or resolvectl in path, consider adding a dns flush to your restart process
```

**zrok-frontend**

```
{"file":"github.com/openziti/ziti/ziti/cmd/helpers/helpers.go:130","func":"github.com/openziti/ziti/ziti/cmd/helpers.StandardErrorMessage","level":"info","msg":"Connection error: Get https://ziti.getitzen.com:1280/.well-known/est/cacerts: dial tcp 172.18.0.3:1280: connect: connection refused","time":"2024-10-24T18:44:36.085Z"}
The connection to the server ziti.getitzen.com:1280 was refused - did you specify the right host or port?
```
My DNS zone is pointing to my public IPv6 address with three AAAA records (www, *, domain). The UFW ports opened are 1280, 3022, 80, and 443 for IPv4 and IPv6.
```
docker-pr 3174 root 4u IPv6 129587000 0t0 TCP [::1]:80 (LISTEN)
docker-pr 3179 root 4u IPv6 129587007 0t0 TCP [::1]:443 (LISTEN)
docker-pr 3433 root 4u IPv6 129592120 0t0 TCP *:1280 (LISTEN)
docker-pr 3438 root 4u IPv6 129592127 0t0 TCP *:3022 (LISTEN)
docker-pr 5639 root 4u IPv6 129647447 0t0 TCP *:8080 (LISTEN)
docker-pr 5644 root 4u IPv6 129647454 0t0 TCP *:8081 (LISTEN)
docker-pr 5893 root 4u IPv6 129658770 0t0 TCP *:18080 (LISTEN)
systemd-r 7238 systemd-resolve 14u IPv4 37890260 0t0 TCP 127.0.0.53:53 (LISTEN)
systemd-r 7238 systemd-resolve 16u IPv4 37890262 0t0 TCP 127.0.0.54:53 (LISTEN)
systemd-r 7238 systemd-resolve 18u IPv6 37890264 0t0 TCP [::1]:53 (LISTEN)
```
When I use https://dnschecker.org/ for my IPv6 address, it recognizes ports 18080, 3022, and 1280 as open, but ports 80 and 443 as closed. For my domain name, only ports 80 and 443 are open, while the others are closed.
Those log lines are harmless noise. They're emitted by the `ziti edge quickstart` command running in the ziti-cli container because of a bug that caused the ziti router to attempt to create a TPROXY interceptor and resolver even when it was not configured with tunnel mode "tproxy." The fix is on the way in the next release, probably 1.1.16.
The zrok-frontend container failed to enroll its ziti identity because the ziti controller's client API was not reachable at ziti.getitzen.com:1280, which resolves inside your Docker network to 172.18.0.3:1280. The zrok-frontend was able to send a SYN packet to that address, but the TCP handshake was refused because nothing was listening on that port.
Your Docker host is publishing 80,443 on the IPv6 loopback only (e.g., `[::1]:80`), not listening for incoming requests on the host's main interface (e.g., `*:80`). Check the `ports` section of the compose file for the service that's publishing 80,443/tcp. Is that the Caddy container?
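If it is the Caddy container, one possible fix (assuming you want Caddy reachable on all of the host's IPv6 interfaces) is to set `CADDY_INTERFACE=[::]` in the project's `.env`, which the override file's `ports` entries interpolate, so that they render as:

```yaml
ports:
  - "[::]:80:80"
  - "[::]:443:443"
```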
Okay, so I'll ignore that. It's good to see the project getting more and more fixes.
But it is open, if what UFW is telling me is true. And I remembered to reload...
Maybe it's a problem with iptables. However, the INPUT and FORWARD chains are set to ACCEPT.
Yes, but for now I'm trying again without TLS, using Flexible mode in Cloudflare's SSL settings.
I have a theory that the VPS can't handle that configuration.
zrok-frontend failed to reach the advertised address of the Ziti controller. You can verify the port is open with `ncat -vz ziti.getitzen.com 1280` or `openssl s_client -connect ziti.getitzen.com:1280`.
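If `ncat` isn't installed, bash alone can do a rough reachability probe via its `/dev/tcp` feature (a sketch; the `127.0.0.1` port `1` call below only demonstrates the closed case, so substitute your controller's host and port):

```shell
# probe <host> <port>: prints "open" if a TCP connect succeeds, "closed" otherwise
probe() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

probe 127.0.0.1 1   # port 1 on loopback is almost certainly closed
```

Run it as, e.g., `probe ziti.getitzen.com 1280` from both the Docker host and a machine outside it, since the two paths can behave differently.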
zrok-frontend may be in the same Docker bridge network as the Ziti controller if you're following the Docker self-hosting guide for zrok. In that case, there must be two working paths to the Ziti controller's port 1280: exposed inside the bridge network and published on the host's global IPv6 address.
That will work without TLS for the domain names that resolve to the zrok controller and the zrok frontend's wildcard record. Cloudflare can also be configured to use TLS on the backend/upstream connection to your VPS.
The domain names that resolve to the ziti controller and router must not be proxied by Cloudflare providing TLS. Those must be direct (DNS only) or proxied with TLS passthrough.