@markamind - what is the configuration that you have? Are you dockering everything within the docker-compose environment, and what do you have?
I will speak with regard to the simplified docker-compose, as that is what I am currently using. In the compose file I have three services running: the controller (ziti-controller), the edge router (ziti-edge-router) and the console (ziti-console).
These are all attached to the zitiblue and zitired networks. This means that they can all talk to each other without having to go through the host network (i.e. traffic stays within the docker layer).
I have created a Ziti service to carry this traffic without breaking out to the host, with these commands:
ziti edge create config zac.intercept.v1 intercept.v1 '{"protocols":["tcp"],"addresses":["zac.ziti"], "portRanges":[{"low":443, "high":443}]}'
ziti edge create config zac.host.v1 host.v1 '{"protocol":"tcp", "address":"ziti-console", "port":8443}'
ziti edge create service zac.svc --configs zac.intercept.v1,zac.host.v1
ziti edge create service-policy zac.policy.dial Dial --service-roles "@zac.svc" --identity-roles '#zac-clients'
ziti edge create service-policy zac.policy.bind Bind --service-roles '@zac.svc' --identity-roles "@ziti-edge-router"
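Spelled out, the two configs above pair up like this: the intercept side matches traffic a client sends to zac.ziti:443, and the host side tells the bound router where to forward it. A YAML rendering of the same JSON settings, purely for readability:

```yaml
# intercept.v1 -- what the client-side tunneler captures
intercept:
  protocols: [tcp]
  addresses: [zac.ziti]   # the DNS name clients will use
  portRanges:
    - low: 443            # clients dial plain HTTPS on 443...
      high: 443

# host.v1 -- where the hosting side forwards the traffic
host:
  protocol: tcp
  address: ziti-console   # docker-resolvable container name
  port: 8443              # ...and it lands on the console's real port
```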
What I like about this is that the port translation happens transparently on the way through: I can use a normal HTTPS request, i.e. https://zac.ziti, and it is translated through to port 8443 at the other end. I use the address ziti-console
in the host.v1 declaration because docker's internal DNS resolves it to the container on that network. Hence, when I restart docker and it shuffles the IP addresses around, the service will always find it.
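For reference, the network attachment that makes the ziti-console name resolvable looks roughly like this in the compose file (a sketch only; the full service definitions come from the simplified docker-compose):

```yaml
services:
  ziti-console:
    networks:
      - zitiblue
      - zitired

networks:
  zitiblue:
  zitired:
```

Any container attached to the same user-defined network can resolve ziti-console by name, regardless of which IP docker hands it on restart.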
So, now I can get the zitified ZAC console by going to https://zac.ziti. The controller address to enter (when connecting) is: https://ziti-controller:1280
Since I have this working, I can now make port 8443 invisible to the private network by commenting out the ports entry in the docker-compose file:
  ziti-console:
    image: openziti/zac
    environment:
      - ZAC_SERVER_CERT_CHAIN=/openziti/pki/ziti-controller-intermediate/certs/ziti-controller-server.cert
      - ZAC_SERVER_KEY=/openziti/pki/ziti-controller-intermediate/keys/ziti-controller-server.key
    depends_on:
      - ziti-controller
    ports:
      - "1408:1408"
      # - "18443:8443"
and restarting the console (I needed to map to 18443 as 8443 was colliding with another service on the host). So now ZAC is dark to the network that the docker host sits on. Caveat: while it is dark to the network, it is not dark to the host itself (where docker is running), because the local host can route to all docker networks. You can confirm this by typing docker inspect docker_ziti-console-1,
which will show you the IP address the container is running on. If you open a browser on the host, you can still connect to it on 8443.
Having ZAC dark is a chicken-and-egg condition: you cannot make ZAC dark until you get some clients connected to it, and you cannot get clients connected to it while it is dark. So the process is to keep it attached to a local network first, then, once you have the ZAC console going through ziti, make it dark.
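That bootstrap order can be sketched roughly as below. The identity name, role and file names are just examples, and the exact create-identity syntax varies a little between ziti CLI versions, so check ziti edge create identity --help on yours:

```shell
# 1. While ZAC is still reachable on the local network, create and
#    enroll a client identity that carries the #zac-clients role
#    referenced by the dial policy:
ziti edge create identity user zac-admin -a zac-clients -o zac-admin.jwt
ziti edge enroll zac-admin.jwt   # writes zac-admin.json next to the jwt

# 2. Load zac-admin.json into a tunneler on your workstation and
#    confirm https://zac.ziti reaches the console over ziti.

# 3. Only then comment out the ports mapping in docker-compose and
#    restart the console -- ZAC is now dark.
```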