Hello,
I would like to set up ziti-edge-tunnel on a Linux system with Podman, so that it binds services, intercepts connections, and routes them over the Ziti network.
Perhaps your goal is a transparent proxy for a container, non-container processes running on the Docker host, or both. There's a different strategy for each.
You can use the container image openziti/ziti-edge-tunnel, like this example, to provide a transparent proxy and nameserver for all processes on the host, including containers. This is equivalent to installing the ziti-edge-tunnel Linux package on the host. Both require systemd-resolved on the host to auto-configure DNS; otherwise you must configure the resolver to query the Ziti nameserver first (default: 100.64.0.2). It is not possible to use this image in a container-specific way; it configures the entire host.
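A minimal sketch of the host-wide approach, assuming you enroll with a one-time JWT token (the token file path, volume name, and `ZITI_ENROLL_TOKEN` variable are illustrative; check the image's documentation for the exact enrollment mechanism it supports):

```shell
# Run the tunneler with host networking and the privileges it needs to
# create the ziti0 TUN device and talk to systemd-resolved over D-Bus.
docker run --detach \
  --name ziti-edge-tunnel \
  --network host \
  --privileged \
  --device /dev/net/tun:/dev/net/tun \
  --volume ziti-identity:/ziti-edge-tunnel \
  --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
  --env ZITI_ENROLL_TOKEN="$(cat ./my-identity.jwt)" \
  openziti/ziti-edge-tunnel
```

Mounting the host's D-Bus socket is what lets the tunneler register its nameserver with systemd-resolved automatically.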
Another goal is to provide a transparent proxy and nameserver for one or more specific containers, not the Docker host and not all other processes on the Docker host. This is done with the Ziti "sidecar" approach. I recommend the openziti/ziti-router approach like this example. Here's a quick overview: configure a Ziti router with tunneling enabled to share a Docker network interface with the client application. The tunneler configures the shared Docker network namespace with transparent proxy (TPROXY) iptables rules and a nameserver for Ziti services. Only the containers sharing that network interface can access the Ziti services.
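The sidecar pattern above can be sketched like this, assuming a router container that is already enrolled and configured with tproxy tunneling (the container names and the `my-service.ziti` hostname are placeholders):

```shell
# Start the sidecar: a Ziti router with tunneling enabled. NET_ADMIN is
# needed so it can install TPROXY iptables rules in its own namespace.
docker run --detach --name ziti-sidecar \
  --cap-add NET_ADMIN \
  --volume ziti-router-config:/etc/ziti \
  openziti/ziti-router

# The application container joins the sidecar's network namespace, so the
# TPROXY rules and Ziti nameserver apply to it, and only to it.
docker run --rm --network container:ziti-sidecar alpine \
  nslookup my-service.ziti
```

With `--network container:ziti-sidecar`, the two containers share one network stack, which is exactly the scope you want: other containers and the host itself are unaffected.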
I used ziti-edge-tunnel in the container and adapted /etc/resolv.conf. The Ziti interface ziti0 starts successfully, and requests to services from the host system are routed over the Ziti network. However, the containers themselves cannot resolve the domains.
You are running the tunneler in a privileged container on a Linux host with the goal of configuring the entire host, including all containers, with the same Ziti service access, correct?
You wish to manually configure the Docker host to use the Ziti nameserver. This can work if the tunneler's privileged container has network mode "host" and there are no other processes managing libc's /etc/resolv.conf, e.g. NetworkManager, systemd-resolved, etc.
After configuring the host resolver and verifying name resolution, you must configure the other containers to use the Ziti nameserver provided by the tunneler. This will involve DNS configuration and ensuring the containers have an IP route and firewall exceptions for the nameserver, if applicable.
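As an illustration of the per-container DNS step, you can point an individual container directly at the Ziti nameserver (100.64.0.2 is the default; the service name is a placeholder). Note that name resolution alone is not enough: the container must also have an IP route to the tunneler's TUN interface for the resolved addresses to be reachable.

```shell
# First DNS server answers Ziti queries; the second is a recursive
# fallback for everything else.
docker run --rm --dns 100.64.0.2 --dns 1.1.1.1 alpine \
  nslookup my-service.ziti
```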
I believe the default resolver configuration for Docker containers is to treat the Docker embedded nameserver, which has features like resolving container names to Docker bridge IP addresses, as the primary resolver, and to inherit the host configuration as a fallback.
This functions differently on the default Docker bridge and custom Docker bridges. Generally, use a custom bridge for the most intuitive behavior.
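To see the difference, assuming Docker defaults: containers on a user-defined bridge get Docker's embedded DNS at 127.0.0.11 (which forwards to configurable upstreams), while containers on the default bridge copy the host's resolver configuration directly. The network name here is illustrative:

```shell
docker network create my-bridge

# On the custom bridge, resolv.conf points at Docker's embedded DNS:
docker run --rm --network my-bridge alpine cat /etc/resolv.conf

# On the default bridge, it mirrors the host's configuration:
docker run --rm alpine cat /etc/resolv.conf
```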
Glad you found it! For others: you don't necessarily need to change the resolver used by your distribution, just follow that resolver's docs to set the Ziti tunneler nameserver as the first to answer queries. Only systemd-resolved is reliably auto-configured; you can manually configure resolvconf, NetworkManager, Netplan, or /etc/resolv.conf (as a last resort).
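For the systemd-resolved case, these commands (a sketch; the interface name ziti0 assumes the tunneler's default) verify the auto-configuration or apply it manually:

```shell
# Check whether the Ziti nameserver was registered on the TUN link:
resolvectl status ziti0

# If not, set it manually on that link:
resolvectl dns ziti0 100.64.0.2

# Optionally route all lookups through ziti0 first:
resolvectl domain ziti0 '~.'
```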
NetworkManager is quite common. The approach is to modify the connection profile so the Ziti nameserver is listed first, followed by at least one recursive nameserver.
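A hedged NetworkManager example ("Wired connection 1" is a placeholder profile name; list your own profiles with `nmcli connection show`):

```shell
# Prepend the Ziti nameserver and keep a recursive upstream as fallback;
# ignore-auto-dns stops DHCP from overwriting the list.
nmcli connection modify "Wired connection 1" \
  ipv4.dns "100.64.0.2 1.1.1.1" \
  ipv4.ignore-auto-dns yes

# Reactivate the profile so the change takes effect:
nmcli connection up "Wired connection 1"
```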