Ziti-edge-tunnel external resolution setup?

Hello, in our environments we often have a mix of name resolution services chained or recursing together: Consul DNS, dnsmasq, AWS or other nameservers, etc.

Coupled with the fact that NixOS seems to behave a bit quirky with the tunneler’s automatic DNS injection (I haven’t nailed down yet what is going on; it isn’t a problem all the time, and it’s not readily repeatable), it would be nice to have an option to disable the tunneler’s automatic resolver injection and handle it ourselves in our various service configs so that everything plays together nicely.

I’m not sure if the tunneler always defaults to the same DNS binding IP (100.64.0.1?), but if not, that might also need to be made configurable via a flag so it can be set externally.

Has a disable flag for DNS auto-injection been considered for this purpose on the tunneler?

Hello,

By DNS “auto-injection”, I assume you mean the way ziti-edge-tunnel sets itself up as a DNS server with the host’s resolver system? If so, there is currently no way to disable it. While it shouldn’t be technically difficult to make the DNS server’s integration with the host’s resolver optional, supporting such an option could lead to supportability issues for us if someone uses DNS names in their intercept configuration and somehow stumbles onto this option.

I’m very interested in the problems that you’re having - i.e. the reason that you want to disable the DNS server (assuming that’s what you actually want). Hopefully we can get to the bottom of it and address whatever issue you’re seeing, so please provide more info on that if you’re inclined.

With that said, the DNS server IP can be modified, albeit in what may seem like a roundabout way. The DNS IP is always the IP that is assigned to the tun device + 1, and the tun IP is the first IP in the DNS IP range.

The DNS IP range defaults to 100.64.0.1/10, and it can be overridden with the --dns-ip-range command-line option. Here’s what the IPs look like with the default DNS IP range:

$ ip addr show dev tun0 
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.64.0.1/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::43ec:b6e1:b8bd:a112/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever
$ resolvectl dns
Global:
Link 2 (enp0s5): 10.211.55.1 fe80::21c:42ff:fe00:18%5439487
Link 4 (tun0): 100.64.0.2
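
If the default range collides with something in your environment, you can pick a different one and the tun/DNS IPs follow from it. A rough example (the range and identity file here are just placeholders for illustration):

❯ sudo ziti-edge-tunnel run -i ./my-identity.json --dns-ip-range 100.80.0.0/16

With that range, tun0 should come up with 100.80.0.1 and the tunneler’s DNS server should answer on 100.80.0.2.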

There have been some changes in this area recently, and if you’re seeing a DNS IP of “100.64.0.1” then I suspect you’re a couple of revs back or something unexpected is happening with the IP calculation.


Hi @scareything, thanks very much for the info. I’ve been doing some setup using ziti 0.26.8 and ziti-edge-tunnel 0.19.11. I see new versions were released starting yesterday: ziti 0.26.9 and ziti-edge-tunnel 0.20.0.

Before mentioning more about the tunneler, I should probably add that I just updated to the latest ziti release (0.26.9), and the ziti-router bin is panicking on startup with the following:

{"file":"github.com/openziti/edge@v0.24.7/tunnel/dns/server.go:59","func":"github.com/openziti/edge/tunnel/dns.flushDnsCaches","level":"info","msg":"dns caches flushed","time":"2022-10-12T21:17:59.346Z"}
{"file":"github.com/openziti/edge@v0.24.7/router/handler_edge_ctrl/apiSessionAdded.go:200","func":"github.com/openziti/edge/router/handler_edge_ctrl.(*apiSessionAddedHandler).instantSync","level":"info","msg":"first api session syncId [cl964vnht01kip57o9wb1r9lt], starting","strategy":"instant","time":"2022-10-12T21:17:59.347Z"}
{"file":"github.com/openziti/edge@v0.24.7/router/handler_edge_ctrl/apiSessionAdded.go:265","func":"github.com/openziti/edge/router/handler_edge_ctrl.(*apiSessionSyncTracker).Add","level":"info","msg":"received api session sync chunk 0, isLast=true","time":"2022-10-12T21:17:59.348Z"}
{"file":"github.com/openziti/edge@v0.24.7/router/handler_edge_ctrl/apiSessionAdded.go:124","func":"github.com/openziti/edge/router/handler_edge_ctrl.(*apiSessionAddedHandler).applySync","level":"info","msg":"finished sychronizing api sessions [count: 6, syncId: cl964vnht01kip57o9wb1r9lt, duration: 60.021µs]","time":"2022-10-12T21:18:00.530Z"}
{"file":"github.com/openziti/edge@v0.24.7/tunnel/intercept/iputils.go:51","func":"github.com/openziti/edge/tunnel/intercept.SetDnsInterceptIpRange","level":"info","msg":"dns intercept IP range: 100.64.0.1 - 100.127.255.254","time":"2022-10-12T21:18:00.871Z"}
panic: runtime error: index out of range [28] with length 0
goroutine 61 [running]:
github.com/orcaman/concurrent-map/v2.ConcurrentMap[...].GetShard(...)
        github.com/orcaman/concurrent-map/v2@v2.0.0/concurrent_map.go:31
github.com/orcaman/concurrent-map/v2.ConcurrentMap[...].Get({0x0?, 0xe36a70?, 0x37e11d600?}, {0xc000976740?, 0x9a3190a2f3cd?})
        github.com/orcaman/concurrent-map/v2@v2.0.0/concurrent_map.go:85 +0x13e
github.com/openziti/edge/router/xgress_edge_tunnel.(*servicePoller).requestServiceListUpdate(0xc00081c910)
        github.com/openziti/edge@v0.24.7/router/xgress_edge_tunnel/servicepoll.go:122 +0x74
github.com/openziti/edge/router/xgress_edge_tunnel.(*servicePoller).pollServices(0xc00081c910, 0x124ec26?, 0xc0008fb500)
        github.com/openziti/edge@v0.24.7/router/xgress_edge_tunnel/servicepoll.go:108 +0x10a
created by github.com/openziti/edge/router/xgress_edge_tunnel.(*tunneler).Start
        github.com/openziti/edge@v0.24.7/router/xgress_edge_tunnel/tunneler.go:92 +0x80d

I rolled back to the prior version with the same config and verified that the panic stops. The panic is repeatable, even after wiping state and re-doing router enrollment. I’m currently trying to track down what’s going on here before continuing with the tunneler.

I’ll point someone at this immediately… Panics are (obviously) unwanted :confused:


I ran into the exact same issue in my setup, so I guess it’s safe to assume it’s not “PEBKAC” :smiley:

Panic solved, thanks!
Reference.

TL;DR – there appears to be a bug in the new ziti-edge-tunnel release, 0.20.0, that is preventing name resolution. Rolling back to 0.19.11 fixes the behavior immediately. I looked through all my networking config trying to determine the root cause before realizing it’s a version-related problem. I’ll leave the write-up below in case it’s worth something, but the command/config outputs are more or less identical on either version (with the exception of dig working on the prior version but not the new one).

John


Debugging name resolution not working on ziti-edge-tunnel v0.20.0

Ok, back to the tunneler DNS setup and working on debugging around that…

So I’ve set up a simple service, just replicating what was in one of the recent ZitiTV videos:

# ziti edge show config netcat-example -- BIND
{
    "hostname": "localhost",
    "port": 54321,
    "protocol": "tcp"
}

# ziti edge show config netcat-example-client -- DIAL
{
    "hostname": "ken.nc.ziti",
    "port": 12345
}
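
(For reproducibility, I believe both configs can be created from the CLI with something along these lines; the names match the JSON above, but treat the exact invocation as a sketch rather than exactly what the video did:)

# ziti edge create config netcat-example ziti-tunneler-server.v1 '{"hostname":"localhost","port":54321,"protocol":"tcp"}'
# ziti edge create config netcat-example-client ziti-tunneler-client.v1 '{"hostname":"ken.nc.ziti","port":12345}'
# ziti edge create service netcat-example --configs netcat-example,netcat-example-client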

The dial service is assigned by service policy to an identity on an x86-64 NixOS system with systemd-resolved enabled and the 0.20.0 Linux tunneler. When I start the tunneler up, the DNS functionality appears to be ok, as indicated by the following log lines with trace logging enabled (snipping anything not DNS related):

...
(3248582)[        0.000]   DEBUG ziti-edge-tunnel:tun.c:295 init_dns_maintainer() setting up NETLINK listener
(3248582)[        0.000]    INFO tunnel-cbs:ziti_dns.c:171 seed_dns() DNS configured with range 100.64.0.0 - 100.127.255.255 (4194302 ips)
(3248582)[        0.000]   DEBUG tunnel-sdk:ziti_tunnel.c:321 ziti_tunneler_intercept() intercepting address[udp:100.64.0.2/32:53] service[ziti:dns-resolver]
(3248582)[        0.014]   DEBUG ziti-edge-tunnel:tun.c:271 on_dns_update_time() queuing DNS update
(3248582)[        0.014]    INFO ziti-edge-tunnel:resolvers.c:430 is_systemd_resolved_primary_resolver() Detected systemd-resolved is primary system resolver
(3248582)[        0.014]    INFO ziti-edge-tunnel:resolvers.c:66 init_libsystemd() Initializing libsystemd
(3248582)[        0.014]   DEBUG ziti-edge-tunnel:resolvers.c:93 init_libsystemd() Dynamically loaded libsystemd
(3248582)[        0.014]   DEBUG ziti-edge-tunnel:resolvers.c:329 try_libsystemd_resolver() Detected systemd is init system
(3248582)[        0.014]   DEBUG ziti-edge-tunnel:resolvers.c:336 try_libsystemd_resolver() Connected to system DBus
(3248582)[        0.014]   DEBUG ziti-edge-tunnel:resolvers.c:206 sd_bus_is_acquired_name() systemd-resolved DBus name found: org.freedesktop.resolve1
(3248582)[        0.014]    INFO ziti-edge-tunnel:resolvers.c:343 try_libsystemd_resolver() systemd-resolved selected as dns resolver manager
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkLLMNR for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkMulticastDNS for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkDNSOverTLS for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkDNSSEC for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkDNS for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:391 dns_update_systemd_resolved() Setting empty domain on interface: tun0
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:313 set_systemd_resolved_link_setting() Success in method invocation: SetLinkDomains for link: (tun0)
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:168 sd_bus_run_command() Success in command invocation: FlushCaches
(3248582)[        0.016]   DEBUG ziti-edge-tunnel:resolvers.c:168 sd_bus_run_command() Success in command invocation: ResetServerFeatures
(3248582)[        0.018]   DEBUG ziti-edge-tunnel:tun.c:266 after_set_dns() DNS update: 0
...
(3248582)[        2.409]    INFO tunnel-cbs:ziti_tunnel_cbs.c:403 new_ziti_intercept() creating intercept for service[netcat-example] with ziti-tunneler-client.v1 = {"hostname":"ken.nc.ziti","port":12345}
(3248582)[        2.409]    INFO tunnel-cbs:ziti_dns.c:297 new_ipv4_entry() registered DNS entry ken.nc.ziti -> 100.64.0.3
(3248582)[        2.409]   DEBUG tunnel-sdk:ziti_tunnel.c:321 ziti_tunneler_intercept() intercepting address[udp:100.64.0.3/32:12345] service[netcat-example]
(3248582)[        2.409]   DEBUG tunnel-sdk:ziti_tunnel.c:321 ziti_tunneler_intercept() intercepting address[tcp:100.64.0.3/32:12345] service[netcat-example]
...
(3248582)[       55.986]   TRACE tunnel-sdk:netif_shim.c:56 netif_shim_input() received packet UDP[100.64.0.1:35930 -> 100.64.0.2:53] len=88
(3248582)[       55.986]   TRACE tunnel-sdk:tunnel_udp.c:177 recv_udp() received datagram 100.64.0.1:35930->100.64.0.1:53
(3248582)[       55.986] VERBOSE tunnel-sdk:intercept.c:113 address_match() ziti_address_cidr match score 0
(3248582)[       55.986] VERBOSE tunnel-sdk:intercept.c:136 port_match() matching port 53 to range 12345
(3248582)[       55.986]   TRACE tunnel-sdk:tunnel_udp.c:201 recv_udp() no intercepted addresses match udp:100.64.0.2:53
(3248582)[       55.986]   TRACE tunnel-sdk:netif_shim.c:56 netif_shim_input() received packet UDP[100.64.0.1:36821 -> 100.64.0.2:53] len=88
(3248582)[       55.986]   TRACE tunnel-sdk:tunnel_udp.c:177 recv_udp() received datagram 100.64.0.1:36821->100.64.0.1:53
(3248582)[       55.986] VERBOSE tunnel-sdk:intercept.c:113 address_match() ziti_address_cidr match score 0
(3248582)[       55.986] VERBOSE tunnel-sdk:intercept.c:136 port_match() matching port 53 to range 12345
(3248582)[       55.986]   TRACE tunnel-sdk:tunnel_udp.c:201 recv_udp() no intercepted addresses match udp:100.64.0.2:53

resolvectl and the resolver stub file seem to show that DNS queries should be properly directed to the OpenZiti tunneler at 100.64.0.2:

❯  cat /run/systemd/resolve/resolv.conf
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
...
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 100.64.0.2
search .

❯ resolvectl status
Global
           Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=allow-downgrade/unsupported
    resolv.conf mode: stub
  Current DNS Server: 8.8.8.8
         DNS Servers: 8.8.8.8 8.8.4.4
Fallback DNS Servers: 1.1.1.1#cloudflare-dns.com 8.8.8.8#dns.google 1.0.0.1#cloudflare-dns.com 8.8.4.4#dns.google 2606:4700:4700::1111#cloudflare-dns.com 2001:4860:4860::8888#dns.google
                      2606:4700:4700::1001#cloudflare-dns.com 2001:4860:4860::8844#dns.google

Link 3 (wlp4s0)
Current Scopes: LLMNR/IPv4 LLMNR/IPv6
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=allow-downgrade/supported

<...snip other interfaces...>

Link 267 (tun0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 100.64.0.2
       DNS Servers: 100.64.0.2

However, if I check interface and port bindings, I don’t see the tunneler directly binding TCP/UDP port 53 on 100.64.0.2 in netstat. Shouldn’t we see it here?

❯ sudo netstat -tupln | grep 53
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      3245770/systemd-res 
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN      3245770/systemd-res 
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      3245770/systemd-res 
tcp6       0      0 :::5355                 :::*                    LISTEN      3245770/systemd-res 
udp        0      0 127.0.0.54:53           0.0.0.0:*                           3245770/systemd-res 
udp        0      0 127.0.0.53:53           0.0.0.0:*                           3245770/systemd-res 
udp        0      0 0.0.0.0:5355            0.0.0.0:*                           3245770/systemd-res 
udp6       0      0 :::5355                 :::*                                3245770/systemd-res

IP information shows the tun0 device is set up properly with 100.64.0.1; nothing is seen for 100.64.0.2. I’m thinking the 100.64.0.2 packets are handled indirectly through iptables rules or ip route rather than a direct device/IP address assignment:

❯ ip a
<...snip...>
267: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.64.0.1/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::fc94:f4be:91fd:b0/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

iptables-save, iptables -t nat -v --list, and iptables -t mangle -v --list all show no rules that would intercept packets for 100.64.0.2 – i.e. no hit for | grep '100.' in the output of those commands. Looks like this is handled by ip route instead:

❯ ip route
default via 192.168.3.1 dev wlp4s0 proto dhcp src 192.168.3.92 metric 3003 mtu 1500 
<...snip non-relevant routes...>
$PUBLIC_CONTROLLER_IP via 192.168.3.1 dev wlp4s0 
100.64.0.0/10 dev tun0 scope link 
100.64.0.2 dev tun0 scope link 
100.64.0.3 dev tun0 scope link 
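
(If I’m reading the trace log above correctly, the tunneler itself intercepts udp:100.64.0.2:53 as the built-in ziti:dns-resolver service, so DNS would be answered in user space from packets read off the tun device rather than from a bound socket, which would explain the empty netstat. A quick sanity check that the kernel does steer those packets onto tun0 – roughly the output I’d expect:)

❯ ip route get 100.64.0.2
100.64.0.2 dev tun0 src 100.64.0.1 uid 1000
    cache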

But name resolution for the tunneler at 100.64.0.2 doesn’t appear to work yet:

# Should receive an answer
❯ dig ken.nc.ziti
; <<>> DiG 9.18.3 <<>> ken.nc.ziti
;; global options: +cmd
;; connection timed out; no servers could be reached

# Should receive an answer
❯ dig @100.64.0.2 ken.nc.ziti
; <<>> DiG 9.18.3 <<>> @100.64.0.2 ken.nc.ziti
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

# Expected to fail: 100.64.0.1 is the tun IP, not the DNS IP
❯ dig @100.64.0.1 ken.nc.ziti
;; communications error to 100.64.0.1#53: connection refused

Oh! If I roll back to version 0.19.11, resolution works right away… Seems something broke in 0.20.0.

Just found this in a newly created GH issue, which was opened about the same time I was writing this up :slight_smile:

Thanks so much for the report! This is fixed in 0.20.1, which I’ll be releasing shortly.


Sweet! Finally have something up and running with OpenZiti! So I’ve tested:

  • a simple single-port service, ziti-edge-tunnel to ziti-edge-tunnel
  • a VPN service with a port range, ziti-edge-tunnel to a ziti-edge-router acting as gateway

Super happy with the progress!

One quirk I’m noticing, although it doesn’t appear to be a functional issue: in the docker compose quickstart, I can query fabric APIs from the CLI on the controller, for example:

ziti@ea0520668e5e:/openziti$ ziti fabric list links
╭────────────────────────┬───────────────────────┬───────────────────────┬─────────────┬─────────────┬─────────────┬───────────┬────────┬───────────╮
│ ID                     │ DIALER                │ ACCEPTOR              │ STATIC COST │ SRC LATENCY │ DST LATENCY │ STATE     │ STATUS │ FULL COST │
├────────────────────────┼───────────────────────┼───────────────────────┼─────────────┼─────────────┼─────────────┼───────────┼────────┼───────────┤
│ 1GVIH8jlfcwJywd5qVwQHx │ ziti-private-blue     │ ziti-edge-router      │           1 │       2.4ms │       2.3ms │ Connected │     up │         5 │
│ 1Ky2rGDsmHfqCakCq0Ultu │ ziti-private-red      │ ziti-fabric-router-br │           1 │       2.0ms │       2.0ms │ Connected │     up │         4 │
│ 1t0dq56R1eSPntE4Mhzd5M │ ziti-private-red      │ ziti-edge-router      │           1 │       2.5ms │       2.4ms │ Connected │     up │         5 │
│ 2DhzKGy9NcDvRGhs0rIhqo │ ziti-edge-router-wss  │ ziti-edge-router      │           1 │       2.4ms │       2.5ms │ Connected │     up │         5 │
│ 34I9yDHUb3ArsMWc99Z4QD │ ziti-edge-router-wss  │ ziti-fabric-router-br │           1 │       2.4ms │       2.5ms │ Connected │     up │         5 │
│ 38ayYaY2PGOSKrp4RrVXTI │ ziti-private-blue     │ ziti-fabric-router-br │           1 │       1.5ms │       2.1ms │ Connected │     up │         4 │
│ 5hfUoIu0gbiZv8PjY8EOcj │ ziti-private-blue     │ ziti-edge-router-wss  │           1 │       2.0ms │       1.9ms │ Connected │     up │         3 │
│ 5myYsACMbAShPDYC7PJZlJ │ ziti-private-red      │ ziti-edge-router-wss  │           1 │       2.0ms │       2.0ms │ Connected │     up │         4 │
│ 5s5SYuMFXPXCGeJWajihgu │ ziti-fabric-router-br │ ziti-edge-router      │           1 │       2.4ms │       2.4ms │ Connected │     up │         5 │
╰────────────────────────┴───────────────────────┴───────────────────────┴─────────────┴─────────────┴─────────────┴───────────┴────────┴───────────╯
results: 1-9 of 9

But if I try to do the same thing on the controller of my anywhere deployment, I get an “API path not found” error:

# ziti fabric list links
error: error listing https://#EXTERNAL_ROUTER_CONTROLLER_FQDN:PORT/fabric/v1/links in Ziti Edge Controller. Status code: 404 Not Found, Server returned: {
    "error": {
        "cause": {
            "code": "UNHANDLED",
            "message": "path /fabric/v1/links was not found"
        },
        "code": "NOT_FOUND",
        "message": "The resource requested was not found or is no longer available",
        "requestId": "D8T-VWQJ3M"
    },
    "meta": {
        "apiEnrollmentVersion": "0.0.1",
        "apiVersion": "0.0.1"
    }
}

For the PoC, I currently have both the controller and router services on the same machine; later I’ll split them and use out-of-band cert distribution, since some of the certs are shared between these two services. Since I have both tunnel-to-tunnel and tunnel-to-router services working, it doesn’t look like the inability to query fabric endpoints from the CLI is harming anything.

Maybe it’s just an env var definition which I’ve mucked up a bit – I find myself getting confused easily with them as there are so many… I teased them apart as best I could from the ziti-cli-functions.sh file into the vars needed for the individual NixOS systemd controller and router services.

Since everything is mTLS encrypted, my tcpdump and strace friends aren’t as effective helpers as usual in figuring out what’s going on with the fabric API call quirk :slight_smile:.

Also, @scareything, do you plan to release binary artifacts for the latest ziti-edge-tunnel release as well? I’ll give it a whirl once they are up. Getting NixOS source builds written is taking a lower priority since we have binary builds working now from your repo release artifacts.

Awesome, thanks for the update!

The binary artifacts should have been automatically added to the release, but the release build failed because GitHub is apparently deprecating one of the runners that we use:

This is a scheduled macOS-10.15 brownout. The macOS-10.15 environment

We just had this issue with our Linux runners. Sheesh! I'll be updating the runner for macOS soon, probably next week.

BTW, I spotted an issue in 0.20.1 that you might run into soon if you haven’t already. Basically, the tunneler could dial a service when a non-matching packet is intercepted. It’s been fixed in 0.20.2, so you should grab those bits when you get a sec.

Thanks,
-Shawn


Are you able to successfully log in using the ziti CLI? How do you log in? Do you use the zitiLogin alias from the helper script, or do you issue ziti edge login, or something else? My guess is that the controller’s REST API is not exposed publicly and the ziti CLI is using the public address.
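
For reference, an explicit edge login looks roughly like this (host, port, and credentials are placeholders for your deployment):

ziti edge login your-controller-fqdn:1280 -u admin -p 'your-admin-password'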

Hi Clint! I do use the zitiLogin alias after sourcing, in the shell, the environment file created by an initial generateEnv call from ziti-cli-functions.sh.

I can login fine:

# zitiLogin
Token: $TOKEN
Saving identity 'default' to /var/lib/ziti-controller/ziti-cli.json

I can hit other APIs from the CLI ok – for example, the edge API:

# ziti edge list sessions
╭───────────────────────────┬───────────────────────────┬──────────────┬──────╮
│ ID                        │ API SESSION ID            │ SERVICE NAME │ TYPE │
├───────────────────────────┼───────────────────────────┼──────────────┼──────┤
│ cl98xaomy1ihxll7o1z9frynd │ cl97rm8f1000mll7ohjpxkc48 │ vpn          │ Bind │
│ cl98xc1wk1ijsll7ogarjwj10 │ cl98vq5v61gekll7ogff3lbz8 │ vpn          │ Dial │
│ cl98xkpk21ivall7oz1zh476d │ cl98uyrcu1feall7olvqyv3rd │ vpn          │ Dial │
╰───────────────────────────┴───────────────────────────┴──────────────┴──────╯
results: 1-3 of 3

Just the fabric APIs seem to be an issue. I noticed I hadn’t opened the firewall for port 6262 in my config, but I’ve since opened it and it made no difference.

I’m not sure what would be specific to the fabric endpoints but not the edge endpoints… Maybe an env var or a port that’s configured correctly for the edge API but incorrectly for the fabric API.

Ah, zitiLogin does an edge login. Maybe there is a fabric login that I need to be doing.

The edge login is all you need. What is the version of the ziti CLI?

ziti --version
v0.26.8

Also, if you look at your config file, usually at $HOME/.ziti/quickstart/$(hostname)/$(hostname).yaml, does it have ‘fabric’ at the very end? Could that be missing somehow? For example, mine looks like this from an install I did a week or two back:

tail -5 $HOME/.ziti/quickstart/$(hostname)/$(hostname).yaml
        options: { }
      - binding: edge-client
        options: { }
      - binding: fabric
        options: { }

That will be important when you have more than one router. That’s where external routers are going to end up connecting to the controller (or whatever you define in the controller’s config file at:

ctrl:
  listener:             tls:0.0.0.0:6262

The only thing I can think of is that you are missing the fabric web.name.apis.binding. For example, on one of my servers I have a block like this:

    apis:
      - binding: edge-client
        options: { }
        #- binding: fabric
        #options: { }
      - binding: edge-management
        options: { }

Notice how my fabric binding isn’t supplied? But when I listed links I got a different error:

ziti fabric list links
error: error listing https://ec2-18-188-201-183.us-east-2.compute.amazonaws.com:18441/fabric/v1/links in Ziti Edge Controller. Status code: 404 Not Found, Server returned:

What about the controller logs - anything useful in there?


I think you probably nailed it with the suggestion that the binding: fabric and options: { } lines are missing; they are indeed. Not sure how that happened. I’m running the latest versions of the ziti controller and router, 0.26.10.

I’m going to respin the ziti environment shortly to make a few other changes and simplifications as well, and hopefully this will do it. I’ll confirm later today or Monday. Thank you very much for the suggestion!
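
For reference, this is roughly the apis list I’m adding back under the web section of my hand-written controller config (matching what the quickstart generates, as far as I can tell):

    apis:
      - binding: edge-management
        options: { }
      - binding: edge-client
        options: { }
      - binding: fabric
        options: { }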

@scareything – ziti-edge-tunnel 0.20.2 is working well, thanks for the release and the bins!

@TheLumberjack – fabric APIs are responding now via the CLI. I must have just goofed a cut-and-paste on that one, since I’m explicitly declaring the config in the Nix service module rather than generating it via the CLI or a ziti-cli-functions.sh function call. Thank you!

I’ve just learned how to codify an example service setup through the CLI on the controller and router during bootstrap, instead of doing it manually via the UI post-bootstrap. Then the only thing needed to make it work after spinning up a test environment is to enroll some clients with the “vpn-users” role attribute set (in this example). Nice!
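
For anyone following along, the client enrollment side looks roughly like this (identity and file names are just examples from my own setup):

ziti edge create identity user ken-laptop -a 'vpn-users' -o /tmp/ken-laptop.jwt   # on the controller, after zitiLogin
ziti-edge-tunnel enroll -j ken-laptop.jwt -i ken-laptop.json                      # on the client machine
sudo ziti-edge-tunnel run -i ken-laptop.json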

Looking forward to getting more proficient with this tech! Have a good weekend!
