What is the formula for directing all traffic to Google Workspace through a single egress IP?

So if I were to diagram this, it's something like this that you're trying to do?

You're logged into ziti there since you can list your edge routers. Can you try policy-advisor again? ziti edge policy-advisor services -q (-q returns quieter output)

Here is my output.

$ ziti edge policy-advisor services -q
ERROR: hello_service
  - Service is not accessible by any identities. Adjust service policies.

Forgive the diagram, my battery is almost dead.

[image: network diagram]

Yep. You need to make a "dial" policy and authorize the identities you want to be able to act as a client to the http service, and you need to make a "bind" policy to authorize the identities you want to be able to act as the 'dark proxy' side (the server offload point).

The "bind" policy should reference the linux tunneler identity from the diagram as shown, and the "dial" policy would reference your macOS tunneler.
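For reference, those two policies can be created with the ziti CLI roughly like this. This is a sketch: the policy names are made up, and the identity names (`atreyu` as the dialing client, `helloworld.jptech.corp` as the hosting side) are taken from later in this thread, so substitute your own.

```shell
# Authorize the client identity to dial (connect to) the service.
ziti edge create service-policy hello-dial-policy Dial \
  --service-roles '@hello_service' \
  --identity-roles '@atreyu'

# Authorize the hosting identity to bind (offload/serve) the service.
ziti edge create service-policy hello-bind-policy Bind \
  --service-roles '@hello_service' \
  --identity-roles '@helloworld.jptech.corp'
```

You can also use role attributes (e.g. `#hello-clients`) instead of `@`-prefixed names so the policies don't need to change as identities come and go.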

I’m really sure I did that. Note the two policies here. One of them is a bind, and it goes to the identity that hosts the service. The dial goes to the identity that is the client, the laptop.


It looks like you need to add your hello_service to your Dial Policy’s SERVICE ROLE ATTRIBUTES.

Your Bind Policy’s IDENTITY ROLE ATTRIBUTES field is empty (which identity is providing the service).


Gotcha. So the @hello_service will be attached to both the bind and dial, whereas the identity that can actually reach the service is attached to the bind policy, and the client trying to dial the service is attached to the dial policy. Is that correct?

  • I can now dig hello.ziti @100.64.0.2
  • I can now ping hello.ziti if I fix my hosts file
  • I can’t curl https://hello.ziti -k.
$ curl -v https://hello.ziti -k
*   Trying 100.64.0.3:443...
* connect to 100.64.0.3 port 443 failed: Connection refused
* Failed to connect to hello.ziti port 443 after 84 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to hello.ziti port 443 after 84 ms: Couldn't connect to server

So, now I get these warnings.

$ ziti edge policy-advisor services -q
ERROR: helloworld.jptech.corp (0) -> hello_service (0) Common Routers: (0/0) Dial: N Bind: Y
  - Identity has no edge routers assigned. Adjust edge router policies.
  - Service has no edge routers assigned. Adjust service edge router policies.

ERROR: atreyu (0) -> hello_service (0) Common Routers: (0/0) Dial: Y Bind: N
  - Identity has no edge routers assigned. Adjust edge router policies.
  - Service has no edge routers assigned. Adjust service edge router policies.

I have since added edge router and service edge router policies.

$ ziti edge policy-advisor services -q
OKAY : helloworld.jptech.corp (1) -> hello_service (1) Common Routers: (1/1) Dial: N Bind: Y

OKAY : atreyu (1) -> hello_service (1) Common Routers: (1/1) Dial: Y Bind: N


note: this edge_router_policy didn’t want to autocomplete… so perhaps it’s incorrect. I had to type @ to get it to populate. As you can see when I go back in it is using a GUID instead of the name.
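For anyone following along, the two policies I added look roughly like this. A sketch, not my exact commands: the policy names are made up, and `#all` is a convenient catch-all role; scope it down to specific routers if you prefer.

```shell
# Give identities access to the edge router(s).
ziti edge create edge-router-policy hello-erp \
  --edge-router-roles '#all' \
  --identity-roles '#all'

# Allow the service to be routed over the edge router(s).
ziti edge create service-edge-router-policy hello-serp \
  --edge-router-roles '#all' \
  --service-roles '@hello_service'
```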

Any idea why I can’t curl but I can ping the service?

jp@falkor ~/git/jp/ansible $ ping hello.ziti
PING hello.ziti (100.64.0.3): 56 data bytes
64 bytes from 100.64.0.3: icmp_seq=0 ttl=255 time=2.919 ms
64 bytes from 100.64.0.3: icmp_seq=1 ttl=255 time=1.267 ms
^C
--- hello.ziti ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.267/2.093/2.919/0.826 ms

jp@falkor ~/git/jp/ansible $ curl -v https://hello.ziti
*   Trying 100.64.0.3:443...
^C

jp@falkor ~/git/jp/ansible $ openssl s_client -connect 100.64.0.3:443
001E11E001000000:error:8000003D:system library:BIO_connect:Connection refused:crypto/bio/bio_sock2.c:114:calling connect()
001E11E001000000:error:10000067:BIO routines:BIO_connect:connect error:crypto/bio/bio_sock2.c:116:
connect:errno=61

Curl will actually send traffic. Ping only tests the IP address.

I would look at the dialing and hosting identity’s logs to understand what happened. I expect a config is incorrect; often it’s at the hosting side.

Here is what I get for journalctl -f

Aug 02 15:27:10 helloworld ziti-edge-tunnel[422]: (422)[      943.639]    WARN ziti-sdk:connect.c:367 connect_timeout() conn[0.0/Binding] bind timeout: no suitable edge router
Aug 02 15:27:20 helloworld ziti-edge-tunnel[422]: (422)[      953.842]    WARN ziti-sdk:connect.c:367 connect_timeout() conn[0.0/Binding] bind timeout: no suitable edge router
Aug 02 15:27:25 helloworld ziti-edge-tunnel[422]: (422)[      958.504]   ERROR tunnel-cbs:ziti_hosting.c:203 on_hosted_tcp_server_connect_complete() hosted_service[hello_service], client[falkor]: connect to tcp:::1:8080 failed: connection refused
Aug 02 15:27:31 helloworld ziti-edge-tunnel[422]: (422)[      964.054]    WARN ziti-sdk:connect.c:367 connect_timeout() conn[0.0/Binding] bind timeout: no suitable edge router
Aug 02 15:27:41 helloworld ziti-edge-tunnel[422]: (422)[      974.255]    WARN ziti-sdk:connect.c:367 connect_timeout() conn[0.0/Binding] bind timeout: no suitable edge router
Aug 02 15:27:51 helloworld ziti-edge-tunnel[422]: (422)[      984.463]    WARN ziti-sdk:connect.c:367 connect_timeout() conn[0.0/Binding] bind timeout: no suitable edge router

Specifically, the "connect to tcp:::1:8080 failed: connection refused" line is indicating a refusal. I don’t know where to look for it though.

If I curl on the local lan I get Aug 02 15:29:56 helloworld nginx[2420]: 10.1.1.100 - - [02/Aug/2023:15:29:56 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.88.1" "-" and a response.

Ahhh... so I suspect you used "localhost:8080" for your "server side", and what's happening is you're being bitten by the "localhost is IPv6 and IPv4" issue we have seen before.

I expect your web server is not listening on IPv6 - but is instead listening on IPv4...

Change the bind-side config to use 127.0.0.1 instead of localhost and I betcha it'll work...
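If you set the service up with a host.v1 config, that change is just the address field. A hedged sketch (the config name here is made up; use your existing config's name and delete/re-create or update it accordingly):

```shell
# Host-side config targeting IPv4 loopback explicitly instead of "localhost".
ziti edge create config hello-host.v1 host.v1 \
  '{"protocol":"tcp","address":"127.0.0.1","port":8080}'
```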

EDIT:

to explain a tiny bit more... "localhost" is special on most OSes, and oftentimes IPv6 is chosen over IPv4. When you use 127.0.0.1, which is basically "the same" as localhost, you're telling the computer that you specifically want to use IPv4, and the OS doesn't have to guess whether you mean IPv6 localhost (::1, like you see above) or IPv4 localhost (127.0.0.0/8)...
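You can see the ambiguity with a quick resolver check (plain Python, nothing ziti-specific):

```python
import socket

# "localhost" may resolve to IPv6 (::1), IPv4 (127.0.0.1), or both;
# which address a client tries first is OS- and resolver-dependent.
localhost_families = {
    info[0]
    for info in socket.getaddrinfo("localhost", 8080, proto=socket.IPPROTO_TCP)
}

# "127.0.0.1" is unambiguous: it can only ever be IPv4.
explicit_families = {
    info[0]
    for info in socket.getaddrinfo("127.0.0.1", 8080, proto=socket.IPPROTO_TCP)
}

print(localhost_families)  # may contain AF_INET, AF_INET6, or both
print(explicit_families)   # only AF_INET
```

If `localhost_families` contains AF_INET6 on your box, a server listening only on 0.0.0.0 (IPv4) will refuse a "localhost" connection that got routed to ::1, which is exactly the `tcp:::1:8080 ... connection refused` symptom in the log above.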

Yahtzee! That was it.

Ok, last question on this little jaunt. Anywhere in here, is there SSL offloading happening or possible? Is OpenZiti capable of adding SSL as part of the proxy?

In the service definition, I have "encryptionRequired":true, which I am guessing doesn’t actually turn on encryption for the connection.

At this time, OpenZiti does not support the same kind of TLS offloading I think you’re thinking of. But I have a lot more to add on this… Hopefully some or all of it is helpful/relevant…

OpenZiti, by default, provides end-to-end encryption just by using OpenZiti (that’s the "encryptionRequired":true bit, don’t make that false! :slight_smile: ). So while you are curl’ing to “your.intercept:8080” on the client, once that traffic hits OpenZiti (which is fast, since it’s local to your OS), your traffic gets end-to-end encrypted all the way to the far side. Now, that’s not “SSL offloading”, and it’s not the same thing as OpenZiti providing an https endpoint whatsoever. So I’m not sure if that’s what you are interested in, or if you want the “lock” icon in the browser and want to curl to https://your.intercept. Your traffic is encrypted the whole way to the far side, but it’s “not TLS/SSL”. Same, but different…

So as an option, if you want, you can have (in your case) NGinx present an https endpoint and let NGinx offload TLS. That’s straightforward enough: just change http to https and off you go. That’s super cool because then, if you make a cert from a “usually trusted CA” like letsencrypt, you can present that cert at the NGinx side and have TLS the whole way too, getting the little padlock in the browser etc. That might be what you’re after. Lots of people like that because you don’t actually need to present the server to the internet, but you still get ‘actual’ TLS the whole way. (And still, don’t turn off that great end-to-end encryption… never turn it off is our recommendation.)
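A minimal NGinx server block for that looks something like this. This is a sketch: the server name and the cert/key paths (typical letsencrypt locations for a made-up domain) are assumptions, not from this thread.

```nginx
server {
    listen 443 ssl;
    server_name hello.example.com;

    # Cert/key issued by Let's Encrypt (paths are illustrative).
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        root /usr/share/nginx/html;
    }
}
```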

And another option is using something like Caddy as your “server”, which then works as both NGinx and ziti-edge-tunnel combined. You can see it on their download page (Download Caddy) and read about it over on the readme: GitHub - openziti-test-kitchen/ziti-caddy: Zitified Caddy server

And finally, you can look at the nginx module we made too and see what you think of that if you’re interested GitHub - openziti/ngx_ziti_module: An NGINX module that allows OpenZiti to front upstream servers


Ok, I see. Well… that’s gonna take a bit of pondering then. Here’s my challenge: I have been using cloudflared (argo) tunnels for-ev-er, and they do the SSL offloading, so web apps have not needed SSL on the backend. This is great. This is what I am trying to replace. So what you describe sounds like I would need something TLS/SSL-capable on the backend. What Ziti does is eliminate the need for firewall holes and give me policy-based access.

Time to do some more research. But, my PoC is complete. Time to write it up and try to do it from my docs.

Thanks for all the help, this was a LONG thread!

I mean, you’re using NGinx from what I can see. I’d just move to that and your problem is solved. You just need to get yourself a wildcard cert from Let's Encrypt (pretty easy process if you can update a DNS entry; not ‘easy’ if you don’t know how, but not super hard stuff), then you can make your wildcard cert and enjoy true NGinx-offloaded TLS wherever you want :slight_smile:
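The DNS-challenge flow for a wildcard cert is roughly the following. A sketch using certbot's manual DNS-01 challenge (the domain is a placeholder; a DNS-plugin flow for your provider automates the TXT record step):

```shell
# Request a wildcard cert via the DNS-01 challenge. Certbot prints a TXT
# record value to publish at _acme-challenge.example.com before continuing.
certbot certonly --manual --preferred-challenges dns \
  -d '*.example.com' -d 'example.com'
```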

That’s what I do for zrok (per the install instructions) and it works nicely. It’s also super handy to have for browzer too when you get around to that :slight_smile:

The NGinx container was just the simplest HTTP hello world; it’s kind of a go-to because it’s consistent. So the PoC was to put something behind ziti to see how it works and what it takes to set it up. Then it will be to see how durable it is, and what the technical cost is to implement it. In reality, it could be a variety of TCP connections, from things like RDP to SQL to Firebird databases, that might need to truck across the fabric. HTTP is just about the simplest to test with. One of the big services I use is Apache Guacamole, and I love the fact that cloudflared puts it behind an SSL cert and authentication mechanism, so it checks a LOT of boxes for me. If I were to put Guacamole behind ziti, I have to proxy it since it’s not https-capable on its own. I run it in docker, btw. It’s not undoable, but having an ingress controller or proxy in front just adds complexity.

Makes sense. It’s something we have discussed as a good future enhancement since this is not the first time it’s come up. Excellent feedback! Thanks

I forgot to mention. Part of my PoC is to determine if the technical cost is greater than the cost of having it hosted and managed. There are a LOT of ZTNA-as-a-service companies out there. But I like FOSS. But I also like my services to be cattle and not pets. I haven’t decided how to make Ziti go moo.

Don’t forget that CloudZiti is one of those ZTNA-as-a-service offerings, provided by NetFoundry… :smiley: shameless plug complete

I hear you though. I’m very interested in any stumbling blocks you might have around getting the server to not be a pet. If you need help there, we’re here to help. I’m certain we can get you there…

EDIT: forgot shameless link to NetFoundry Console

Ha, yeah, for sure, NetFoundry and the guy I talked to really got me interested. If NetFoundry could do the SSL offload then my search might be complete. Do you know if it does?

No, that’s not something we offer generally. I do believe Browzer does do this, but it’ll be "limited" to only http services. That’s usually what people actually want though, so it might be what you’re looking for. I think it’s still in active beta, and I’m not entirely sure if it’s available for teams.

Maybe if I mention @tburtchell it’ll magically alert him and he can comment on the current status of Browzer in CloudZiti