Remote edge router configuration file changes

I am working through the early stages of a multi-cloud deployment.

I picked up a few more insights on how to do this from the following post.

However, it also raised a few more points that I thought I'd ask for guidance on, especially given that the edge router will be located on a different server from the controller.

I have copied the main elements of the new edge router YAML file below.


ctrl:
  endpoint:             tls:**ip-address-of-controller**:6262

link:
  dialers:
    - binding: transport
  listeners:
    - binding:          transport
      bind:             tls:0.0.0.0:10080
      advertise:        tls:**ip-address-of-new-server**:10080
      options:
        outQueueSize:   1000

listeners:
# bindings of edge and tunnel require an "edge" section below
  - binding: edge
    address: tls:0.0.0.0:8500
    options:
      advertise: ip-address-of-new-server:8500
      connectTimeoutMs: 60000
      getSessionTimeout: 60s
  - binding: tunnel
    options:
      mode: host #tproxy|host


edge:
  csr:
    country: US
    province: NC
    locality: Charlotte
    organization: NetFoundry
    organizationalUnit: Ziti
    sans:
      dns:
        - instance-20220416-1603
        - localhost
      ip:
        - "127.0.0.1"
        - "ip-address-of-new-server"

Some questions are:

  1. Link section: I am assuming that this is what allows the edge router to dial the controller... so in the bindings you specify the IP address of the new server to advertise on.

Is this correct?
The transport binding is used... can you provide more details on what this specific option defines? I have seen it a few times but don't really understand it.

  2. Listeners section: I am assuming that this is what allows incoming connections from the controller...

Is it correct to specify the IP address of the new server?

  3. Edge section: I am assuming that this somehow connects with the certificate of the controller because of the CSR... but I'm not 100% sure.

Is it correct to specify the IP address of the new server?
Can you provide a few more details on what this specific section is defining, to help my broader understanding?

Other things I want to clarify:

Port 6262: this is always allocated to the controller endpoint.

My understanding is that this port would need to be open on the firewall so the second server can connect to the controller.
I don't remember opening this port for the quickstart example.
i.e. it is what allows connections from the edge router to the controller.
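My current picture of the two sides of port 6262, sketched as config (the controller-side layout here is my assumption and may differ between OpenZiti versions; the router-side fragment mirrors the file above):

```yaml
# Controller side (the controller's own config file) -- layout is my
# assumption and may differ between versions:
ctrl:
  listener: tls:0.0.0.0:6262      # controller accepts router connections here

# Router side (this new edge router's config file):
ctrl:
  endpoint: tls:**ip-address-of-controller**:6262   # router dials out to 6262
```

Since the router dials out, I'd expect only the controller's host to need an inbound firewall rule for 6262 (assuming outbound traffic is allowed by default) -- and in the quickstart everything runs on one host, which would explain never opening the port.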

Port 10080: this is the port used to establish links between routers on different servers.

My understanding is that this port would need to be open on the firewall for both servers.
Could this also be used if it's two routers on the same server... does this make any sense?
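To make that concrete, here is how I picture the two link sections (placeholders in `**`; if I have this right, only the router that hosts the link listener needs the inbound firewall rule):

```yaml
# Router A -- hosts the link listener, so 10080 must be open inbound on A:
link:
  dialers:
    - binding: transport
  listeners:
    - binding:   transport
      bind:      tls:0.0.0.0:10080
      advertise: tls:**ip-address-of-router-a**:10080

# Router B -- only dials links out; no link listener, so no inbound rule:
link:
  dialers:
    - binding: transport
```

For two routers on the same server, the same mechanism should work over the local interface, but each link listener would need its own distinct port.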

I found the following comments, which help answer some of my questions...

I also noticed the following, which actually helps answer another of the questions.

When you have a tunneler binding... you need to use the host mode.

Alternatively... the link uses the underlying "transport", which could be either TCP or UDP.

does that make sense?

link:
  dialers:
    - binding: transport
  listeners:
    - binding:          transport

I worked this one out... the port is only used to host the link, so it does not need to be opened on the firewall.

You can use host, tproxy or proxy for the mode. If you only want to host services, not intercept client traffic, then host mode is appropriate.
tproxy is full intercept and hosting, and relies on the tproxy capability of Linux networking.
proxy lets you specify which services you want to intercept and on which port you want to intercept them. It's not 'transparent' in that you can't intercept arbitrary IPs/hostnames; you can only do direct proxying.
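A sketch of the three modes as tunnel listener options (the `services` entry under proxy mode is my assumption of the syntax -- "my-service" is a hypothetical service name, and the exact format may vary by version; alternatives are commented out since only one tunnel binding applies at a time):

```yaml
listeners:
  # host: only host (terminate) services, no interception
  - binding: tunnel
    options:
      mode: host

  # tproxy: full transparent intercept + hosting (Linux tproxy capability)
  # - binding: tunnel
  #   options:
  #     mode: tproxy

  # proxy: direct proxying of named services on chosen local ports
  # ("my-service" is a hypothetical service name)
  # - binding: tunnel
  #   options:
  #     mode: proxy
  #     services:
  #       - "my-service:8080"
```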

Note that if you want to tunnel traffic on something other than a Linux server, you'd generally use a standalone tunneler, like the Windows or Mac edge clients.
