DNS routing inside Kubernetes with the edge tunneler DaemonSet

Previously, I was using the NetFoundry console, but now I'm setting up OpenZiti on Google Kubernetes Engine (GKE). I have successfully installed the Controller and Router and configured ingress for both, with access allowed through port 443.

Issue: I need to establish a connection to a private database hosted on AWS from GCP.

In the NetFoundry setup, I was able to accomplish this. The configuration involved:

  • Running the application on ECS/VM to access the database.
  • Installing a router on the VM.
  • Adding the database's custom DNS in Route 53/Cloud DNS with an intercept IP.
  • Routing to the router by adding entries in the route table (AWS) or route management (GCP).
  • Updating the security group/firewall to allow the required ports.
  • Permitting incoming traffic from the router to the application container/VM.
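For the "custom DNS with an intercept IP" step above, here is a hedged sketch of what that Route 53 change can look like via boto3. The zone ID, hostname, and intercept IP are placeholders, and `build_intercept_change_batch` is a helper name invented for illustration, not part of the original setup:

```python
def build_intercept_change_batch(fqdn, intercept_ip, ttl=300):
    """Build the Route 53 ChangeBatch that points the DB hostname at the intercept IP."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": fqdn,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": intercept_ip}],
            },
        }]
    }

# Applying it would look roughly like this (placeholder zone ID and values):
# import boto3
# r53 = boto3.client("route53")
# r53.change_resource_record_sets(
#     HostedZoneId="ZXXXXXXXXXXXXX",
#     ChangeBatch=build_intercept_change_batch("db.internal.example.", "100.64.0.5"),
# )
```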

Process Flow:

  1. The application accesses the database URL.
  2. The URL is resolved in Route 53.
  3. The IP is routed to the Edge router.
  4. OpenZiti intercepts and resolves this IP.
  5. The request is returned to the application.
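From the application's point of view, steps 1–3 above are an ordinary resolve-and-connect; a minimal sketch (the hostname is a placeholder, and IPv4 is assumed for simplicity):

```python
import socket

def connect_to_db(hostname, port):
    """Steps 1-2: the app resolves the DB URL (Route 53 answers with the
    intercept IP). Step 3: the route table sends traffic for that IP to
    the edge router, where OpenZiti intercepts it; the app just connects."""
    ip = socket.getaddrinfo(hostname, port, family=socket.AF_INET,
                            proto=socket.IPPROTO_TCP)[0][4][0]
    return ip, socket.create_connection((ip, port), timeout=5)
```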

I'm attempting to replicate this setup from GKE using a tunneler. I have installed the Kubernetes DaemonSet as per the instructions found here: Kubernetes Node Daemonset | OpenZiti and am currently running the tunneler on the GKE cluster.

Goal: I need to access the database from a pod running inside the cluster.

If it worked from the NetFoundry console, there's functionally no difference, presuming you've configured everything correctly. I understand what you're trying to do, but I don't see what the problem or question is. Can you expand on the problem?


Previously, when using NetFoundry, I implemented the following setup: I mapped the internal URL's DNS to resolve to an intercept IP on NetFoundry, and then routed this to AWS. I avoided using the IP from Ziti (10.1.xx.x) because it could change whenever I made updates in the console; instead, I opted for a static IP to ensure consistent DNS resolution and routing.

Now, I'm attempting to implement a similar configuration inside Kubernetes for the first time, and I'm unsure about the process for routing to a tunneler for DNS resolution. In the previous setup, I added a routing table to a router, which then handled the resolution. However, in this Kubernetes environment, it's unclear how to achieve the same result with a tunneler.

Additionally, I'm exploring the possibility of using an SDK to connect to the database so that I can avoid the DNS and routing work entirely. Unfortunately, I haven't found any information on connecting to PostgreSQL using a Python SDK.

Did you have a look at this? Kubernetes Node Daemonset | OpenZiti.

Sorry, I missed a bit in your earlier comment about it. Can you share your DNS configs related to the Ziti resolver?

On the SDK side, the easiest way to get started is with the Python SDK and its monkeypatch; you just install the openziti package:

pip install openziti

And you can use something like this:

import logging

import openziti
import pg8000

# `config` (DB settings) and `app` (the Flask app) are defined elsewhere in the application.

def get_db_connection_pg8000():
    try:
        with openziti.monkeypatch():
            conn = pg8000.connect(
                user=config['database']['user'],
                password=config['database']['password'],
                host=config['database']['host'],
                port=config['database']['port'],
                database=config['database']['dbname']
            )
            return conn
    except Exception:
        logging.error("Error connecting to the database", exc_info=True)
        raise

@app.route('/database/character/<string:name>', methods=['GET'])
def get_character(name):
    global config

    try:
        # Get OpenZiti Data
        with openziti.monkeypatch():
            # Connect to a Zitified Database
            conn = get_db_connection_pg8000()
            cursor = conn.cursor()
            cursor.execute("SELECT * FROM characters WHERE name ILIKE %s", (name,))
            character = cursor.fetchone()
            conn.close()

            if character:
                return jsonify({
                    "id": character[0],
                    "name": character[1],
                    "age": character[2],
                    "anime": character[3]
                })
            else:
                return jsonify({"error": "Character not found"}), 404

    except Exception as e:
        logging.error('General error: %s', str(e))
        return jsonify({'error': 'An unexpected error occurred'}), 500

That's actually part of my code (a Flask server connecting to a DB to get data). I'm using the pg8000 driver in Python.
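One thing the snippet above doesn't show: monkeypatch needs an enrolled identity to dial services. As far as I know, the Python SDK picks identities up from the `ZITI_IDENTITIES` environment variable; a sketch under that assumption (the path is a placeholder, and the lazy import is only so the variable is set before the SDK loads):

```python
import os

# Placeholder path; point this at your own enrolled identity JSON file.
os.environ.setdefault("ZITI_IDENTITIES", "/path/to/identity.json")

def patched_scope():
    """Return the monkeypatch context manager; connect() calls made inside
    it are resolved and dialed over the Ziti overlay."""
    import openziti  # imported after ZITI_IDENTITIES is set
    return openziti.monkeypatch()
```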


Oh cool, thanks for your code snippet. Let me try it.

I think I fixed it by adding proper ingress for the controller and router so they can be accessed publicly, and then installing routers to forward non-Ziti requests for resolution.
To avoid all this, I'm going to try the SDK to connect to the DB; it will get rid of all the DNS and routing work.

It seems to be working in GKE by enabling NodeLocal DNSCache. It is the preferred way with the ziti-edge-tunnel DaemonSet anyway, and it is not enabled by default when creating a cluster; not sure if you had it enabled. In short, each copy of the ziti-edge-tunnel DaemonSet has its own resolver, and they are not synchronized. Thus, with a large number of services (i.e., FQDNs to resolve), the same service may be resolved to a different IP on different nodes. With NodeLocal DNSCache, the node-local resolver is preferred. We are still looking at how we can ease this pain, but zitification is definitely the best way to avoid it, if it's possible for you.
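A toy illustration of the divergence described above, with each per-node resolver reduced to a first-come-first-served table over a made-up intercept pool (the pool addresses and FQDNs are invented; the real tunneler's allocation is more involved):

```python
def make_node_resolver(ip_pool):
    """Each tunneler hands out intercept IPs in the order services are first seen."""
    pool = iter(ip_pool)
    table = {}

    def resolve(fqdn):
        if fqdn not in table:
            table[fqdn] = next(pool)
        return table[fqdn]

    return resolve

# Two nodes run independent copies of the tunneler; nothing synchronizes them.
node_a = make_node_resolver(["100.64.0.1", "100.64.0.2"])
node_b = make_node_resolver(["100.64.0.1", "100.64.0.2"])

node_a("other-svc.ziti")           # node A happened to see another service first...
ip_a = node_a("db.internal.ziti")  # ...so the DB gets the second pool IP here
ip_b = node_b("db.internal.ziti")  # but the first pool IP on node B
```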

Btw, NodeLocal DNSCache is enabled; let me explore it.
Does zitification mean using the SDK?

Yes, "zitification" was coined to describe embedding Ziti using the SDKs.
