OpenZiti's new MCP Gateway & LLM Gateway

We just open-sourced two projects we've been working on at NetFoundry: an MCP gateway and an LLM gateway. Both are built on OpenZiti, and they solve two sides of the same problem.

The MCP gateway gives AI assistants secure access to internal MCP tool servers - filesystem, databases, GitHub, whatever you're running - without exposing public endpoints. It aggregates multiple backend servers into a single connection, namespaces the tools (so your "read_file" from the filesystem backend doesn't collide with "read_file" from somewhere else), and lets you filter which tools each client can see. Filtering isn't a runtime check - tools a client isn't allowed to see are never added to its registry in the first place. The whole thing runs over an OpenZiti overlay, so nothing listens on a public port.
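To make the aggregation idea concrete, here's a minimal sketch of build-time namespacing and filtering - this is illustrative pseudocode in Python, not the gateway's actual implementation, and all names are made up:

```python
# Hypothetical sketch: merge tools from multiple MCP backends into one
# namespaced registry, dropping anything the client isn't allowed to see.
# Not the gateway's actual code.

def build_registry(backends, allowed=None):
    """backends: dict of backend name -> list of tool names
    allowed:  optional set of namespaced names this client may use"""
    registry = {}
    for backend, tools in backends.items():
        for tool in tools:
            namespaced = f"{backend}.{tool}"  # avoids read_file collisions
            if allowed is None or namespaced in allowed:
                registry[namespaced] = (backend, tool)
    # Filtered tools were never added - there's nothing to check at
    # runtime, because they don't exist in this client's registry.
    return registry

backends = {
    "filesystem": ["read_file", "write_file"],
    "github": ["read_file", "create_issue"],
}
reg = build_registry(backends, allowed={"filesystem.read_file", "github.create_issue"})
print(sorted(reg))  # ['filesystem.read_file', 'github.create_issue']
```

The point of doing this at registry-build time rather than per-call is that a filtered tool can't be invoked even by a misbehaving client - it was never advertised.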

The LLM gateway is an OpenAI-compatible proxy that routes requests across OpenAI, Anthropic, and Ollama. The part that's different from LiteLLM or Portkey is the security model - the gateway can run with zero listening ports, clients connect through the overlay with cryptographic identity, and you can reach Ollama instances on other machines without opening ports or setting up a VPN. It also has semantic routing that automatically picks the best model for each request using a three-layer cascade (keyword heuristics, embedding similarity, and an optional LLM classifier), plus weighted load balancing across multiple Ollama instances.
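The three-layer cascade can be sketched roughly like this - a hypothetical illustration of the layering, with made-up keywords, model names, and threshold, not the gateway's real routing code:

```python
# Hypothetical sketch of a three-layer routing cascade: cheap keyword
# heuristics first, then embedding similarity, then an optional LLM
# classifier as the last resort. All names/thresholds are illustrative.

def route(prompt, embed=None, classify=None, threshold=0.8):
    # Layer 1: keyword heuristics - free, handles the obvious cases.
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "traceback", "compile")):
        return "code-model"
    if any(k in lowered for k in ("prove", "integral", "equation")):
        return "math-model"

    # Layer 2: embedding similarity against per-category exemplars,
    # trusted only above a confidence threshold.
    if embed is not None:
        category, score = embed(prompt)
        if score >= threshold:
            return category

    # Layer 3: optional LLM classifier, used only when the cheaper
    # layers couldn't decide.
    if classify is not None:
        return classify(prompt)

    return "general-model"

print(route("Please fix this Traceback in my script"))  # code-model
```

Each layer only runs when the one above it fails to produce a confident answer, so most requests never pay for an embedding lookup or a classifier call.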

Both projects and how they fit together: https://openziti.ai

MCP Gateway: github.com/openziti/mcp-gateway
LLM Gateway: github.com/openziti/llm-gateway
