Hi guys!
The idea for my project is to simulate an industrial network in a very simple way (just for a basic demo) and to demonstrate the benefits of adding Ziti to the infrastructure.
To keep it realistic, the idea is not to change anything in the HMI code (a simple Python script) and, of course, not to change anything in the PLC code (another Python script).
The structure is the following:
devices
│   compose.yml
│
├───PLCsiemens
│   │   Dockerfile
│   │   PLCsiemens.py
│   │   requirements.txt
│
└───SiemensClient
    │   Dockerfile
    │   SiemensClient.py
    │   requirements.txt
and the compose file is the following:
networks:
  testnet:
    name: testnet
    driver: bridge
    ipam:
      config:
        - subnet: ${NET_ID:-172.19.0.0/16}

services:
  plcsiemens:
    build:
      context: ./PLCsiemens
      dockerfile: Dockerfile
    environment:
      PLCSIEMENS_PORT: ${PLCSIEMENS_PORT:-102}
      PROCESSING_FAILURE_RATE: ${PROCESSING_FAILURE_RATE:-0.15}
      QUALITY_ASSURANCE_FAILURE_RATE: ${QUALITY_ASSURANCE_FAILURE_RATE:-0.18}
      DISCARDING_OR_SENDING_FAILURE_RATE: ${DISCARDING_OR_SENDING_FAILURE_RATE:-0.11}
      DEFECT_RATE: ${DEFECT_RATE:-0.24}
      MEMORY_AREA_SIZE: ${MEMORY_AREA_SIZE:-8}
      DATA_BLOCK_NUMBER: ${DATA_BLOCK_NUMBER:-5}
    container_name: ${PLCSIEMENS_CONTAINER_NAME:-plcsiemens}
    networks:
      testnet:
        ipv4_address: ${PLCSIEMENS_ADDRESS:-172.19.1.1}
    command: ["python3", "PLCsiemens.py"]

  siemensclient:
    build:
      context: ./SiemensClient
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    depends_on:
      - ${PLCSIEMENS_CONTAINER_NAME:-plcsiemens}
    environment:
      PLCSIEMENS_ADDRESS: ${PLCSIEMENS_ADDRESS:-172.19.1.1}
      PLCSIEMENS_PORT: ${PLCSIEMENS_PORT:-102}
      PLCSIEMENS_RACK: ${PLCSIEMENS_RACK:-0}
      PLCSIEMENS_SLOT: ${PLCSIEMENS_SLOT:-1}
    container_name: ${HMISIEMENS_CONTAINER_NAME:-hmisiemens}
    networks:
      testnet:
        ipv4_address: ${HMISIEMENS_ADDRESS:-172.19.1.2}
    command: ["python3", "SiemensClient.py"]
Everything works fine up to this point.
Now, what I want to do is create a new directory (NOT inside devices) that defines all the Ziti components (I've already done it for the controller and it works, but let me ask you starting from the beginning).
Obviously, in the compose.yml of the Ziti components I'll declare testnet as external, making all the Ziti components able to join the network.
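Something like this is what I have in mind (a minimal sketch, assuming the devices project creates testnet first):
networks:
  testnet:
    external: true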
Reading the docs, I found out (I think) that I need these components:
- Ziti controller;
- Ziti edge tunneler server-side;
- Ziti edge tunneler client-side;
In particular, I want the server-side tunneler to be able to intercept all incoming traffic for the PLC and check whether it is authorized or not.
I've been able to add the Ziti controller to the network (my PLC and HMI containers can correctly ping the Ziti controller), but I'm struggling with the Ziti tunnelers' configuration: I can't figure out which Docker image to use or how to configure it to intercept traffic.
I hope I've been clear despite my imperfect English, and I hope you'll help me!!
Thanks guys!
I think this will work well for a demo! Your sketch helps me understand your filesystem layout and desired Compose project structure.
You have shown the devices and their applications, and now we need to add an OpenZiti network and a tunneling (proxy) container for the client and server devices. I will help with examples for both.
Here is an outline of a Docker Compose project's filesystem layout that could work. As you suggested, you can keep the project's files in various folders.
- ./ziti-ctrl/compose.yml: an OpenZiti controller/router running in Docker
- ./devices/compose.yml: applications in Docker connecting through Ziti
- ./devices/PLCsiemens: server app listening on 102/tcp
- ./devices/SiemensClient: client app that will initiate connections to the server app's listener
You can add the two new tunneler containers (proxies) for the industrial device containers to ./devices/compose.yml or to a separate YAML file.
You suggested the OpenZiti controller/router could be in an "external" network. That would involve two separate Docker Compose projects, which is possible but unnecessary and adds slight complexity.
Here's an example for ./ziti-ctrl/compose.yml:
services:
  ziti-ctrl:
    image: openziti/ziti-cli
    command: >
      edge quickstart
      --home /home/ziggy/quickstart
      --ctrl-address ziti-controller
      --ctrl-port 1280
      --router-address ziti-router
      --router-port 3022
      --password ziggy123
    user: "1000:1000"
    working_dir: /home/ziggy
    environment:
      HOME: /home/ziggy
    volumes:
      - ./config:/home/ziggy
    networks:
      testnet:
        aliases:
          - ziti-controller
          - ziti-router
    expose:
      - 1280
      - 3022
This assumes your login UID on the Docker host is 1000.
Important: run mkdir ./ziti-ctrl/config before starting this container to ensure the ./config directory is writeable by your UID. Of course, you may instead use a Docker volume if you have no need to conveniently edit the files in ./config.
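For example:
mkdir -p ./ziti-ctrl/config   # pre-create so Docker does not create the bind mount root-owned
ls -lnd ./ziti-ctrl/config    # the owner UID:GID should match your `id -u` and `id -g`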
I will pause here and wait to hear from you to ensure we're on the right track. Here's an overview of the next steps:
- run ziti-ctrl and check logs to see it is healthy
- run ziti CLI commands to build the service
- add tunneler (proxy) containers for client and server
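For the first step, the commands could look like this (a sketch; it assumes COMPOSE_FILE is set as described in the next post):
docker compose up ziti-ctrl --detach
docker compose logs ziti-ctrl --follow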
Reminder to configure your Docker Compose project to use the two YAML files, e.g.,
export COMPOSE_FILE="./ziti-ctrl/compose.yml:./devices/compose.yml"
It seems like ziti-ctrl can't initialize the PKI because of a permission problem. Here's the output:
Attaching to ziti-ctrl-1
ziti-ctrl-1 | {"file":"github.com/openziti/ziti/ziti/cmd/edge/quickstart.go:103","func":"github.com/openziti/ziti/ziti/cmd/edge.(*QuickstartOpts).run","level":"info","msg":"permanent --home '/home/ziggy/quickstart' will not be removed on exit","time":"2024-08-28T12:58:19.964Z"}
ziti-ctrl-1 | emitting a minimal PKI
ziti-ctrl-1 | error: cannot sign: failed saving generated bundle: failed writing bundle root-ca within CA root-ca to the local filesystem: root directory for CA /home/ziggy/quickstart/pki/root-ca does not exist and cannot be created: failed creating CA root directory /home/ziggy/quickstart/pki/root-ca: mkdir /home/ziggy/quickstart: permission denied
ziti-ctrl-1 exited with code 1
I've tried removing the user field in compose.yml, but that did not work either.
Is your UID 1000?
id -u
Before running the ziti-ctrl container, ensure that this directory on the Docker host is writable by your UID: ./ziti-ctrl/config.
Then, when running the ziti-ctrl container with user: "1000:1000", assuming that is your correct UID on the Docker host, the container process will be able to edit files in that directory.
Yes, I am 1000:1000, but, I don't know why, inside the container the /home/ziggy folder has root:root ownership!!
I expect you have an older copy of the ziti-cli image. You can add the :1.1.8 tag to the image to ensure you know which version you have. That's the latest stable release.
$ docker run --rm --entrypoint= docker.io/openziti/ziti-cli:1.1.8 ls -lnd /home/ziggy
drwxrwxr-x 1 2171 2171 4096 Jul 29 18:00 /home/ziggy
You can see how the owner of the directory changes to reflect the bound directory from the Docker host.
$ ls -lnd ./config
drwxrwxr-x 4 1000 1000 4.0K Aug 28 09:24 ./config/
$ docker run --rm --entrypoint= --volume ./config:/home/ziggy docker.io/openziti/ziti-cli:1.1.8 ls -ld /home/ziggy
drwxrwxr-x 4 1000 1000 4096 Aug 28 13:24 /home/ziggy
I'm a bit confused. I think it's not a question of version (I tried with the explicit version you suggested and /home/ziggy is still root-owned), but rather something I'm missing about how Docker Compose works.
I fear that, since it doesn't exist before the Docker Compose run, /home/ziggy gets created at startup (with the wrong permissions), and only then do the other commands get executed as the user I specified. I'm searching the web a bit and checking the documentation to understand what I'm missing.
I suspect ./ziti-ctrl/config on the Docker host was created by root the first time you ran the ziti-ctrl container.
Will you check the owner?
ls -lnd ./ziti-ctrl/config
EDIT: I changed the suggested file path to match our earlier posts in this thread.
I had the same idea, hence I tried to create a Dockerfile like this:
FROM openziti/ziti-cli
ENV UID=${UID:-1000}
ENV GID=${GID:-1000}
ENV HOME=${HOME:-/home/ziggy}
RUN mkdir -p ${HOME} && \
chown -R ${UID}:${GID} ${HOME}
WORKDIR ${HOME}
but it says that I cannot change the rights of the folder; probably there is a user change in the ziti-cli image. I'll check it.
Anyhow, obviously, my choice of this two-folder structure led to mismatches in the relative ./ paths, but I think I have corrected all of them.
You may customize the image with a USER root directive to allow the build to change the image. Still, the standard image will work if ./ziti-ctrl/config is writeable by UID 1000.
EDIT: correct Dockerfile syntax
OK, it's getting stressful, so in order to understand what I'm missing I'll put everything back here; maybe you can spot the mistake. I have the following structure:
devices
│   compose.yml
│
├───PLCsiemens
│   │   Dockerfile
│   │   PLCsiemens.py
│   │   requirements.txt
│
└───SiemensClient
    │   Dockerfile
    │   SiemensClient.py
    │   requirements.txt

ziti-ctrl
│   Dockerfile
│   compose.yml
│
└───config
Then, in the first compose file inside devices, I have:
networks:
  testnet:
    name: testnet
    driver: bridge
    ipam:
      config:
        - subnet: ${NET_ID:-172.19.0.0/16}

services:
  plcsiemens:
    build:
      context: ./PLCsiemens
      dockerfile: Dockerfile
    environment:
      PLCSIEMENS_PORT: ${PLCSIEMENS_PORT:-102}
      PROCESSING_FAILURE_RATE: ${PROCESSING_FAILURE_RATE:-0.15}
      QUALITY_ASSURANCE_FAILURE_RATE: ${QUALITY_ASSURANCE_FAILURE_RATE:-0.18}
      DISCARDING_OR_SENDING_FAILURE_RATE: ${DISCARDING_OR_SENDING_FAILURE_RATE:-0.11}
      DEFECT_RATE: ${DEFECT_RATE:-0.24}
      MEMORY_AREA_SIZE: ${MEMORY_AREA_SIZE:-8}
      DATA_BLOCK_NUMBER: ${DATA_BLOCK_NUMBER:-5}
    container_name: ${PLCSIEMENS_CONTAINER_NAME:-plcsiemens}
    networks:
      testnet:
        ipv4_address: ${PLCSIEMENS_ADDRESS:-172.19.1.1}
    command: ["python3", "PLCsiemens.py"]

  siemensclient:
    build:
      context: ./SiemensClient
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    depends_on:
      - ${PLCSIEMENS_CONTAINER_NAME:-plcsiemens}
    environment:
      PLCSIEMENS_ADDRESS: ${PLCSIEMENS_ADDRESS:-172.19.1.1}
      PLCSIEMENS_PORT: ${PLCSIEMENS_PORT:-102}
      PLCSIEMENS_RACK: ${PLCSIEMENS_RACK:-0}
      PLCSIEMENS_SLOT: ${PLCSIEMENS_SLOT:-1}
    container_name: ${HMISIEMENS_CONTAINER_NAME:-hmisiemens}
    networks:
      testnet:
        ipv4_address: ${HMISIEMENS_ADDRESS:-172.19.1.2}
    command: ["python3", "SiemensClient.py"]
Then, in the ziti-ctrl folder, I have the following compose file:
services:
  ziti-ctrl:
    # image: openziti/ziti-cli
    build: ../ziti-ctrl/
    command: >
      edge quickstart
      --home /home/ziggy/quickstart
      --ctrl-address ziti-controller
      --ctrl-port 1280
      --router-address ziti-router
      --router-port 3022
      --password ziggy123
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - ../ziti-ctrl/config:/home/ziggy
    networks:
      testnet:
        aliases:
          - ziti-controller
          - ziti-router
    expose:
      - 1280
      - 3022
and Dockerfile:
FROM openziti/ziti-cli
ENV UID=${UID:-1000}
ENV GID=${GID:-1000}
ENV HOME=${HOME:-/home/ziggy}
USER root
RUN groupmod --gid ${UID} ziggy && \
    usermod --uid ${UID} --gid ${GID} ziggy && \
    mkdir -p ${HOME} && \
    chown -R ${UID}:${GID} ${HOME}
USER ziggy
WORKDIR ${HOME}
Aside from the fact that I have to do USER root and then switch back to the ziggy user, which is horrible, I know, it still doesn't seem to work: when I then run the container, /home/ziggy is still owned by root.
PS: to run I export the following ENV var:
export COMPOSE_FILE="./devices/compose.yml:./ziti-ctrl/compose.yml"
EDIT: thanks mate, I've just seen that you suggested using USER root to solve the issue.
It's helpful to see the different pieces of the puzzle. Thanks.
Please let me know which UID:GID owns this directory.
ls -lnd ./ziti-ctrl/config
Also, some of the Dockerfile is not valid. The ENV directive may contain a default assignment, but it cannot contain a "bashism" like ${UID:-1000}. You may also use a build ARG to allow overriding the default value.
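For example, here's a sketch using build ARGs (the names ZIGGY_UID and ZIGGY_GID are my placeholders; it assumes the image's built-in user is named ziggy, as in your Dockerfile):
FROM openziti/ziti-cli
# build-time defaults, overridable with --build-arg
ARG ZIGGY_UID=1000
ARG ZIGGY_GID=1000
USER root
# remap the built-in user and fix ownership of its home directory
RUN groupmod --gid ${ZIGGY_GID} ziggy && \
    usermod --uid ${ZIGGY_UID} --gid ${ZIGGY_GID} ziggy && \
    chown -R ${ZIGGY_UID}:${ZIGGY_GID} /home/ziggy
USER ziggy
WORKDIR /home/ziggy
You would override the defaults at build time with, e.g., docker build --build-arg ZIGGY_UID=$(id -u) ./ziti-ctrl.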
I will help you customize the Dockerfile if you wish, but it is unnecessary for the ziti-ctrl container.
Another, simpler option is to let Docker manage the storage. I was guessing it would be nice to have the files in ./config on the Docker host to explore and edit, but that is also unnecessary.
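For example (a sketch; the volume name ziti-ctrl-home is my placeholder, and the other keys stay as in the earlier example):
services:
  ziti-ctrl:
    # ...same image, command, networks, etc. as above...
    volumes:
      - ziti-ctrl-home:/home/ziggy

volumes:
  ziti-ctrl-home: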
Oh, sorry, I only now realize the mistake in the Dockerfile.
As for the ls command, the result is:
drwxrwxr-x 2 1000 1000 4096 Aug 28 15:37 ./ziti-ctrl/config
Anyhow, I guess the volume strategy would be a good solution to solve everything. I'll try it, and I'm sorry for making you waste time on this.
OK, with the volume strategy everything works fine, so we can finally continue with the next steps toward the main goal.
Great, here's our status.
- run ziti-ctrl and check logs to see it is healthy
- run ziti CLI commands to build the service
- add tunneler (proxy) containers for client and server
Here's an example BASH script that will run the ziti CLI inside the ziti-ctrl container to build the service and simulate the policies.
docker compose exec -T ziti-ctrl bash << BASH
set -o errexit -o nounset -o pipefail -o xtrace

# log in to the controller's management API
ziti edge login https://ziti-controller:1280 \
  --ca=/home/ziggy/quickstart/pki/root-ca/certs/root-ca.cert \
  --username=admin \
  --password=ziggy123

# one identity per side: the dialing client and the hosting server
ziti edge create identity "plc-client" \
  --jwt-output-file /home/ziggy/plc-client.jwt \
  --role-attributes plc-clients

ziti edge create identity "plc-host" \
  --jwt-output-file /home/ziggy/plc-host.jwt \
  --role-attributes plc-hosts

ziti edge list identities

# intercept.v1: the address and port the client tunneler will capture
ziti edge create config "plc-client-config" intercept.v1 \
  '{"protocols":["tcp"],"addresses":["plcsiemens.ziti.internal"], "portRanges":[{"low":102, "high":102}]}'

# host.v1: where the hosting tunneler forwards the traffic
ziti edge create config "plc-host-config" host.v1 \
  '{"protocol":"tcp", "address":"plcsiemens","port":102}'

ziti edge list configs

ziti edge create service "plc-service" \
  --configs plc-client-config,plc-host-config \
  --role-attributes plc-services

ziti edge list services

# policies: plc-hosts may Bind (host) the service, plc-clients may Dial it
ziti edge create service-policy "plc-host-policy" Bind \
  --service-roles '#plc-services' \
  --identity-roles '#plc-hosts'

ziti edge create service-policy "plc-client-policy" Dial \
  --service-roles '#plc-services' \
  --identity-roles '#plc-clients'

ziti edge list service-policies

# simulate: confirm each identity can dial or bind the service
ziti edge policy-advisor services plc-service --quiet
BASH
OK, I ran every command manually to personally check what happens (trying to understand what each command means), and everything works fine. Now the part missing in my mind: creating the tunneler containers and making all of this work well.
Here's the last command's output (so you can check too that all is going well):
OKAY : plc-client (1) -> plc-service (1) Common Routers: (1/1) Dial: Y Bind: N
OKAY : plc-host (1) -> plc-service (1) Common Routers: (1/1) Dial: N Bind: Y
Making progress!
- run ziti-ctrl and check logs to see it is healthy
- run ziti CLI commands to build the service
- add tunneler (proxy) containers for client and server
Let's start with the server-side "hosting tunneler." We'll use the identity you created, "plc-host," which has permission to host "plc-service." This tunneler is an exit point from the service: it forwards traffic to tcp://plcsiemens:102 (the DNS name and port where the server container is listening), so it needs to be deployed in the Compose network "testnet" where that DNS name resolves.
Here's an example of a ziti-host service you can add to ./devices/compose.yml (or to a separate file, if it is added to the list defined in the env var COMPOSE_FILE).
services:
  ziti-host:
    image: openziti/ziti-host:1.1.3
    networks:
      testnet:
    volumes:
      - ziti-host:/ziti-edge-tunnel
    environment:
      - ZITI_ENROLL_TOKEN

volumes:
  ziti-host:
Set the token variable and run it.
ZITI_ENROLL_TOKEN="$(docker compose exec ziti-ctrl cat /home/ziggy/plc-host.jwt)" \
docker compose up ziti-host --detach;
docker compose logs ziti-host --follow;
You should see a log message like this.
ziti-host-1 | (13)[ 0.048] INFO ziti-edge-tunnel:ziti-edge-tunnel.c:1295 on_event() =============== service event (added) - plc-service:6TbgB3EgI2KywOZpi8jbhM ===============
Finally, the client tunneler!
Although I asked you to create an identity "plc-client," we'll make a router for your client tunneler instead. This is the best way to accomplish OpenZiti service interception in Docker.
Run these commands in the ziti-ctrl container.
ziti edge create edge-router "plc-client-router" \
  --tunneler-enabled \
  --jwt-output-file /home/ziggy/plc-client.erott.jwt

ziti edge update identity "plc-client-router" \
  --role-attributes plc-clients
Like the ziti-host service/container, you can add the client tunneler to ./devices/compose.yml or to a separate file, if it is added to the list defined in the env var COMPOSE_FILE.
services:
  ziti-client:
    image: openziti/ziti-router:1.1.9
    expose:
      - 3022
    networks:
      testnet:
        ipv4_address: ${HMISIEMENS_ADDRESS:-172.19.1.2}
    environment:
      ZITI_CTRL_ADVERTISED_ADDRESS: ziti-controller
      ZITI_ENROLL_TOKEN:
      ZITI_ROUTER_MODE: tproxy
    volumes:
      - ziti-client:/ziti-router
    dns:
      - 127.0.0.1
      - 1.1.1.1
    user: root
    cap_add:
      - NET_ADMIN

  siemensclient:
    build:
      context: ./SiemensClient
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    depends_on:
      - ${PLCSIEMENS_CONTAINER_NAME:-plcsiemens}
    environment:
      PLCSIEMENS_ADDRESS: ${PLCSIEMENS_ADDRESS:-172.19.1.1}
      PLCSIEMENS_PORT: ${PLCSIEMENS_PORT:-102}
      PLCSIEMENS_RACK: ${PLCSIEMENS_RACK:-0}
      PLCSIEMENS_SLOT: ${PLCSIEMENS_SLOT:-1}
    container_name: ${HMISIEMENS_CONTAINER_NAME:-hmisiemens}
    network_mode: service:ziti-client
    command: ["python3", "SiemensClient.py"]

volumes:
  ziti-client:
Let's pay special attention to the following:
- The siemensclient container has network_mode: service:ziti-client, meaning it does not have a separate container network interface, but is bonded to the ziti-client container's network (IP, nameservers, firewall, etc.).
- The ziti-client container is assigned the static IP address you reserved for siemensclient.
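One quick way to verify the bonding after both containers are up (a sketch; hmisiemens is the container_name from the compose file above):
# should print "container:<id of the ziti-client container>"
docker inspect hmisiemens --format '{{ .HostConfig.NetworkMode }}'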
Start the client tunneler
You can copy/paste the token from the ziti-ctrl container, or run a one-liner similar to the one you ran for ziti-host.
ZITI_ENROLL_TOKEN="$(docker compose exec ziti-ctrl cat /home/ziggy/plc-client.erott.jwt)" \
docker compose up ziti-client --detach
Intercept Address
I'm guessing the client app addresses the PLC server as tcp://${PLCSIEMENS_ADDRESS}:${PLCSIEMENS_PORT}, and so the matching values from the example would be:
PLCSIEMENS_ADDRESS="plcsiemens.ziti.internal"
PLCSIEMENS_PORT="102"
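If that guess is right, nothing in SiemensClient.py has to change; only the environment values do. Here is a minimal sketch of what I imagine the connection code looks like (assuming python-snap7, a guess based on the rack/slot variables; your actual script may differ):
import os
import snap7  # python-snap7; an assumption based on the rack/slot env vars

# the same env vars the compose file already sets; with the intercept in
# place only the values change (the address becomes plcsiemens.ziti.internal)
address = os.environ["PLCSIEMENS_ADDRESS"]
port = int(os.environ["PLCSIEMENS_PORT"])
rack = int(os.environ["PLCSIEMENS_RACK"])
slot = int(os.environ["PLCSIEMENS_SLOT"])

client = snap7.client.Client()
client.connect(address, rack, slot, port)  # this dial is captured by the client tunneler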
I've uploaded a BASH script that demonstrates using a Ziti router as a client proxy w/ Ziti DNS. I thought it might be helpful to see a complete system working. This is a self-contained demo that requires only Docker.