Can anyone explain what the ziti-controller-init-container
container does? It appears to be necessary for the quickstart docker setup to work, but it runs and then exits. It executes a cli script, and that script creates two service-edge-router-policies
for what looks like public access.
What else does the init container do?
Is the init container idempotent?
Is it possible to eliminate the init container in the quick start compose setup?
Alternatively, is there a compose setup with non-quickstart images?
It runs run-with-ziti-cli.sh, which subsequently runs access-control.sh. After it executes, it leaves behind a marker file named access-control.init, which prevents subsequent runs from re-initializing the environment. Its purpose is to create the two policies most 'quickstart' users need, found in access-control.sh: an edge router policy allowing all identities to access all edge routers with the #public attribute, and a service edge router policy allowing all services to use all edge routers.
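The guard logic is simple. A minimal sketch of the marker-file check (the real script lives at /var/openziti/scripts/access-control.sh; the variable names here are assumptions, and the directory defaults to a temp dir so the sketch is runnable outside the container — the real script uses /persistent):

```shell
#!/bin/sh
# Sketch of the idempotency guard in access-control.sh (names/paths assumed).
PERSISTENT_DIR="${PERSISTENT_DIR:-$(mktemp -d)}"
INIT_FILE="${PERSISTENT_DIR}/access-control.init"

if [ -f "${INIT_FILE}" ]; then
  # Marker exists: a previous run already created the policies, so do nothing.
  echo "init file detected at ${INIT_FILE}; skipping initialization"
else
  # First run: the two policy-creation commands would go here,
  # then the marker is dropped so later runs become no-ops.
  echo "initializing environment"
  touch "${INIT_FILE}"
fi
```

Deleting the marker file (as you've been doing) is exactly what re-arms this check.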
It is not necessary for the compose environment to start (I just verified this myself), but it is necessary if you want to use the environment without creating an edge router policy and a service edge router policy yourself. Alternatively, you can start just the controller and the router and run the same commands specified in the access-control.sh script; it's just two ziti CLI commands.
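For reference, the two commands look roughly like this — the policy names below are placeholders I picked, so check access-control.sh for the exact invocations:

```shell
#!/bin/sh
# Roughly the two policies access-control.sh creates (names are placeholders).
# Guarded so the sketch degrades to a no-op where the ziti CLI isn't installed.
if command -v ziti >/dev/null 2>&1; then
  # Let every identity use the edge routers tagged with #public.
  ziti edge create edge-router-policy all-endpoints-public-routers \
    --edge-router-roles '#public' --identity-roles '#all'

  # Let every service traverse every edge router.
  ziti edge create service-edge-router-policy all-routers-all-services \
    --edge-router-roles '#all' --service-roles '#all'
else
  echo "ziti CLI not found; skipping"
fi
```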
Not really. The first time you run it, it has side effects; every run after that is effectively a no-op, so technically that's a 'no'.
Yep. As mentioned above.
Yep. There are plain, no-frills, no-helper containers you can run that have nothing but ziti inside them. You put the PKI in place, you put the config in place, etc. @qrkourier uses them for Kubernetes deployments. They are much more streamlined and slimmed down compared to the quickstart. You can find them at the root of the ziti repo: https://github.com/openziti/ziti/tree/main/docker-images
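For a concrete flavor of what that looks like, here's a hypothetical invocation — the image tag, mount path, and config filename are all assumptions on my part, so check the docker-images README for the real details:

```shell
#!/bin/sh
# Hypothetical run of a minimal controller image: you supply the config and PKI,
# the container only runs the ziti binary. Tag and paths below are assumptions.
if command -v docker >/dev/null 2>&1; then
  docker run --rm \
    -v "$PWD/ziti-config:/etc/ziti" \
    openziti/ziti-controller run /etc/ziti/ctrl-config.yml \
    || echo "docker run failed (expected without a prepared config dir)"
else
  echo "docker not found; skipping sketch"
fi
```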
TheLumberjack:
It is not necessary for the compose environment to start (I just did it myself) but it is necessary if you want to use the environment without making an edge router policy and services edge router policy yourself. Alternatively, you can start just the controller and just the router and run the same commands specified in the access-control.sh script, it's just two ziti
cli commands .
Every time I try to eliminate it, the controller container crashes with the message below. That's why I have been deleting the init file on every start.
adding /var/openziti/ziti-bin to the path
controller initialized. unsetting ZITI_USER/ZITI_PWD from env
panic: controllerConfig must provide [db] or [raft]
Here is my whole docker-compose.yaml file.
version: '2.4'
services:
  # rm-init:
  #   image: rm-init
  #   volumes:
  #     - ./ziti-fs:/persistent
  ziti-controller:
    image: "${ZITI_IMAGE}:${ZITI_VERSION}"
    env_file:
      - ./.env
    ports:
      - ${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}:${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ${ZITI_CTRL_ADVERTISED_PORT:-6262}:${ZITI_CTRL_ADVERTISED_PORT:-6262}
    environment:
      - ZITI_CTRL_NAME=${ZITI_CTRL_NAME:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_CTRL_EDGE_IP_OVERRIDE=${ZITI_CTRL_EDGE_IP_OVERRIDE:-127.0.0.1}
      - ZITI_CTRL_ADVERTISED_PORT=${ZITI_CTRL_ADVERTISED_PORT:-6262}
      - ZITI_EDGE_IDENTITY_ENROLLMENT_DURATION=${ZITI_EDGE_IDENTITY_ENROLLMENT_DURATION}
      - ZITI_ROUTER_ENROLLMENT_DURATION=${ZITI_ROUTER_ENROLLMENT_DURATION}
      - ZITI_USER=${ZITI_USER:-admin}
      - ZITI_PWD=${ZITI_PWD}
    networks:
      ziti:
        aliases:
          - ziti-edge-controller
    volumes:
      - ./ziti-fs:/persistent
    entrypoint:
      - "/var/openziti/scripts/run-controller.sh"
    # depends_on:
    #   - rm-init
  # ziti-controller-init-container:
  #   image: "${ZITI_IMAGE}:${ZITI_VERSION}"
  #   depends_on:
  #     - ziti-controller
  #   environment:
  #     - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
  #     - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
  #   env_file:
  #     - ./.env
  #   networks:
  #     ziti:
  #   volumes:
  #     - ./ziti-fs:/persistent
  #   entrypoint:
  #     - "/var/openziti/scripts/run-with-ziti-cli.sh"
  #   command:
  #     - "/var/openziti/scripts/access-control.sh"
  ziti-edge-router:
    image: "${ZITI_IMAGE}:${ZITI_VERSION}"
    env_file:
      - ./.env
    depends_on:
      - ziti-controller
    ports:
      - ${ZITI_ROUTER_PORT:-3022}:${ZITI_ROUTER_PORT:-3022}
      - ${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}:${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}
    environment:
      - ZITI_CTRL_ADVERTISED_ADDRESS=${ZITI_CTRL_ADVERTISED_ADDRESS:-ziti-controller}
      - ZITI_CTRL_ADVERTISED_PORT=${ZITI_CTRL_ADVERTISED_PORT:-6262}
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_ROUTER_NAME=${ZITI_ROUTER_NAME:-ziti-edge-router}
      - ZITI_ROUTER_ADVERTISED_ADDRESS=${ZITI_ROUTER_ADVERTISED_ADDRESS:-ziti-edge-router}
      - ZITI_ROUTER_PORT=${ZITI_ROUTER_PORT:-3022}
      - ZITI_ROUTER_LISTENER_BIND_PORT=${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}
      - ZITI_ROUTER_ROLES=public
    networks:
      - ziti
    volumes:
      - ./ziti-fs:/persistent
    entrypoint: /bin/bash
    command: "/var/openziti/scripts/run-router.sh edge"
  ziti-console:
    image: openziti/zac
    working_dir: /usr/src/app
    environment:
      - ZAC_SERVER_CERT_CHAIN=/persistent/pki/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-intermediate/certs/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-server.cert
      - ZAC_SERVER_KEY=/persistent/pki/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-intermediate/keys/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-server.key
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_CTRL_NAME=${ZITI_CTRL_NAME:-ziti-edge-controller}
      - PORTTLS=8443
    depends_on:
      - ziti-controller
    ports:
      - 8443:8443
    volumes:
      - ./ziti-fs:/persistent
    networks:
      - ziti
  caddy:
    image: caddy:latest
    container_name: caddy
    depends_on:
      - ziti-console
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    # networks:
    #   - caddy-net
    volumes:
      - ./caddy/data/:/data/
      - ./caddy/config/:/config/
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
    networks:
      - ziti
networks:
  ziti:
Here is the environment when the init container runs OK
Thanks for the file. I just pruned docker and I'm giving it a try. It failed for me because the Caddyfile isn't found. I removed the caddy service, down'ed and up'ed, and had a failure during startup.
Tried a very minimally different docker compose file; it down'ed and up'ed fine. Pruned everything, made sure I had no docker volumes left around, ran your compose again... and your compose file failed again...
Tried mine again and it started fine... So maybe there's some kind of strange docker race condition around the volume that's causing issues? That's (after removing caddy) the only difference I see between yours and mine...
You're using:
volumes:
  - ./ziti-fs:/persistent
Ours has:
volumes:
  - ziti-fs:/persistent
and declares the volume at the end:
volumes:
  ziti-fs:
Compose file I used (from curl'ing it)
version: '2.4'
services:
  ziti-controller:
    image: "${ZITI_IMAGE}:${ZITI_VERSION}"
    env_file:
      - ./.env
    ports:
      - ${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}:${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ${ZITI_CTRL_ADVERTISED_PORT:-6262}:${ZITI_CTRL_ADVERTISED_PORT:-6262}
    environment:
      - ZITI_CTRL_NAME=${ZITI_CTRL_NAME:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_CTRL_EDGE_IP_OVERRIDE=${ZITI_CTRL_EDGE_IP_OVERRIDE:-127.0.0.1}
      - ZITI_CTRL_ADVERTISED_PORT=${ZITI_CTRL_ADVERTISED_PORT:-6262}
      - ZITI_EDGE_IDENTITY_ENROLLMENT_DURATION=${ZITI_EDGE_IDENTITY_ENROLLMENT_DURATION}
      - ZITI_ROUTER_ENROLLMENT_DURATION=${ZITI_ROUTER_ENROLLMENT_DURATION}
      - ZITI_USER=${ZITI_USER:-admin}
      - ZITI_PWD=${ZITI_PWD}
    networks:
      ziti:
        aliases:
          - ziti-edge-controller
    volumes:
      - ziti-fs:/persistent
    entrypoint:
      - "/var/openziti/scripts/run-controller.sh"
  ziti-edge-router:
    image: "${ZITI_IMAGE}:${ZITI_VERSION}"
    env_file:
      - ./.env
    depends_on:
      - ziti-controller
    ports:
      - ${ZITI_ROUTER_PORT:-3022}:${ZITI_ROUTER_PORT:-3022}
      - ${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}:${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}
    environment:
      - ZITI_CTRL_ADVERTISED_ADDRESS=${ZITI_CTRL_ADVERTISED_ADDRESS:-ziti-controller}
      - ZITI_CTRL_ADVERTISED_PORT=${ZITI_CTRL_ADVERTISED_PORT:-6262}
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_ROUTER_NAME=${ZITI_ROUTER_NAME:-ziti-edge-router}
      - ZITI_ROUTER_ADVERTISED_ADDRESS=${ZITI_ROUTER_ADVERTISED_ADDRESS:-ziti-edge-router}
      - ZITI_ROUTER_PORT=${ZITI_ROUTER_PORT:-3022}
      - ZITI_ROUTER_LISTENER_BIND_PORT=${ZITI_ROUTER_LISTENER_BIND_PORT:-10080}
      - ZITI_ROUTER_ROLES=public
    networks:
      - ziti
    volumes:
      - ziti-fs:/persistent
    entrypoint: /bin/bash
    command: "/var/openziti/scripts/run-router.sh edge"
  ziti-console:
    image: openziti/zac
    working_dir: /usr/src/app
    environment:
      - ZAC_SERVER_CERT_CHAIN=/persistent/pki/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-intermediate/certs/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-server.cert
      - ZAC_SERVER_KEY=/persistent/pki/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-intermediate/keys/${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}-server.key
      - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
      - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
      - ZITI_CTRL_NAME=${ZITI_CTRL_NAME:-ziti-edge-controller}
      - PORTTLS=8443
    depends_on:
      - ziti-controller
    ports:
      - 8443:8443
    volumes:
      - ziti-fs:/persistent
    networks:
      - ziti
networks:
  ziti:
volumes:
  ziti-fs:
Bizarre....
Ok, I will try changing that in the volumes. I prefer bind mounts over named volumes for simplicity, but the difference could be just that. Thanks for confirming!
Yeah.... If that 'fixes it', there's something happening with docker bind mounts that I just don't understand; it would be unexpected.
Still breaks for me.
NOT OVERRIDING: env var ZITI_ROUTER_PORT already set. using existing value
NOT OVERRIDING: env var ZITI_ROUTER_ROLES already set. using existing value
NOT OVERRIDING: env var ZITI_SCRIPTS already set. using existing value
NOT OVERRIDING: env var ZITI_SHARED already set. using existing value
NOT OVERRIDING: env var ZITI_USER already set. using existing value
NOT OVERRIDING: env var ZITI_VERSION already set. using existing value

adding /var/openziti/ziti-bin to the path
controller initialized. unsetting ZITI_USER/ZITI_PWD from env
panic: controllerConfig must provide [db] or [raft]

goroutine 1 [running]:
github.com/openziti/fabric/controller.LoadConfig({0xffffe837557a?, 0x4000135318?})
	github.com/openziti/fabric@v0.24.2/controller/config.go:270 +0x2074
github.com/openziti/ziti/ziti/controller.run(0x400098f200?, {0x4000a67780, 0x1, 0x1?})
	github.com/openziti/ziti/ziti/controller/run.go:53 +0x400
github.com/spf13/cobra.(*Command).execute(0x400098f200, {0x4000a67750, 0x1, 0x1})
	github.com/spf13/cobra@v1.7.0/command.go:944 +0x5ac
github.com/spf13/cobra.(*Command).ExecuteC(0x506bca0)
	github.com/spf13/cobra@v1.7.0/command.go:1068 +0x340
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.7.0/command.go:992
github.com/openziti/ziti/ziti/cmd.Execute()
	github.com/openziti/ziti/ziti/cmd/cmd.go:79 +0x24
main.main()
	github.com/openziti/ziti/ziti/main.go:51 +0x1c
I'll try it on a wholly different machine. Just for completeness' sake, what version of docker are you using, and are you on macOS, Linux, or something else? Feels like something goofy....
docker version
Client: Docker Engine - Community
Version: 23.0.3
API version: 1.42
Go version: go1.19.7
Git commit: 3e7cbfd
Built: Tue Apr 4 22:05:48 2023
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 23.0.3
API version: 1.42 (minimum version 1.12)
Go version: go1.19.7
Git commit: 59118bf
Built: Tue Apr 4 22:05:48 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.20
GitCommit: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
runc:
Version: 1.1.5
GitCommit: v1.1.5-0-gf19387a
docker-init:
Version: 0.19.0
GitCommit: de40ad0
@gberl002 can you try this out too so that we have another person giving it a go?
Edit: I reverted the template back to the default volume behavior. It crashed again. I then did the following:
$ history
102 wget https://raw.githubusercontent.com/openziti/ziti/release-next/quickstart/docker/simplified-docker-compose.yml
104 docker compose down -v --rmi all
106 mv docker-compose.yaml docker-compose.yaml.bak
107 mv simplified-docker-compose.yml docker-compose.yaml
108 docker compose up -d
113 vim docker-compose.yaml # commented out init service
117 docker compose down
115 docker compose up -d --remove-orphans
Here is a diff of the downloaded compose file and my changes.
root@ziti:/opt/ziti# diff docker-compose.yaml simplified-docker-compose.yml
29,45c29,45
< # ziti-controller-init-container:
< # image: "${ZITI_IMAGE}:${ZITI_VERSION}"
< # depends_on:
< # - ziti-controller
< # environment:
< # - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
< # - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
< # env_file:
< # - ./.env
< # networks:
< # ziti:
< # volumes:
< # - ziti-fs:/persistent
< # entrypoint:
< # - "/var/openziti/scripts/run-with-ziti-cli.sh"
< # command:
< # - "/var/openziti/scripts/access-control.sh"
---
> ziti-controller-init-container:
> image: "${ZITI_IMAGE}:${ZITI_VERSION}"
> depends_on:
> - ziti-controller
> environment:
> - ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=${ZITI_CTRL_EDGE_ADVERTISED_ADDRESS:-ziti-edge-controller}
> - ZITI_CTRL_EDGE_ADVERTISED_PORT=${ZITI_CTRL_EDGE_ADVERTISED_PORT:-1280}
> env_file:
> - ./.env
> networks:
> ziti:
> volumes:
> - ziti-fs:/persistent
> entrypoint:
> - "/var/openziti/scripts/run-with-ziti-cli.sh"
> command:
> - "/var/openziti/scripts/access-control.sh"
Here is my .env file
root@ziti:/opt/ziti# cat .env
ZITI_USER=admin.redacted
ZITI_PWD=redacted
ZITI_IMAGE=openziti/quickstart
ZITI_VERSION=0.30.1
ZITI_CTRL_ADVERTISED_ADDRESS=ziti.redacted
ZITI_CTRL_ADVERTISED_PORT=8440
ZITI_CTRL_EDGE_ADVERTISED_ADDRESS=ziti.redacted
ZITI_CTRL_EDGE_ADVERTISED_PORT=8441
ZITI_CTRL_NAME=ziti.redacted
ZITI_EDGE_IDENTITY_ENROLLMENT_DURATION=10080
ZITI_ROUTER_ADVERTISED_HOST=ziti.redacted
ZITI_ROUTER_ADVERTISED_ADDRESS=ziti.redacted
ZITI_ROUTER_ENROLLMENT_DURATION=10080
ZITI_ROUTER_LISTENER_BIND_PORT=8444
ZITI_ROUTER_NAME=ziti.redacted
ZITI_ROUTER_PORT=8442
ZITI_ROUTER_ROLES=public
Here are my computer specs for the host
root@ziti:/opt/ziti# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Here are the docker details
root@ziti:/opt/ziti# docker version
Client: Docker Engine - Community
Version: 24.0.5
API version: 1.43
Go version: go1.20.6
Git commit: ced0996
Built: Fri Jul 21 20:35:41 2023
OS/Arch: linux/arm64
Context: default
Server: Docker Engine - Community
Engine:
Version: 24.0.5
API version: 1.43 (minimum version 1.12)
Go version: go1.20.6
Git commit: a61e2b4
Built: Fri Jul 21 20:35:41 2023
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Here is my memory
root@ziti:/opt/ziti# free -h
total used free shared buff/cache available
Mem: 935Mi 250Mi 497Mi 440Ki 258Mi 684Mi
Swap: 0B 0B 0B
Here is my CloudFormation template:
---
Parameters:
  Vpc:
    Type: String
    Default: vpc-066106109d72e076b
  KeyName:
    Type: String
    Default: ziti.mydomain.com-aws
  InstanceType:
    Type: String
    Default: t4g.micro
  ImageID:
    Description: ARM Debian12
    Type: String
    # arm-debian12
    Default: ami-0976ba1cd3f6173d8
    # amd64-debian12
    # Default: ami-071175b60c818694f
  SSHLocation:
    Type: String
    Default: "redacted"
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageID
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      NetworkInterfaces:
        - AssociatePublicIpAddress: 'true'
          DeviceIndex: '0'
          GroupSet:
            - !Ref Ec2SecurityGroup
          # SubnetId: !Ref Subnet
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash
            echo export EDITOR=vi >> /root/.bashrc
            echo set -o vi >> /root/.bashrc
            echo alias ll=\"ls -lah\" >> /root/.bashrc
            echo export EDITOR=vi >> /home/admin/.bashrc
            echo set -o vi >> /home/admin/.bashrc
            echo alias ll=\"ls -lah\" >> /home/admin/.bashrc
            source ~/.bashrc
            sudo apt update
            sudo apt upgrade -y
            sudo apt install -y vim yq gpg git pwgen
            curl -sSL https://get.docker.com/ | sh
            usermod -aG docker admin
            curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
            sudo mv $HOME/.local/bin/lazydocker /usr/bin
            sudo git clone https://github.com/openziti/ziti.git /opt/ziti
            sudo reboot
      Tags:
        - Key: Name
          Value:
            Fn::Sub: "${AWS::StackName}-Ec2Instance"
        - Key: StackName
          Value:
            Fn::Sub: "${AWS::StackName}"
  Ec2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security Group for Domain Joined Windows
      VpcId: !Ref Vpc
  Ec2SGIngressSSH:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 22
      ToPort: 22
  Ec2SGIngressHTTP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 80
      ToPort: 80
  Ec2SGIngressHTTPS:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 443
      ToPort: 443
  Ec2SGIngress8440:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 8440
      ToPort: 8440
  Ec2SGIngress8441:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 8441
      ToPort: 8441
  Ec2SGIngress8442:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 8442
      ToPort: 8442
  Ec2SGIngress8443:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 8443
      ToPort: 8443
  Ec2SGIngress8444:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 8444
      ToPort: 8444
  Eip:
    Type: AWS::EC2::EIP
    Properties:
      Tags:
        - Key: Name
          Value:
            Fn::Sub: "${AWS::StackName}-Ec2InstanceEip"
        - Key: StackName
          Value:
            Fn::Sub: "${AWS::StackName}"
  EipAssociation:
    Type: AWS::EC2::EIPAssociation
    Properties:
      EIP: !Ref Eip
      InstanceId: !Ref Ec2Instance
  zitiA:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: redacted
      Name: ziti.mydomain.com
      Type: A
      TTL: 60
      ResourceRecords:
        - !Ref Eip
This is quite frustrating. I tried it on an Ubuntu machine provisioned in AWS just now, using your exact steps (thank you), and it succeeded just fine. It's a t3a.small. I ran the experiment five full times and it started five full times.
I made a Debian 12 ARM image on AWS. I didn't use your full CF template, but I did run your commands. I needed to add sudo to your usermod line for the admin user to get added to the docker group. Modified the addresses accordingly and compose up'ed... and the dang thing.... started!
I'm gonna try it on an M1 mini... @qrkourier do you have any thoughts on what might be going on here, maybe?
I'm seeing an issue with the caddy container, but the controller and router are working for me on my M1 Mac running Ventura (13.4.1)
Darwin MacBook-Pro.local 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000 arm64
and the latest docker (24.0.5) with 5 CPUs and 7.9GB of memory allocated.
Client:
Cloud integration: v1.0.35-desktop+001
Version: 24.0.5
API version: 1.43
Go version: go1.20.6
Git commit: ced0996
Built: Fri Jul 21 20:32:30 2023
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.22.0 (117440)
Engine:
Version: 24.0.5
API version: 1.43 (minimum version 1.12)
Go version: go1.20.6
Git commit: a61e2b4
Built: Fri Jul 21 20:35:38 2023
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.21
GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8
runc:
Version: 1.1.7
GitCommit: v1.1.7-0-g860f061
docker-init:
Version: 0.19.0
GitCommit: de40ad0
@TheLumberjack I am DMing you a video. I rebuilt my stack, deleted the whole biz, and rebuilt it based on a t2.micro instead of the graviton instance. Same thing. It must be in my .env. I will send you my .env file as well.
Ok. I have created a slightly modified cloud formation script:
CloudFormation Script
---
Parameters:
  Vpc:
    Type: String
    Default: YOUR_VPC_ID
  SSHKeyName:
    Type: String
    Default: YOUR_SSH_KEYPAIR_NAME
  InstanceType:
    Type: String
    Default: t4g.micro
  ImageID:
    Description: ARM Debian12
    Type: String
    Default: ami-0e4e72a29ae8c13b3
    # Default: ami-0976ba1cd3f6173d8 DID NOT EXIST?
    # amd64-debian12
    # Default: ami-071175b60c818694f DID NOT EXIST?
    # Default: ami-0ddf7dfd13a83d8c8
  SSHLocation:
    Type: String
    Default: "YOUR_IP_ADDRESS/32"
  AllOtherServiceAccess:
    Type: String
    Default: "0.0.0.0/0"
  Route53ZoneId:
    Type: String
    Default: "YOUR_ROUTE53_ZONE_ID"
  DNSName:
    Type: String
    Default: your.external.address.here
  OpenZitiControllerPwd:
    Type: String
    Default: THE_PWD_YOU_WANT
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageID
      InstanceType: !Ref InstanceType
      KeyName: !Ref SSHKeyName
      NetworkInterfaces:
        - AssociatePublicIpAddress: 'true'
          DeviceIndex: '0'
          GroupSet:
            - !Ref Ec2SecurityGroup
          # SubnetId: !Ref Subnet
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash
            echo export EDITOR=vi >> /root/.bashrc
            echo set -o vi >> /root/.bashrc
            echo alias ll=\"ls -lah\" >> /root/.bashrc
            echo export EDITOR=vi >> /home/admin/.bashrc
            echo set -o vi >> /home/admin/.bashrc
            echo alias ll=\"ls -lah\" >> /home/admin/.bashrc
            source ~/.bashrc
            sudo apt update
            sudo apt upgrade -y
            sudo apt install -y vim yq gpg git pwgen
            curl -sSL https://get.docker.com/ | sh
            sudo usermod -aG docker admin
            curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
            sudo mv $HOME/.local/bin/lazydocker /usr/bin
            sudo git clone https://github.com/openziti/ziti.git /opt/ziti
            mkdir -p /home/admin/compose
            curl -so /home/admin/compose/docker-compose.yaml https://gist.githubusercontent.com/dovholuknf/db26b799d1fd1b2560864df067acb170/raw/bc3bd70185bd490f7ca70122443ae826c2ac2db5/gistfile1.txt
            curl -so /home/admin/compose/.env https://get.openziti.io/dock/.env
            sed -i -E "s/(ziti-edge-controller|ziti-edge-router|ziti-controller)/${DNSName}/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_CTRL_EDGE_ADVERTISED_PORT/ZITI_CTRL_EDGE_ADVERTISED_PORT/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_CTRL_ADVERTISED_PORT/ZITI_CTRL_ADVERTISED_PORT/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_ROUTER_NAME/ZITI_ROUTER_NAME/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_ROUTER_ADVERTISED_ADDRESS/ZITI_ROUTER_ADVERTISED_ADDRESS/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_ROUTER_PORT/ZITI_ROUTER_PORT/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_ROUTER_LISTENER_BIND_PORT/ZITI_ROUTER_LISTENER_BIND_PORT/g" /home/admin/compose/.env
            sed -i "s/\#ZITI_ROUTER_ROLES/ZITI_ROUTER_ROLES/g" /home/admin/compose/.env
            sed -i "s/\ZITI_PWD=/ZITI_PWD=${OpenZitiControllerPwd}/g" /home/admin/compose/.env
            sudo chown -R admin:admin /home/admin/compose
            sudo reboot
      Tags:
        - Key: Name
          Value:
            Fn::Sub: "${AWS::StackName}-Ec2Instance"
        - Key: StackName
          Value:
            Fn::Sub: "${AWS::StackName}"
  Ec2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security Group for Domain Joined Windows
      VpcId: !Ref Vpc
  Ec2SGIngressSSH:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref SSHLocation
      FromPort: 22
      ToPort: 22
  Ec2SGIngressHTTP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 80
      ToPort: 80
  Ec2SGIngressHTTPS:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 443
      ToPort: 443
  Ec2SGIngress8440:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 8440
      ToPort: 8440
  Ec2SGIngress8441:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 8441
      ToPort: 8441
  Ec2SGIngress8442:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 8442
      ToPort: 8442
  Ec2SGIngress8443:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 8443
      ToPort: 8443
  Ec2SGIngress8444:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId:
        Fn::GetAtt:
          - Ec2SecurityGroup
          - GroupId
      IpProtocol: tcp
      CidrIp: !Ref AllOtherServiceAccess
      FromPort: 8444
      ToPort: 8444
  Eip:
    Type: AWS::EC2::EIP
    Properties:
      Tags:
        - Key: Name
          Value:
            Fn::Sub: "${AWS::StackName}-Ec2InstanceEip"
        - Key: StackName
          Value:
            Fn::Sub: "${AWS::StackName}"
  EipAssociation:
    Type: AWS::EC2::EIPAssociation
    Properties:
      EIP: !Ref Eip
      InstanceId: !Ref Ec2Instance
  OpenZitiDNSRecord:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: !Ref Route53ZoneId
      Name: !Ref DNSName
      Type: A
      TTL: 30
      ResourceRecords:
        - !Ref Eip
It's fully parameterized.
It now also curls down a modified docker compose file, from a GitHub gist I created, that doesn't have an init container.
It now curls down the .env file.
It modifies the .env file using a bunch of sed commands.
I used the aws CLI to deploy it to AWS: aws cloudformation create-stack --stack-name ClintDiscourse1574 --template-body file://${PWD}/cf.template
After the dust settles (the t4g.micro isn't fast at all), you SHOULD be able to just run docker compose....
Here's a nine minute, narrated video of it "just working"... I added chapter markers to skip all the boring stuff waiting for the speedy t4g.micro...
Oh, great, I will look at that tomorrow, thanks for all the work!!!
Ok, I can reproduce it now. This is probably the part I was doing that you weren't. The quickstart, at least with the pinned 0.30.1, cannot restart; if it does and the access-control.init file exists, the controller crashes. I will upload a video that demonstrates this.
The implications are pretty huge, IMO. This means the compose setup CANNOT restart without crashing. That would be a terrible first experience for someone giving this a go.
Ah ha! I definitely didn't try restarting it! That's a bug we'll definitely want fixed, for sure. I'll give that a go now...
I just curl'ed the compose file and .env and stopped/started it a bunch of times and sadly, it works fine for me...
curl -so docker-compose.yml https://get.openziti.io/dock/simplified-docker-compose.yml
curl -so .env https://get.openziti.io/dock/.env
then I did docker compose up -d and docker compose down a bunch of times, and every time... it seems fine. Are you maybe terminating the docker compose bring-up 'early' and triggering the bug that way?
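To make the restart scenario explicit, this is the kind of loop I'd use to stress it (assumes the quickstart compose file and .env are already in the current directory; the sleep is my guess at giving the init container time to finish, since tearing down mid-initialization may be what triggers the bug):

```shell
#!/bin/sh
# Cycle the quickstart up/down a few times to try to trigger the restart crash.
# Guarded so it degrades to a no-op where docker or the compose file is absent.
if command -v docker >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  for i in 1 2 3; do
    docker compose up -d
    # let the init container complete before tearing down
    sleep 30
    docker compose down
  done
  docker compose up -d
  docker compose logs ziti-controller | tail -n 20
else
  echo "docker or compose file not present; skipping"
fi
```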
I still can't reproduce it, I'm sorry to report. I'll look for your follow-up.
Each time the init container shows me the expected message:
docker-aug31-ziti-controller-init-container-1 |
docker-aug31-ziti-controller-init-container-1 | *****************************************************************
docker-aug31-ziti-controller-init-container-1 | docker-compose init file has been detected, the initialization
docker-aug31-ziti-controller-init-container-1 | of the docker-compose environment has already happened. If you
docker-aug31-ziti-controller-init-container-1 | wish to allow this volume to be re-initialized, delete the file
docker-aug31-ziti-controller-init-container-1 | located at /persistent/access-control.init
docker-aug31-ziti-controller-init-container-1 | *****************************************************************
docker-aug31-ziti-controller-init-container-1 |
Can you capture docker compose logs ziti-controller into a file and DM it to me? I can see in your video at 1:57:
goroutine 1 [running]:
github.com/openziti/fabric/controller.LoadConfig({0x7ffe1a5e9595?, 0xc00084b330?})
	github.com/openziti/fabric@v0.24.2/controller/config.go:270 +0x265c
	github.com/openziti/ziti/ziti/controller/run.go:53 +0x507
github.com/spf13/cobra.(*Command).execute(0xc000915200, {0xc0009eda90, 0x1, 0x1})
What caught my eye is the Loadconfig panic. I'm pretty sure that is indicating the controller isn't able to read the config file. I'd like to see the full logs to see if there's anything else useful in there. It almost feels like a race condition of the controller starting before the instance can read the volume...
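That reading fits the panic text, too: LoadConfig found a config file it could parse, but one with no db or raft section in it. For reference, the controller's bolt database is declared as a single top-level key in the config, something like the fragment below (the exact path on the quickstart volume is an assumption on my part):

```yaml
# Fragment of a ziti controller config. A missing or empty "db" (or "raft")
# key is exactly what produces "controllerConfig must provide [db] or [raft]".
db: /persistent/db/ctrl.db
```

So if the volume isn't fully readable yet when the controller starts, it could plausibly pick up a partial config that parses but lacks this key.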
I'm gonna try this all over again on a 'slower' machine like that t4g. Even following your steps to the letter, I still can't reproduce it on my local machine (somewhat speedy, definitely speedier than a t4g).