Stemming from this comment, how would I go about using hostPath storage with the Helm charts for persistent storage?
Welcome to the forum, @thedarkula!
I'll assume you have a single-node cluster, or have set node affinity for the controller so it's scheduled on the same node where the hostPath volume exists. I'll also assume you want to configure persistence for the controller chart.
Do you have a hostPath provisioner? If so, your K8s distro might have a storage class that you can use to leverage that provisioner. Here's what the "standard" (default) storage class looks like in minikube:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2023-09-14T00:59:34Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: standard
  resourceVersion: "257"
  uid: c97171e8-93ba-4484-be00-7eef2314817b
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
If you don't specify a storage class, the controller chart creates a PVC with no storageClassName set, which means the default storage class is used. Alternatively, you may specify any existing storage class with input value .persistence.storageClass. Yet another option is creating the volume in advance and specifying that volume's name with input value .persistence.VolumeName.
@qrkourier Thank you for the welcome!
I have this all running in a single-node k3s cluster.
I do have a number of other charts running with a hostPath provisioner type already, but when I modify the ziti-controller chart's values.yaml to contain this:
persistence:
  enabled: true
  storageClass: hostPath
I receive this error:
PersistentVolumeClaim "ziti-controller" is invalid: spec.storageClassName: Invalid value: "hostPath": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Right, hostPath is a volume type that may be provided by one or more storage classes. You need a storage class that maps to a volume provisioner.
This K3s doc says you can use the built-in local-path storage class to provision a hostPath volume type.
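If so, the fix for the error above would be to point persistence.storageClass at the name of that built-in class rather than at the volume type. A minimal sketch of the values, assuming the same persistence block shown earlier:

```yaml
persistence:
  enabled: true
  # local-path is a storage class name; hostPath is a volume type, not a class.
  # K3s's local-path provisioner creates hostPath-backed volumes on the node.
  storageClass: local-path
```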
Addendum: local-path may be the default storage class, in which case you shouldn't need to specify it as an input value. Let me know if you found that necessary!
# the (default) is annotated in this list
kubectl get storageclass
Indeed, local-path is the default.
To explain what I have working in a different chart:
example-hostpath-volume:
  enabled: true
  type: hostPath
  retain: true
  hostPath: /path/on/host
  mountPath: /path/in/container
This functions, and does not actually create a PVC.
If I try to omit type: hostPath in the ziti-controller chart, it defaults to the local-path provisioner.
How can I specify the hostPath/mountPath values in the ziti-controller chart?
The chart currently requires either a storage class to create a PVC or an existing volume name. You could create the volume in advance and supply its name with .persistence.VolumeName instead of a storage class name.
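For example, a pre-created hostPath PersistentVolume could look like this (a minimal sketch; the name, capacity, and path are placeholders to adjust for your setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ziti-controller-pv        # placeholder name
spec:
  capacity:
    storage: 5Gi                  # placeholder; should cover the chart's PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""            # empty class, so no dynamic provisioner claims it
  hostPath:
    path: /path/on/host           # placeholder host directory
```

Then the chart's values would reference that volume by name, assuming .persistence.VolumeName maps through to the PVC's volume name:

```yaml
persistence:
  enabled: true
  VolumeName: ziti-controller-pv
```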
@qrkourier Ah, that works, thank you
Or you could use a storage provider like OpenEBS (https://openebs.io/), which can map native Kubernetes storage to the local host and even sync it across different nodes/hosts.
I have not tested this so far, but I'm planning to go in this direction when moving from Docker to Kubernetes.
OpenEBS sounds neat. Have you found that to be a good option for homelab-scale, e.g., single-node K8S, or is it something I would probably bring into the picture when I need a volume provisioner because I'm not using managed K8S in some cloud?
For me this will be good in a small environment where:
- you do not have shared storage available but would like to implement redundancy
- you have a backup strategy that backs up folders on the local host, and you do not want sidecar containers exporting the data
- you want to avoid workarounds like mine today with Docker Compose: changing each compose file and fighting the container's user permissions
With a cloud-based K8s you'll have cloud storage and, I believe, backups available as well.
I found an interesting article. Search for "The Ultimate Kubernetes Homelab" on the web.
Thank you for the tip!