
PVC attach/mount failed - csi.storageos.com not found #303

Open
sorinpepelea opened this issue Feb 7, 2021 · 7 comments

Comments

@sorinpepelea

In a microk8s environment, I installed the StorageOS cluster-operator with the helm3 chart, with CSI enabled.

Everything runs OK, including the GUI and PVC creation.

Pods, however, fail to attach - timeout with the error: cannot find csi.storageos.com

My install command is as follows:
# 192.168.0.10:2379 is the VIP address of the external etcd cluster (TLS disabled)
helm3 install grid-storageos storageos/storageos-operator \
  --namespace kube-system \
  --set cluster.kvBackend.address=192.168.0.10:2379 \
  --set cluster.admin.username=storageos \
  --set cluster.admin.password=******** \
  --set csi.enable=true \
  --set csi.deploymentStrategy=deployment \
  --set csi.enableControllerPublishCreds=true \
  --set csi.enableNodePublishCreds=true \
  --set csi.enableProvisionCreds=true
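
For reference, two generic kubectl checks (not output from this cluster) show whether the CSIDriver object exists and whether the driver ever registered with the kubelet on a node:

# Is the CSIDriver object present in the cluster?
kubectl get csidriver csi.storageos.com

# Has the driver registered on the node where the pod is scheduled?
kubectl get csinode <node-name> -o yaml | grep -A3 drivers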

@Arau
Contributor

Arau commented Feb 7, 2021

Hi @sorinpepelea,

MicroK8s uses an immutable containerised kubelet, so it is worth trying the sharedDir param from values.yaml:

# sharedDir should be set if running kubelet in a container.  This should
# be the path shared into the kubelet container, typically:
# "/var/lib/kubelet/plugins/kubernetes.io~storageos".  If not set, defaults
# will be used.
sharedDir:

I don't know for certain whether the correct path for MicroK8s is sharedDir: /var/lib/kubelet/plugins/kubernetes.io~storageos, but it is worth a try.

StorageOS and the kubelet communicate through the CSI socket, so both of them need to be able to see that socket. When sharedDir is set, the StorageOS Operator configures a shared mount between StorageOS and the kubelet so that they can both "see" the socket.
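
As an illustration only (not the operator's actual manifest), the shared mount is roughly a hostPath volume mounted with bidirectional propagation into the StorageOS node container; the container and volume names below are assumptions:

# Sketch: sharing the kubelet plugin directory between the host/kubelet and the
# StorageOS node container, so both sides see the same CSI socket.
containers:
  - name: storageos
    volumeMounts:
      - name: shared-dir
        mountPath: /var/lib/kubelet/plugins/kubernetes.io~storageos
        mountPropagation: Bidirectional
volumes:
  - name: shared-dir
    hostPath:
      path: /var/lib/kubelet/plugins/kubernetes.io~storageos
      type: DirectoryOrCreate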

Let us know if that is not enough to make it work and we will see if we can look into more specifics. MicroK8s is not fully supported by StorageOS at the moment, but we would be open to looking into it 😃

@sorinpepelea
Author

I could only find the "/var/lib/kubelet/plugins_registry/storageos" folder on my system. I have MicroK8s installed with snap (official instructions); there is no '/var/lib/kubelet/plugins/kubernetes.io~storageos' location.

If I install StorageOS with --set sharedDir='/var/lib/kubelet/plugins/kubernetes.io~storageos', nothing changes and the error persists.
I also tried to verify whether mountPropagation is true, and it is. I also don't really know whether setting --set csi.deploymentStrategy=deployment might have some impact ...
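
One way to double-check the mountPropagation setting is to inspect the node DaemonSet directly; the DaemonSet name below is a guess, adjust it to whatever the operator created in your cluster:

kubectl -n kube-system get daemonset storageos-daemonset -o yaml | grep -B3 -A1 mountPropagation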

@sorinpepelea
Author

In /var/lib/kubelet/plugins_registry I have a file:
csi.storageos.com-reg.sock,
and a folder, storageos, which contains a file: csi.sock.
I believe one of these must be accessible by the kubelet, but I don't know which one.
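
For context, in a standard CSI deployment the two sockets have different roles (this is general CSI behaviour, not specific to this setup):

# Registration socket: served by the node-driver-registrar sidecar; the kubelet's
# plugin watcher scans <root-dir>/plugins_registry/ and uses it to discover the driver.
/var/lib/kubelet/plugins_registry/csi.storageos.com-reg.sock

# Driver socket: the endpoint the registrar advertises back to the kubelet; the
# kubelet calls it for the actual CSI operations (NodeStage/NodePublish), so it must
# be reachable at the advertised path from the kubelet's point of view.
/var/lib/kubelet/plugins_registry/storageos/csi.sock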

@sorinpepelea
Copy link
Author

But my error is csi.storageos.com not found; it does not say csi.storageos.com-reg not found.

@Arau
Contributor

Arau commented Feb 8, 2021

Hi @sorinpepelea,

I believe MicroK8s runs the kubelet with --root-dir=${SNAP_COMMON}/var/lib/kubelet. We tried it internally, and I think the solution could be to change that config to --root-dir=/var/lib/kubelet.
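
As a rough sketch of those steps on a snap-installed MicroK8s (untested; the 'current' symlink points at the active snap revision, and this assumes the args file contains a --root-dir line as described above):

# Point the kubelet at the default root dir
sudo sed -i 's|--root-dir=.*|--root-dir=/var/lib/kubelet|' /var/snap/microk8s/current/args/kubelet

# Restart MicroK8s so the kubelet picks up the change
microk8s stop
microk8s start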

@sorinpepelea
Author

So I should change

"--root-dir=${SNAP_COMMON}/var/lib/kubelet"
to
"--root-dir=/var/lib/kubelet"
in
/var/snap/microk8s/1910/args/kubelet ?

And then, after a MicroK8s restart, install StorageOS?
Did it work internally?
Also, is there an option to not depend on etcd? (MicroK8s now uses something else for HA.)

@martincolladodev

@sorinpepelea Did you try those options? For etcd I deployed an etcd cluster in Docker, but it would be nice if that requirement were an optional opt-in.
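
For anyone else needing an external etcd just for testing, a minimal single-node sketch in Docker looks roughly like this (image tag and addresses are placeholders, reusing the 192.168.0.10 address from the install command above; a real deployment should be clustered and TLS-enabled):

docker run -d --name etcd -p 2379:2379 \
  quay.io/coreos/etcd:v3.4.15 \
  /usr/local/bin/etcd \
    --name etcd0 \
    --data-dir /etcd-data \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://192.168.0.10:2379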
