What happened:
After a node reboot, the Multus thin plugin fails to detect the master CNI (Cilium), even though we specified a readiness indicator file for Cilium. We are using the "auto" config policy, so Multus picks up the first conf file it finds while Cilium is not yet ready. The thick plugin already handles this correctly: a missing readiness file also blocks config generation there, and something similar could be done for the thin plugin.
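For reference, the generated Multus config with a readiness indicator looks roughly like this (a minimal sketch; the readinessindicatorfile and kubeconfig paths are illustrative, and the delegate assumes Cilium's generated conflist is the source):

```json
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "readinessindicatorfile": "/etc/cni/net.d/05-cilium.conflist",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni"
    }
  ]
}
```

With the auto policy, the delegate is filled in from whatever conf file appears first in /etc/cni/net.d; the problem reported here is that this generation step does not wait for the readiness indicator file to exist.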
What you expected to happen:
The readiness file check should also apply to config generation for the thin plugin under the "auto" policy.
How to reproduce it (as minimally and precisely as possible):
Delete the main CNI plugin and restart the thin-plugin Multus pod.
Anything else we need to know?:
We chose the thin plugin because we discovered a more critical issue with the thick one: if the Multus daemon pod is force-killed while pods are initializing, those pods stay stuck in the init stage forever, even after the Multus daemon pod recovers. Probably worth a separate bug report. cc: @michaely-cb
Environment:
Multus version: latest master
Kubernetes version (use kubectl version): 1.24.4
Primary CNI for Kubernetes cluster: Cilium
OS (e.g. from /etc/os-release): Rocky Linux 8.5