... which was not the case in v1.26.*, so it's a regression from my point of view.
Details:
We are using node labels, annotations and taints to ensure that certain workloads only run on specific nodes and have some individual configuration. Therefore, it is critical that node labels, annotations and taints are persisted.
In `/etc/rancher/k3s/config.yaml`, we only have `node-taint:` specified. Configuring this ensures that when a new node joins the cluster, it doesn't get unwanted pods scheduled prematurely. We do not specify `node-label`, however, because:

- it isn't required: we use `kubectl patch node ...` instead to set the desired node labels and annotations (see the sketch below), and
- there is no way in `config.yaml` to specify node annotations, so why bother?
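For illustration, a minimal sketch of this setup (the taint, label, and annotation names below are placeholders, not our actual values):

```yaml
# /etc/rancher/k3s/config.yaml on the agent
# (config.yaml keys are the CLI flags without the leading dashes)
node-taint:
  - "example.com/dedicated=special-workload:NoSchedule"
```

The labels and annotations are then applied after the node has joined, along the lines of:

```sh
# placeholder node and key names
kubectl patch node <node-name> --type merge -p \
  '{"metadata":{"labels":{"example.com/workload":"special"},"annotations":{"example.com/config":"some-value"}}}'
```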
This was all working fine in v1.26.*+k3s. Now that we've upgraded to v1.31.1+k3s1, we noticed that after restarting the k3s-agent service, all our node labels and annotations are gone.
Is there a way to bring the persistence back?
To reproduce the issue, stop the k3s-agent service and wait a few seconds for the master node(s) to notice the change and remove the node from the cluster. We are using embedded etcd, by the way.
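Spelled out, the reproduction looks roughly like this (a sketch; `<node-name>` is a placeholder):

```sh
# on the agent node
systemctl stop k3s-agent

# on a server node: watch until the node is removed from the cluster
kubectl get node <node-name> -w

# on the agent node: bring the agent back up
systemctl start k3s-agent

# the labels/annotations previously set via kubectl patch are now gone
kubectl get node <node-name> -o jsonpath='{.metadata.labels}'
```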
There is nothing in k3s that resets node labels/annotations/taints when the agent is restarted. Similarly, there is nothing in k3s that will delete a node from the cluster when it is down. I suspect you have some third-party component enabled that is doing these things.
You are right, we are using the hcloud-cloud-controller-manager, and that's deleting nodes that have been offline for 30 seconds. Sorry for barking up the wrong tree.