
All nodes reported as under-utilized-undrainable #86

Open
EPinci opened this issue Mar 15, 2018 · 3 comments


EPinci commented Mar 15, 2018

Hi, I managed to deploy the autoscaler to my cluster with RBAC and all post-ACS-Engine-v12.0 settings.
I can see it is running, but the logs always report:

2018-03-15 17:31:49,126 - autoscaler.cluster - INFO - ++++ Maintenance Begins ++++++
2018-03-15 17:31:49,127 - autoscaler.engine_scaler - INFO - ++++ Maintaining Nodes ++++++
2018-03-15 17:31:49,127 - autoscaler.engine_scaler - INFO - node: k8s-nodepool1-93283987-0                                                    state: under-utilized-undrainable
2018-03-15 17:31:49,127 - autoscaler.engine_scaler - INFO - node: k8s-nodepool1-93283987-1                                                    state: under-utilized-undrainable
2018-03-15 17:31:49,128 - autoscaler.engine_scaler - INFO - node: k8s-nodepool1-93283987-2                                                    state: under-utilized-undrainable
2018-03-15 17:31:49,128 - autoscaler.cluster - INFO - ++++ Maintenance Ends ++++++

My nodes are indeed well under-utilized, so how can I find out what is actually blocking the scale-down?

Thank you.

@wbuchwalter (Owner)

Most likely the drain is blocked by system pods.
You can check with kubectl get pods -n kube-system.
kube-proxy and kube-svc-redirect pods are ignored, but your kube-dns, heapster, tunnelfront and dashboard pods might be spread across the 3 nodes.
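A minimal sketch of that check (assuming kubectl is configured against the affected cluster; the ignore list below mirrors the two pod names mentioned above):

```shell
# Show every kube-system pod together with the node it is scheduled on,
# so you can see how the pods are spread across the node pool.
kubectl get pods -n kube-system -o wide

# kube-proxy and kube-svc-redirect are ignored by the autoscaler, so
# filter them out to leave only the pods that can block a drain.
kubectl get pods -n kube-system -o wide --no-headers \
  | grep -v -E 'kube-proxy|kube-svc-redirect'
```

Any pods left in the second listing (e.g. kube-dns, heapster, tunnelfront, dashboard) are the likely reason a node stays under-utilized-undrainable.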


EPinci commented Mar 19, 2018

Indeed, I have kube-system pods everywhere: heapster, kube-dns, kube-proxy, dashboard, tiller. Is there a way to ignore these as well?
Also, I have a non-kube-system DaemonSet; is that going to be an issue?


EPinci commented Apr 5, 2018

It also looks like the autoscaler pod itself is a blocker to scale-down.
Is there a way to bypass this one as well? Is this autoscaler PDB-aware?
