Cluster Autoscaler

Introduction

Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources,
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
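
The two triggers above can be sketched in a few lines of Python. This is an illustrative simplification only (thresholds and names are hypothetical); the real autoscaler simulates the Kubernetes scheduler to decide whether pods fit, rather than comparing simple utilization numbers.

```python
# Hypothetical sketch of the two scale triggers described above.
from dataclasses import dataclass

@dataclass
class Node:
    utilization: float         # fraction of allocatable resources in use
    underutilized_minutes: int # how long the node has been below threshold

def needs_scale_up(pending_unschedulable_pods: int) -> bool:
    # Trigger 1: pods failed to schedule due to insufficient resources.
    return pending_unschedulable_pods > 0

def scale_down_candidates(nodes, threshold=0.5, min_minutes=10):
    # Trigger 2: nodes underutilized for an extended period whose pods
    # could be placed elsewhere (the fit check is omitted in this sketch).
    return [n for n in nodes
            if n.utilization < threshold and n.underutilized_minutes >= min_minutes]
```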

FAQ/Documentation

An FAQ is available HERE.

You should also take a look at the notes and "gotchas" for your specific cloud provider:

Releases

We recommend using Cluster Autoscaler with the Kubernetes master version for which it was meant. The below combinations have been tested on GCP. We don't do cross version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters, however, there is always a chance that it won't work as expected.

Starting from Kubernetes 1.12, the versioning scheme was changed to match Kubernetes minor releases exactly.

Kubernetes Version CA Version
1.13.X 1.13.X
1.12.X 1.12.X
1.11.X 1.3.X
1.10.X 1.2.X
1.9.X 1.1.X
1.8.X 1.0.X
1.7.X 0.6.X
1.6.X 0.5.X, 0.6.X*
1.5.X 0.4.X
1.4.X 0.3.X
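
The compatibility table can be expressed as a small lookup helper. The mapping below is copied verbatim from the table; the function name and fallback value are illustrative only.

```python
# Recommended CA series per Kubernetes minor version, per the table above.
RECOMMENDED_CA = {
    "1.13": "1.13.X", "1.12": "1.12.X", "1.11": "1.3.X", "1.10": "1.2.X",
    "1.9": "1.1.X", "1.8": "1.0.X", "1.7": "0.6.X", "1.6": "0.5.X, 0.6.X*",
    "1.5": "0.4.X", "1.4": "0.3.X",
}

def recommended_ca_version(kube_version: str) -> str:
    """Return the CA series tested against a given Kubernetes version."""
    minor = ".".join(kube_version.split(".")[:2])
    return RECOMMENDED_CA.get(minor, "unknown")

print(recommended_ca_version("1.12.7"))  # 1.12.X
print(recommended_ca_version("1.11.2"))  # 1.3.X
```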

*Cluster Autoscaler 0.5.X is the official version shipped with k8s 1.6. We've done some basic tests using k8s 1.6 / CA 0.6 and we're not aware of any problems with this setup. However, Cluster Autoscaler internally simulates Kubernetes' scheduler and using different versions of scheduler code can lead to subtle issues.

Notable changes

For CA 1.1.2 and later, please check release notes.

CA version 1.1.1:

  • Fixes around metrics in the multi-master configuration.
  • Fixes for unready nodes issues when quota is overrun.

CA version 1.1.0:

CA version 1.0.3:

  • Adds support for the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation on pods. Pods with this annotation can be evicted even if they don't meet other requirements for it.
  • Fixes an issue when too many nodes with GPUs could be added during scale-up (https://github.com/kubernetes/kubernetes/issues/54959).
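
For reference, the safe-to-evict annotation goes in the pod's metadata. The pod and image names below are illustrative; only the annotation key and value are significant.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod   # illustrative name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
  - name: app
    image: example/app:latest   # illustrative image
```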

CA Version 1.0.2:

CA Version 1.0.1:

CA Version 1.0:

With this release we graduated Cluster Autoscaler to GA.

  • Support for 1000 nodes running 30 pods each. See: Scalability testing report
  • Support for 10 min graceful termination.
  • Improved eventing and monitoring.
  • Node allocatable support.
  • Removed Azure support. See: PR removing support with reasoning behind this decision
  • `cluster-autoscaler.kubernetes.io/scale-down-disabled` annotation for marking nodes that should not be scaled down.
  • `scale-down-delay-after-delete` and `scale-down-delay-after-failure` flags replaced `scale-down-trial-interval`.
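
The scale-down-disabled annotation mentioned above is set on the Node object's metadata. The node name below is illustrative; the same effect can be achieved with `kubectl annotate node`.

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node   # illustrative name
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```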

CA Version 0.6:

CA Version 0.5.4:

  • Fixes problems with node drain when pods are ignoring SIGTERM.

CA Version 0.5.3:

CA Version 0.5.2:

CA Version 0.5.1:

CA Version 0.5:

  • CA continues to operate even if some nodes are unready and is able to scale them down.
  • CA exports its status to kube-system/cluster-autoscaler-status config map.
  • CA respects PodDisruptionBudgets.
  • Azure support.
  • Alpha support for dynamic config changes.
  • Multiple expanders to decide which node group to scale up.
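
Since CA respects PodDisruptionBudgets, a PDB is the standard way to limit how many replicas of an application scale-down may evict at once. A minimal example (names and counts are illustrative; `policy/v1beta1` was the PDB API version current for the releases this README covers):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb   # illustrative name
spec:
  minAvailable: 2     # CA will not evict pods below this count
  selector:
    matchLabels:
      app: example
```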

CA Version 0.4:

  • Bulk empty node deletions.
  • Better scale-up estimator based on binpacking.
  • Improved logging.
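
The binpacking-based estimator mentioned above can be sketched as first-fit-decreasing over a single resource: how many new nodes of a given capacity are needed to fit the pending pods' requests. This is a hypothetical simplification; the real estimator handles multiple resources and scheduling constraints.

```python
# First-fit-decreasing sketch of the scale-up estimation idea.
# Requests and capacity are in a single unit (e.g. CPU millicores).

def estimate_nodes(pod_requests, node_capacity):
    """Estimate how many new nodes are needed for the pending pods."""
    bins = []  # remaining capacity on each new node
    for request in sorted(pod_requests, reverse=True):
        for i, free in enumerate(bins):
            if free >= request:       # pod fits on an already-planned node
                bins[i] -= request
                break
        else:
            bins.append(node_capacity - request)  # open a new node
    return len(bins)

print(estimate_nodes([500, 500, 600, 400], node_capacity=1000))  # 2
```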

CA Version 0.3:

  • AWS support.
  • Performance improvements around scale down.

Deployment

Cluster Autoscaler is designed to run on the Kubernetes master node. This is the default deployment strategy on GCP. It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into the kube-system namespace (Cluster Autoscaler doesn't scale down nodes with non-mirrored kube-system pods running on them) and set a `priorityClassName: system-cluster-critical` property on the pod spec (to prevent the pod from being evicted).
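
A trimmed deployment sketch showing both points above, namespace and priority class (the image tag is illustrative, and a real manifest also needs a service account, RBAC rules, and the autoscaler's command-line flags):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system          # non-mirrored kube-system pods block scale-down of their node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      priorityClassName: system-cluster-critical  # protects the pod from eviction
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/cluster-autoscaler:v1.14.0  # illustrative tag
```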

Supported cloud providers: