With this change, PreFilters are run once per pod in binpacking
instead of #pods*#nodes times. This makes a huge performance
difference in very large clusters.
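For example, with 1,000 pending pods and 1,000 candidate nodes, PreFilters now run 1,000 times rather than 1,000,000 times.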
The following things changed in the scheduler and needed to be fixed:
* NodeInfo was moved to schedulerframework
* Some fields on NodeInfo are now exposed directly instead of via getters
* NodeInfo.Pods is now a list of *schedulerframework.PodInfo, not *apiv1.Pod (see the sketch after this list)
* SharedLister and NodeInfoLister were moved to schedulerframework
* PodLister was removed
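As a small illustration of the NodeInfo.Pods change, here is a hedged sketch (the exact import path and the helper name are assumptions, not code from this change) of unwrapping `*PodInfo` entries back into `*apiv1.Pod`:

```go
package example

import (
	apiv1 "k8s.io/api/core/v1"
	schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework"
)

// podsOnNode is a hypothetical helper showing the new shape of NodeInfo.Pods:
// each entry is a *schedulerframework.PodInfo, and the pod itself is reached
// through the exported Pod field rather than a getter.
func podsOnNode(nodeInfo *schedulerframework.NodeInfo) []*apiv1.Pod {
	pods := make([]*apiv1.Pod, 0, len(nodeInfo.Pods))
	for _, podInfo := range nodeInfo.Pods {
		pods = append(pods, podInfo.Pod)
	}
	return pods
}
```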
filterOutSchedulableByPacking is an alternative to the older
filterOutSchedulable. It sorts the pods in unschedulableCandidates
by priority and filters out the pods that can be scheduled on the
free capacity of existing nodes, using a basic packing approach.
Pods with nominatedNodeName set are always filtered out.
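A minimal sketch of the packing idea, assuming a toy capacity model (free CPU requests only); the types and helpers here are illustrative stand-ins, not the actual implementation:

```go
package example

import (
	"sort"

	apiv1 "k8s.io/api/core/v1"
)

// nodeCapacity is a toy capacity model standing in for the scheduler
// snapshot the real code consults.
type nodeCapacity struct {
	name         string
	freeCPUMilli int64
}

func podPriority(pod *apiv1.Pod) int32 {
	if pod.Spec.Priority != nil {
		return *pod.Spec.Priority
	}
	return 0
}

func podCPUMilli(pod *apiv1.Pod) int64 {
	var total int64
	for _, c := range pod.Spec.Containers {
		total += c.Resources.Requests.Cpu().MilliValue()
	}
	return total
}

// filterOutSchedulableByPackingSketch mirrors the described behavior: sort by
// priority (highest first), always filter out pods with nominatedNodeName set,
// and pack the rest onto free capacity, reserving it as we go.
func filterOutSchedulableByPackingSketch(candidates []*apiv1.Pod, nodes []*nodeCapacity) []*apiv1.Pod {
	sort.SliceStable(candidates, func(i, j int) bool {
		return podPriority(candidates[i]) > podPriority(candidates[j])
	})
	var stillPending []*apiv1.Pod
	for _, pod := range candidates {
		if pod.Status.NominatedNodeName != "" {
			continue // already has a nominated node; always filtered out
		}
		placed := false
		need := podCPUMilli(pod)
		for _, n := range nodes {
			if n.freeCPUMilli >= need {
				n.freeCPUMilli -= need // reserve capacity for this placement
				placed = true
				break
			}
		}
		if !placed {
			stillPending = append(stillPending, pod)
		}
	}
	return stillPending
}
```

Because each placement reserves capacity, a lower-priority pod cannot reuse room already claimed by a higher-priority one, which is what makes the packing variant stricter than the simple check.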
filterOutSchedulableByPacking is used by default, but it can be
toggled off by setting the filter-out-schedulable-pods-uses-packing
flag to false, which activates the older and more lenient
filterOutSchedulable (now called filterOutSchedulableSimple).
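For example (other flags elided), passing `--filter-out-schedulable-pods-uses-packing=false` on the `./cluster-autoscaler` command line switches back to the simple behavior.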
Added test cases for both methods.
Fix error format strings according to best practices from CodeReviewComments
Reverted an incorrect change to an error format string
Fix error strings in test cases to remedy failing tests
Fix more error strings to remedy failing tests
Signed-off-by: CodeLingo Bot <bot@codelingo.io>
Adds a new optional flag named `configmap` to specify the name of a configmap containing node group specs.
The configmap is polled every `scan-interval` seconds to reconfigure cluster-autoscaler dynamically at runtime.
Example usage:
```
./cluster-autoscaler --v=4 --cloud-provider=aws --skip-nodes-with-local-storage=false --logtostderr --leader-elect=false --configmap=cluster-autoscaler
```
The configmap would look like:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: cluster-autoscaler
  namespace: kube-system
data:
  settings: |-
    {
      "nodeGroups": [
        {
          "minSize": 1,
          "maxSize": 2,
          "name": "kubeawstest-nodepool1-AutoScaleWorker-1VWD4GAVG35L5"
        }
      ]
    }
```
Other notes:
* Make the namespace default to "kube-system", per https://github.com/kubernetes/contrib/pull/2226#discussion_r94144267
* Trigger a full recreate on a configuration change, per https://github.com/kubernetes/contrib/pull/2226#issuecomment-269617410
* Introduced `autoscaler/` and moved all the dynamic/recreatable-at-runtime parts of the autoscaler there (update: the package is now named `core`, per https://github.com/kubernetes/contrib/pull/2226#issuecomment-273071663)
* Extracted the core of CA (= `func Run()` in `main.go`) into `Autoscaler`
* `DynamicAutoscaler` is a wrapper around `Autoscaler` that achieves reconfiguration of CA by recreating the `Autoscaler` instance on a configmap change (see the sketch after this list)
* Moved `scale_down*.go`, `scale_up*.go` and `utils*.go` into the `autoscaler` package accordingly, since they appear intended to live in the same package as the core of CA (now implemented as `Autoscaler`)
* Moved the `createEventRecorder` func from the `main` package to the `utils/kubernetes` package so it can be imported from both `main` and `autoscaler`
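A hedged sketch of the `DynamicAutoscaler` wrapper described above; everything other than the `Autoscaler`/`DynamicAutoscaler` names and the recreate-on-change behavior is an assumption for illustration:

```go
package example

import "time"

// Autoscaler abstracts one iteration of the core loop extracted from
// func Run(); the exact interface shape is assumed here.
type Autoscaler interface {
	RunOnce(now time.Time) error
}

// ConfigFetcher is a hypothetical stand-in for the component that polls the
// configmap every scan-interval; it returns a non-nil config when it changed.
type ConfigFetcher interface {
	FetchConfigIfUpdated() (*Config, error)
}

// Config holds the parsed node group specs from the configmap.
type Config struct{}

// DynamicAutoscaler wraps Autoscaler and performs the full recreate on a
// configuration change described in the notes above.
type DynamicAutoscaler struct {
	autoscaler    Autoscaler
	fetcher       ConfigFetcher
	newAutoscaler func(*Config) Autoscaler
}

func (d *DynamicAutoscaler) RunOnce(now time.Time) error {
	cfg, err := d.fetcher.FetchConfigIfUpdated()
	if err != nil {
		return err
	}
	if cfg != nil {
		// Configuration changed: throw away the old instance and rebuild it,
		// rather than mutating it in place.
		d.autoscaler = d.newAutoscaler(cfg)
	}
	return d.autoscaler.RunOnce(now)
}
```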