Commit Graph

105 Commits

Author SHA1 Message Date
Marwan Ahmed a3bada3708 correctly classify error for failed scale ups 2020-09-13 21:14:27 -07:00
Maciek Pytel 9fb6cdc079 Fix go fmt errors 2020-06-08 13:52:24 +02:00
Maciek Pytel 2160e6d49e Rewrite glogx to work with klogv2 (+rename klogx) 2020-06-05 17:22:26 +02:00
Maciek Pytel 655b4081f4 Migrate to klog v2 2020-06-05 17:22:26 +02:00
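The two commits above track the kubernetes-wide move from github.com/golang/glog to k8s.io/klog/v2. A minimal sketch of what a migrated call site looks like (the log calls themselves are illustrative):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2" // replaces k8s.io/klog and, before that, github.com/golang/glog
)

func main() {
	klog.InitFlags(nil) // register -v, -logtostderr, etc. on the default flag set
	flag.Parse()
	defer klog.Flush()

	klog.V(4).Infof("considering %d scale-up options", 3)
	klog.Errorf("failed to scale up: %v", "example error")
}
```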
Enxebre 49ce70acbd report MaxNodesTotal count during scale up
This change adds a warning event to signal when a planned scale up
operation would go over the maximum total nodes. This is being proposed
for two primary reasons: as an event that can be watched during
end-to-end testing, and as a signal to users when this condition is
occurring.
2020-05-14 10:28:18 -04:00
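A minimal Go sketch of the warning event described above; the event reason string and the object the event is attached to are assumptions for illustration, not necessarily the exact ones used:

```go
package sketch

import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// warnIfOverMaxNodesTotal emits a warning event when a planned scale-up would
// push the cluster past --max-nodes-total, so tests and users can watch for it.
func warnIfOverMaxNodesTotal(recorder record.EventRecorder, obj runtime.Object,
	currentNodes, plannedNewNodes, maxNodesTotal int) {
	if maxNodesTotal > 0 && currentNodes+plannedNewNodes > maxNodesTotal {
		recorder.Event(obj, apiv1.EventTypeWarning, "MaxNodesTotalReached",
			"planned scale-up would exceed --max-nodes-total")
	}
}
```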
Jakub Tużnik 73a5cdf928 Address recent breaking changes in scheduler
The following things changed in scheduler and needed to be fixed:
* NodeInfo was moved to schedulerframework
* Some fields on NodeInfo are now exposed directly instead of via getters
* NodeInfo.Pods is now a list of *schedulerframework.PodInfo, not *apiv1.Pod
* SharedLister and NodeInfoLister were moved to schedulerframework
* PodLister was removed
2020-04-24 17:54:47 +02:00
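For readers porting similar code, a short sketch of what the NodeInfo change means at a call site, assuming the scheduler framework package of that era (k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1):

```go
package sketch

import (
	apiv1 "k8s.io/api/core/v1"
	schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// podsOnNode unwraps the apiv1.Pod from each PodInfo entry.
func podsOnNode(nodeInfo *schedulerframework.NodeInfo) []*apiv1.Pod {
	// Before: nodeInfo.Pods() was a getter returning []*apiv1.Pod.
	// After: Pods is an exported field of type []*schedulerframework.PodInfo.
	pods := make([]*apiv1.Pod, 0, len(nodeInfo.Pods))
	for _, podInfo := range nodeInfo.Pods {
		pods = append(pods, podInfo.Pod)
	}
	return pods
}
```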
Julien Balestra 628128f65e cluster-autoscaler/taints: refactor current taint logic into the same package
Signed-off-by: Julien Balestra <julien.balestra@datadoghq.com>
2020-02-25 13:57:23 +01:00
Aleksandra Malinowska 5d44b202bc Forget FakeNodeInfoForNodeName ever existed 2020-02-21 15:36:21 +01:00
Łukasz Osipiuk 7b67d3f582 klog.Fatalf on error from ClusterSnapshot.Revert() 2020-02-04 20:52:07 +01:00
Łukasz Osipiuk e5c60c81a9 Remove Estimator's upcoming nodes parameter 2020-02-04 20:52:04 +01:00
Łukasz Osipiuk 9433ef9ffa Use ClusterSnapshot in ScaleUp 2020-02-04 20:51:43 +01:00
Łukasz Osipiuk 30ce46cc28 Pass ClusterSnapshot to BinpackingNodeEstimator 2020-02-04 20:51:29 +01:00
Łukasz Osipiuk 6b2287af4f Pass ClusterSnapshot explicitly to PredicateChecker 2020-02-04 20:51:24 +01:00
Łukasz Osipiuk b0c6d25182 Cleanup simulator.PredicateError 2020-02-04 20:51:11 +01:00
Łukasz Osipiuk 4a2b8c7dfc Remove use of PredicateMetadata 2020-02-04 20:51:05 +01:00
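The ClusterSnapshot commits above all revolve around one idea: scale-up simulation runs against an in-memory copy of cluster state that can be forked, mutated, and reverted without touching the live cluster. A minimal sketch of that contract, simplified from the real interface:

```go
package sketch

import apiv1 "k8s.io/api/core/v1"

// ClusterSnapshot is a mutable, in-memory view of nodes and pods used by
// scale-up simulations (binpacking estimation, predicate checking).
type ClusterSnapshot interface {
	AddNode(node *apiv1.Node) error
	AddPod(pod *apiv1.Pod, nodeName string) error
	Fork() error   // open a scratch layer for a simulation
	Revert() error // discard everything since Fork; per the commit above, an error here is fatal
	Commit() error // fold the scratch layer into the base state
}
```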
Aleksandra Malinowska c83d609352 Compute expansion options for additional created groups 2020-02-03 17:54:05 +01:00
Aleksandra Malinowska d6849e82b6 Simplify equivalence group usage 2020-01-15 19:40:45 +01:00
Łukasz Osipiuk 7f083d2393 Move core/utils.go to separate package and split into multiple files 2019-10-22 14:23:40 +02:00
Jakub Tużnik 43466ff837 Provide ScaleUpStatusProcessor with info about all rejected node groups
Previously, it had info only about the ones that actually exist.

The changes to the eventing processor are done to preserve its previous
behavior.
2019-08-19 12:48:10 +02:00
Jakub Tużnik 935476a7e2 Provide more info to ScaleUpStatusProcessor
Add info about considered and created nodegroups to
ScaleUpStatusProcessor
2019-08-12 17:20:09 +02:00
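Taken together, the two commits above grow ScaleUpStatus from a bare result into a record of what was considered, created, and rejected. A hypothetical sketch of that shape; the field names are illustrative, not the exact ones:

```go
package sketch

import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider"

// ScaleUpStatus, roughly: enough context for a ScaleUpStatusProcessor to
// explain a scale-up decision, not just report whether one happened.
type ScaleUpStatus struct {
	ScaledUpGroups       []cloudprovider.NodeGroup // groups actually resized
	ConsideredNodeGroups []cloudprovider.NodeGroup // every group evaluated as an option
	CreatedNodeGroups    []cloudprovider.NodeGroup // groups created on demand this iteration
	RejectedNodeGroups   []cloudprovider.NodeGroup // options rejected, including ones that don't exist yet
}
```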
Aleksandra Malinowska c27ae4eb24 Add resource limit type to NotTriggerScaleUp event 2019-07-03 16:38:46 +02:00
Łukasz Osipiuk a849ead286 Precompute inter pod equivalence groups in checkPodsSchedulableOnNode 2019-05-29 18:05:52 +02:00
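The idea behind the precomputation above: pods stamped out by the same controller are interchangeable for scheduling purposes, so schedulability can be checked once per group rather than once per pod. A simplified sketch using the controller UID as the grouping key (the real equivalence check is stricter):

```go
package sketch

import (
	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// groupPodsByController buckets pods by their owning controller; pods without
// a controller fall into singleton groups keyed by their own UID.
func groupPodsByController(pods []*apiv1.Pod) map[types.UID][]*apiv1.Pod {
	groups := map[types.UID][]*apiv1.Pod{}
	for _, pod := range pods {
		key := pod.UID
		if ref := metav1.GetControllerOf(pod); ref != nil {
			key = ref.UID
		}
		groups[key] = append(groups[key], pod)
	}
	return groups
}
```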
Chris Bradfield 92ea680f1a Implement an --ignore-taint flag
This change adds support for a user to specify taints to ignore when
considering a node as a template for a node group.
2019-05-14 10:22:59 -07:00
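A minimal sketch of the mechanism behind the flag: when an existing node is turned into a template for its node group, taints whose keys appear in the ignore list are dropped. The function name and the ignore-set representation are assumptions:

```go
package sketch

import apiv1 "k8s.io/api/core/v1"

// filterOutIgnoredTaints returns the taints to keep on a node-group template.
func filterOutIgnoredTaints(taints []apiv1.Taint, ignoredKeys map[string]bool) []apiv1.Taint {
	kept := make([]apiv1.Taint, 0, len(taints))
	for _, taint := range taints {
		if !ignoredKeys[taint.Key] {
			kept = append(kept, taint)
		}
	}
	return kept
}
```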
Jiaxin Shan 90666881d3 Move GPULabel and GPUTypes to cloud provider 2019-03-25 13:03:01 -07:00
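Paraphrased, the move above adds GPU awareness to the provider contract along these lines (the surrounding interface is elided; see the CloudProvider definition for the authoritative version):

```go
package sketch

// CloudProvider fragment: each provider now answers its own GPU questions
// instead of the core code hard-coding a single label.
type CloudProvider interface {
	// GPULabel returns the label added to nodes carrying a GPU resource.
	GPULabel() string
	// GetAvailableGPUTypes returns the GPU types this provider supports.
	GetAvailableGPUTypes() map[string]struct{}
	// ...rest of the provider contract elided...
}
```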
Andrew McDermott 5ae76ea66e UPSTREAM: <carry>: fix max cluster size calculation on scale up
When scaling up, the calculation of the maximum cluster size does not
take into account any upcoming nodes, so it is possible to grow the
cluster beyond the configured maximum (--max-nodes-total).

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1670695
2019-03-08 13:28:58 +00:00
Pengfei Ni 128729bae9 Move schedulercache to package nodeinfo 2019-02-21 12:41:08 +08:00
Vivek Bagade 79ef3a6940 unexporting methods in utils.go 2019-01-25 00:06:03 +05:30
Łukasz Osipiuk 85a83b62bd Pass nodeGroup->NodeInfo map to ClusterStateRegistry
Change-Id: Ie2a51694b5731b39c8a4135355a3b4c832c26801
2019-01-08 15:52:00 +01:00
Kubernetes Prow Robot 4002559a4c Merge pull request #1516 from frobware/fix-max-nodes-total-upstream
fix calculation of max cluster size
2019-01-03 10:02:38 -08:00
Maciej Pytel 3f0da8947a Use listers in scale-up 2019-01-02 15:56:01 +01:00
Kubernetes Prow Robot ab7f1e69be Merge pull request #1464 from losipiuk/lo/stockouts2
Better quota-exceeded/stockout handling
2018-12-31 05:28:08 -08:00
Łukasz Osipiuk 9689b30ee4 Do not use time.Now() in RegisterFailedScaleUp 2018-12-28 17:17:07 +01:00
Łukasz Osipiuk da5bef307b Allow updating Increase for ScaleUpRequest in ClusterStateRegistry 2018-12-28 17:17:07 +01:00
Maciej Pytel 60babe7158 Use kubernetes lister for daemonset instead of custom one
Also migrate to using apps/v1.DaemonSet instead of old
extensions/v1beta1.
2018-12-28 13:55:41 +01:00
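A sketch of the replacement, assuming standard client-go informers and the apps/v1 API group (the helper names here are illustrative):

```go
package sketch

import (
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	appsv1listers "k8s.io/client-go/listers/apps/v1"
)

// newDaemonSetLister wires up the stock apps/v1 DaemonSet lister in place of
// a hand-rolled one built on extensions/v1beta1.
func newDaemonSetLister(client kubernetes.Interface, stopCh <-chan struct{}) appsv1listers.DaemonSetLister {
	factory := informers.NewSharedInformerFactory(client, time.Hour)
	lister := factory.Apps().V1().DaemonSets().Lister()
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	return lister
}

func listAllDaemonSets(lister appsv1listers.DaemonSetLister) ([]*appsv1.DaemonSet, error) {
	return lister.List(labels.Everything())
}
```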
Andrew McDermott 5bc77f051c UPSTREAM: <carry>: fix calculation of max cluster size
When scaling up, the calculation for the maximum size of the cluster
based on `--max-nodes-total` doesn't take into account any nodes that
are in the process of coming up. This allows the cluster to grow
beyond the size specified.

With this change I now see:

scale_up.go:266] 21 other pods are also unschedulable
scale_up.go:423] Best option to resize: openshift-cluster-api/amcdermo-ca-worker-us-east-2b
scale_up.go:427] Estimated 18 nodes needed in openshift-cluster-api/amcdermo-ca-worker-us-east-2b
scale_up.go:432] Capping size to max cluster total size (23)
static_autoscaler.go:275] Failed to scale up: max node total count already reached
2018-12-18 17:05:19 +00:00
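The arithmetic of the fix, as a sketch: the headroom left under --max-nodes-total has to subtract upcoming (not yet registered) nodes as well as current ones before the estimated node count is capped. Names are illustrative:

```go
package sketch

// capAtMaxNodesTotal limits a planned scale-up so that current + upcoming +
// new nodes never exceeds --max-nodes-total (0 means no limit).
func capAtMaxNodesTotal(newNodeCount, currentNodes, upcomingNodes, maxNodesTotal int) int {
	if maxNodesTotal <= 0 {
		return newNodeCount
	}
	headroom := maxNodesTotal - currentNodes - upcomingNodes
	if headroom < 0 {
		headroom = 0
	}
	if newNodeCount > headroom {
		newNodeCount = headroom // "Capping size to max cluster total size"
	}
	return newNodeCount
}
```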
Łukasz Osipiuk 016bf7fc2c Use k8s.io/klog instead of github.com/golang/glog 2018-11-26 17:30:31 +01:00
k8s-ci-robot 7008fb50be Merge pull request #1380 from losipiuk/lo/backoff
Make Backoff interface
2018-11-07 05:13:43 -08:00
Aleksandra Malinowska 6febc1ddb0 Fix formatted log messages 2018-11-06 14:51:43 +01:00
Aleksandra Malinowska bf6ff4be8e Clean up estimators 2018-11-06 14:15:42 +01:00
Łukasz Osipiuk 0e2c3739b7 Use NodeGroup as key in Backoff 2018-10-30 18:17:26 +01:00
Łukasz Osipiuk 55fc1e2f00 Store NodeGroup in ScaleUpRequest and ScaleDownRequest 2018-10-30 18:03:04 +01:00
Maciej Pytel 6f5e6aab6f Move node group balancing to processor
The goal is to allow customization of this logic
for different use cases and cloud providers.
2018-10-25 14:04:05 +02:00
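A sketch of the hook that the balancing logic moved behind; parameter and result types are simplified relative to the real processor interface:

```go
package sketch

import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
)

// NodeGroupListProcessor lets a deployment customize which node groups
// scale-up considers, e.g. by adding similar groups to balance across.
type NodeGroupListProcessor interface {
	Process(nodeGroups []cloudprovider.NodeGroup,
		unschedulablePods []*apiv1.Pod) ([]cloudprovider.NodeGroup, error)
	CleanUp()
}
```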
Łukasz Osipiuk a266420f6a Recalculate clusterStateRegistry after adding multiple node groups 2018-10-02 17:15:20 +02:00
Łukasz Osipiuk 437efe4af6 If possible use nodeInfo based on created node group 2018-10-02 15:46:45 +02:00
Jakub Tużnik 8179e4e716 Refactor the scale-(up|down) status processors so that they have more info available
Replace the simple boolean ScaledUp property of ScaleUpStatus with a more
comprehensive ScaleUpResult. Add more possible values to ScaleDownResult.
Refactor the processors execution so that they are always executed every
iteration, even if RunOnce exits earlier.
2018-09-20 17:12:02 +02:00
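A sketch of what replacing the boolean looks like; the exact set of result values is illustrative:

```go
package sketch

// ScaleUpResult replaces the old boolean ScaledUp with a richer outcome.
type ScaleUpResult int

const (
	ScaleUpSuccessful         ScaleUpResult = iota // at least one group was resized
	ScaleUpError                                   // an error aborted the scale-up
	ScaleUpNoOptionsAvailable                      // no group could help the pending pods
	ScaleUpNotNeeded                               // nothing was pending
)

// ScaleUpStatus carries the result to the (always-run) status processors.
type ScaleUpStatus struct {
	Result ScaleUpResult
	// ...other fields elided...
}
```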
Łukasz Osipiuk bf8cfef10b NodeGroupManager.CreateNodeGroup can return extra created node groups. 2018-09-19 13:55:51 +02:00
Łukasz Osipiuk 705a6d87e2 fixup! Call CheckPodsSchedulableOnNode in scale_up.go via caching layer 2018-09-17 13:01:19 +02:00
Łukasz Osipiuk 0ad4efe920 Call CheckPodsSchedulableOnNode in scale_up.go via caching layer 2018-09-13 17:01:15 +02:00
Aleksandra Malinowska b88e6019f7 code review fixes 3 2018-08-28 18:11:04 +02:00
Aleksandra Malinowska 5620f76c62 Pass NoScaleUpInfo to ScaleUpStatus processor 2018-08-28 14:26:03 +02:00