Merge pull request #3057 from Fish-pro/fix/statement
Fix syntax errors in the document
@@ -24,7 +24,7 @@ replicas during scheduling.
 ## Workloads Observation from Karmada Control Plane
 After workloads (e.g. Deployments) are propagated to member clusters, users may also want to get the overall workload
 status across many clusters, especially the status of each `pod`. In this release, a `get` subcommand was introduced to
-the `kubectl-karmada`. With this command, user are now able get all kinds of resources deployed in member clusters from
+the `kubectl-karmada`. With this command, user are now able to get all kinds of resources deployed in member clusters from
 the Karmada control plane.

 For example (get `deployment` and `pods` across clusters):
@@ -125,7 +125,7 @@ With these improvements, Karmada can easily manage hundreds of huge clusters. Th

 #### Dependencies
 - Karmada is now built with Golang 1.18.3. (@RainbowMango, [#2032](https://github.com/karmada-io/karmada/pull/2032))
-- Kubernetes dependencies are now updated to v1.24.2. (@RainbowMango, [#2050](https://github.com/karmada-io/karmada/pull/2050))
+- All Kubernetes dependencies are now updated to v1.24.2. (@RainbowMango, [#2050](https://github.com/karmada-io/karmada/pull/2050))

 #### Deprecation
 - `karmadactl`: Removed `--dry-run` flag from `describe`, `exec` and `log` commands. (@carlory, [#2023](https://github.com/karmada-io/karmada/pull/2023))
@@ -32,7 +32,7 @@ This document proposes a mechanism to specify that a member cluster resource sho

 ## Motivation

-When a cluster is unjoined, karmada should provide a mechanism to cleanup the resources propagated by karmada. Currently, when unjoin a cluster, `Karmada` first try to remove propagated resource, and will skip remove if the cluster not ready.
+When a cluster is unjoined, Karmada should provide a mechanism to clean up the resources propagated by Karmada. Currently, when unjoining a cluster, `Karmada` first tries to remove the propagated resources, and will skip the removal if the cluster is not ready.

 ### Goals

@@ -41,12 +41,12 @@ When a cluster is unjoined, karmada should provide a mechanism to cleanup the re

 ## Proposals

-The Cluster struct should be updated to contain a `RemoveStrategy` member. `RemoveStrategy` will initially support two values, `Needless` and `Required`.
+The Cluster struct should be updated to contain a `RemoveStrategy` member. `RemoveStrategy` now supports `Needless` and `Required`.

-- The `Needless` strategy will not cleanup any propagated resources.
+- The `Needless` strategy will not clean up any propagated resources.
 - The `Required` strategy will halt the unjoin process and set the `Cluster resource` in a failed state when encountering errors. Unjoining is blocked until all propagated resources have been removed successfully.

-By default, `RemoveStrategy` will be `needless` on all cluster. A user must explicitly set a removal strategy on the join cluster.
+`RemoveStrategy` defaults to `needless` on all cluster. A user must explicitly set a removal strategy for the joined cluster.


 ### Implementation details
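For context on the `RemoveStrategy` hunk above, a minimal Go sketch of what such an API could look like (the type and constant names are illustrative assumptions, not the actual Karmada `Cluster` API):

```go
package v1alpha1

// RemoveStrategy describes how propagated resources should be handled when a
// member cluster is unjoined. Hypothetical type for illustration only.
type RemoveStrategy string

const (
	// RemoveStrategyNeedless skips cleanup of propagated resources.
	RemoveStrategyNeedless RemoveStrategy = "Needless"
	// RemoveStrategyRequired blocks unjoining until every propagated resource
	// has been removed successfully.
	RemoveStrategyRequired RemoveStrategy = "Required"
)

// ClusterSpec is a trimmed-down stand-in for the real Cluster spec, showing
// only where a RemoveStrategy field might live. When left empty, a controller
// following the proposal above would fall back to the needless default.
type ClusterSpec struct {
	// +optional
	RemoveStrategy RemoveStrategy `json:"removeStrategy,omitempty"`
}
```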
@@ -31,7 +31,7 @@ this KEP. Describe why the change is important and the benefits to users.
 1. Bring HPA from single cluster to multiple clusters.
 1. Compatible with the HPA related resources in the single cluster.
 1. Tolerate the disaster of member cluster or karmada control plane.
-1. It is better to integrate well with the scenarios such as workloads shifting and cloud burst.
+1. It is better to integrate well with the scenarios such as workloads shifting and cloud bursting.
 1. It is better to support both Kubernetes HPA and customized HPA.

 ### Non-Goals
@@ -57,10 +57,10 @@ bogged down.
 -->

 #### Story 1
-For a platform developer using Kubernetes, now I want to use Karmada to run apps on multiclusters. But the CD ecosystem is built based on the single cluster and the original HPA is heavily used. So I want to migrate the HPA resources to multiclusters without too much efforts. It is better to be compatible with the schema of HPA used in single cluster.
+As a platform developer using Kubernetes, I want to use Karmada to run apps on multiclusters, but my CD ecosystem is built based on a single cluster and the original HPA is heavily used. So I want to migrate the HPA resources to multiclusters without much effort. It is better to be compatible with the schema of HPA used in a single cluster.

 #### Story 2
-For an application developer, I create an HPA CR for the application running on Karmada with FederatedHPA enabled.
+As an application developer, I create an HPA CR for the application running on Karmada with FederatedHPA enabled.
 ```
 target cpu util 30%
 min replica 3
@@ -99,8 +99,8 @@ Consider including folks who also work outside the SIG or subproject.

 There are no new CRDs or resources introduced in this design. All the core functions are implemented in the `FederatedHPAController`.
 1. The Kubernetes HPA components are still used in the member cluster and can work standalone.
-1. The FederatedHPAController is responsible for the purposes
-1. Watch the HPA resource and `PropagationPolicy/ResourceBinding` corresponding to the `Workload`, to learn which clusters the HPA resource should propagated to and what weight the workloads should be spread between clusters.
+1. The FederatedHPAController is responsible for the following purposes:
+1. Watch the HPA resource and `PropagationPolicy/ResourceBinding` corresponding to the `Workload`, to learn which clusters the HPA resource should be propagated to and at what weight the workloads should be spread between clusters.
 1. Create `Work` corresponding to HPA resource to spread the HPA to clusters. Distribute `min/max` fields of the HPA resources between member clusters based on the weight learned.
 1. Redistribute 'some' fields of the HPA resources after `PropagationPolicy/ResourceBinding` corresponding to `Workload` is changed.
 1. There will be `ResourceInterpreterWebhook`s provided for different types of `Workload`. They are responsible for retaining the `replicas` in the member clusters and aggregate statuses.
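As a rough illustration of the "Distribute `min/max` fields of the HPA resources between member clusters based on the weight learned" step in the hunk above, here is a hedged Go sketch of a weighted split (a hypothetical helper, not the actual FederatedHPAController code):

```go
package main

import (
	"fmt"
	"sort"
)

// split divides total replicas across clusters proportionally to weight,
// handing the leftover replicas to the largest fractional remainders so the
// per-cluster values always add up to total. Weights are assumed positive.
func split(total int32, weights map[string]int64) map[string]int32 {
	out := make(map[string]int32, len(weights))
	names := make([]string, 0, len(weights))
	var sum int64
	for name, w := range weights {
		sum += w
		names = append(names, name)
	}
	if sum <= 0 {
		return out
	}
	sort.Strings(names) // deterministic order

	type frac struct {
		name string
		rem  int64
	}
	fracs := make([]frac, 0, len(names))
	var assigned int32
	for _, name := range names {
		q := int64(total) * weights[name]
		out[name] = int32(q / sum)
		assigned += out[name]
		fracs = append(fracs, frac{name: name, rem: q % sum})
	}
	sort.Slice(fracs, func(i, j int) bool { return fracs[i].rem > fracs[j].rem })
	for i := int32(0); i < total-assigned; i++ {
		out[fracs[i].name]++
	}
	return out
}

func main() {
	weights := map[string]int64{"member1": 2, "member2": 1}
	// e.g. min=3 splits 2/1 and max=10 splits 7/3 between the two clusters.
	fmt.Println("min:", split(3, weights), "max:", split(10, weights))
}
```

A redistribution after the `PropagationPolicy/ResourceBinding` changes would simply rerun the same split with the new weights.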
@@ -114,7 +114,7 @@ Consider the following in developing a test plan for this enhancement:
 - Will there be e2e and integration tests, in addition to unit tests?
 - How will it be tested in isolation vs with other components?

-No need to outline all of the test cases, just the general strategy. Anything
+No need to outline all test cases, just the general strategy. Anything
 that would count as tricky in the implementation, and anything particularly
 challenging to test, should be called out.

@@ -286,7 +286,7 @@ Consider the following in developing a test plan for this enhancement:
 - Will there be e2e and integration tests, in addition to unit tests?
 - How will it be tested in isolation vs with other components?

-No need to outline all of the test cases, just the general strategy. Anything
+No need to outline all test cases, just the general strategy. Anything
 that would count as tricky in the implementation, and anything particularly
 challenging to test, should be called out.

@@ -19,9 +19,9 @@ authors:

 With the widespread used of multi-clusters, the single-cluster quota management `ResourceQuota` in Kubernetes can no longer meet the administrator's resource management and restriction requirements for federated clusters. Resource administrators often need to stand on the global dimension to manage and control the total consumption of resources by each business.

-Creating a corresponding `namespace` and `ResourceQuotas` under each Kubernetes is the usual practice, and then Kubernetes will limit the resources by `ResourceQuotas`. However, with the growth of the number of businesses, the expansion and contraction of sub-clusters, the number of available resources and resource types of each cluster are different, which brings many problems to the administrator.
+Creating a corresponding `namespace` and `ResourceQuotas` under each Kubernetes cluster is a common practice, and then Kubernetes will limit the resources according to `ResourceQuotas`. However, today's administrators may be challenged by the exploding service volume, sub-cluster scaling needs, and resources of different amounts and types.

-In addition, Karmada supports the propagation of `ResourceQuota` objects to Kubernetes clusters through PropagationPolicy, so as to achieve the purpose of creating quotas on multi-clusters by k8s native interfaces. However, it is impossible to freely adjust and limit the global resource usage of a business by PropagationPolicy, and it is quite troublesome to expand such a requirement ont it. We need a global quota for Karmada, not just `ResourceQuota` on sub-clusters.
+In addition, Karmada supports the propagation of `ResourceQuota` objects to Kubernetes clusters through a PropagationPolicy to create quotas for multi-clusters using native K8s APIs. However, it is impossible to freely adjust and limit the global resource usage of a business by PropagationPolicies solely, and doing this could be really troublesome. We need a global quota for Karmada, not just `ResourceQuota` on sub-clusters.

 This document describes a quota system **`KarmadaQuota`** for Karmada. As a part of admission control, the **`KarmadaQuota`** enforcing hard resource usage limits per namespace.

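The "enforcing hard resource usage limits per namespace" sentence above boils down to an admission-time comparison: already-admitted usage plus the incoming request must stay within the hard limit. A minimal Go sketch of that check, assuming hypothetical types rather than the real KarmadaQuota webhook:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// exceedsQuota returns the resource names that would go over the hard limit
// if the request were admitted (used + request must stay <= hard).
func exceedsQuota(hard, used, request map[string]resource.Quantity) []string {
	var over []string
	for name, limit := range hard {
		want := used[name].DeepCopy()
		req := request[name]
		want.Add(req)
		if want.Cmp(limit) > 0 {
			over = append(over, name)
		}
	}
	return over
}

func main() {
	hard := map[string]resource.Quantity{"cpu": resource.MustParse("10"), "memory": resource.MustParse("20Gi")}
	used := map[string]resource.Quantity{"cpu": resource.MustParse("8"), "memory": resource.MustParse("10Gi")}
	request := map[string]resource.Quantity{"cpu": resource.MustParse("4"), "memory": resource.MustParse("2Gi")}
	fmt.Println("over quota on:", exceedsQuota(hard, used, request)) // prints: over quota on: [cpu]
}
```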
@@ -77,10 +77,10 @@ Now we can create a total quota for business A (business representative) on karm

 ### Risks and Mitigations

-1. If a customer create a pod in member cluster itself, karmada is not able to perceive them, and then KarmadaQuota will not limit them too. It reverts the usage of single cluster.
+1. If a customer creates a pod in a member cluster, Karmada is not able to perceive it, and KarmadaQuota will not limit it too. It reverts the usage of a single cluster.

-2. If a customer use karmada to propagate a controller which can create pods in the member clusters, as the quota webhook can not check these pods's resources, these pod will be created in the member clusters lead to a customer may use resources more than the quota limit. There are currently no mitigation measures yet, please try to avoid this usage, and continue to pay attention to the karmada community.
-See [discuss about this situation](https://github.com/karmada-io/karmada/pull/632).
+2. If a customer use karmada to propagate a controller which can create pods in the member clusters, as the quota webhook can't check these pods' resources, these pods will be created in the member clusters, which leads to resource use more than limited. There are currently no mitigation measures. Please try to avoid such a use, and stay tuned for the updates from the Karmada community.
+See [discuss this situation](https://github.com/karmada-io/karmada/pull/632).

 ## Design Details

@@ -139,13 +139,13 @@ First, the existing plugins in Karmada Scheduler such as ClusterAffinity, APIIns

 Based on this prefilter result, when assigning replicas, the Karmada Scheduler could try to calculate cluster max available replicas by starting gRPC requests concurrently to the Cluster Accurate Scheduler Estimator. At last, the Cluster Accurate Scheduler Estimator will soon return how many available replicas that the cluster could produce. Then the Karmada Scheduler assgin replicas into different clusters in terms of the estimation result.

-We could implement this by modifying function calClusterAvailableReplicas to a interface. The previous estimation method, based on `ResourceSummary` in `Cluster.Status`, is able to be a default normal estimation approach. Now we could just add a switch to determine whether Cluster Accurate Scheduler Estimator is applied, while the estimator via `ResourceSummary` could be a default one that does not support disabled. In the future, after the scheduler profile is added, a user could customize the config by using a profile.
+We could implement this by modifying function calClusterAvailableReplicas to an interface. The previous estimation method, based on `ResourceSummary` in `Cluster.Status`, is able to be a default normal estimation approach. Now we could just add a switch to determine whether Cluster Accurate Scheduler Estimator is applied, while the estimator via `ResourceSummary` could be a default one that does not support disabled. In the future, after the scheduler profile is added, a user could customize the config by using a profile.

 Furthermore, replica estimation can be considered as a new scheduler plugin.

 ### Karmada Cluster Accurate Scheduler Estimator

-Cluster Accurate Scheduler Estimator is a independent component that works as a gRPC server. Before its server starts, a pod and node informer associated with a member cluster will be created as a cache. Once the cache has been synced, the gRPC server would start and serve the incoming scheduler request as a replica estimator. Each Cluster Accurate Scheduler Estimator serves for one cluster, as same as `karmada-agent`.
+Cluster Accurate Scheduler Estimator is an independent component that works as a gRPC server. Before its server starts, a pod and node informer associated with a member cluster will be created as a cache. Once the cache has been synced, the gRPC server would start and serve the incoming scheduler request as a replica estimator. Each Cluster Accurate Scheduler Estimator serves for one cluster, as same as `karmada-agent`.

 There are five steps for a scheduler estimation:

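To make the "modifying function calClusterAvailableReplicas to an interface" idea above concrete, here is a hedged Go sketch: a replica-estimator interface with a default `ResourceSummary`-style implementation, plus a slot where a gRPC-backed Cluster Accurate Scheduler Estimator could be registered behind a switch. Names and types are illustrative assumptions, not Karmada's actual code.

```go
package main

import "fmt"

// ReplicaRequirements is a simplified per-replica resource request.
type ReplicaRequirements struct {
	CPUMilli int64
	MemoryMi int64
}

// ReplicaEstimator abstracts how many more replicas a cluster can host.
// The scheduler would consult every registered estimator and take the minimum.
type ReplicaEstimator interface {
	MaxAvailableReplicas(cluster string, req ReplicaRequirements) int32
}

// resourceSummaryEstimator mimics the default path: estimate from the
// aggregated free resources reported in Cluster.Status.ResourceSummary.
type resourceSummaryEstimator struct {
	freeCPUMilli map[string]int64
	freeMemoryMi map[string]int64
}

func (e resourceSummaryEstimator) MaxAvailableReplicas(cluster string, req ReplicaRequirements) int32 {
	if req.CPUMilli <= 0 || req.MemoryMi <= 0 {
		return 0
	}
	byCPU := e.freeCPUMilli[cluster] / req.CPUMilli
	byMem := e.freeMemoryMi[cluster] / req.MemoryMi
	if byMem < byCPU {
		return int32(byMem)
	}
	return int32(byCPU)
}

func main() {
	// A gRPC-backed "accurate" estimator would implement the same interface
	// and could be appended here when the switch enables it.
	estimators := []ReplicaEstimator{
		resourceSummaryEstimator{
			freeCPUMilli: map[string]int64{"member1": 8000},
			freeMemoryMi: map[string]int64{"member1": 16384},
		},
	}
	req := ReplicaRequirements{CPUMilli: 500, MemoryMi: 512}
	for _, e := range estimators {
		fmt.Println("member1 can host up to", e.MaxAvailableReplicas("member1", req), "more replicas")
	}
}
```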