Merge pull request #4095 from JadeFlute0127/dev2

fix spell error in test and docs
karmada-bot 2023-10-31 14:35:35 +08:00 committed by GitHub
commit 3079ed201a
14 changed files with 24 additions and 24 deletions


@ -177,7 +177,7 @@ spec:
name: my-config
```
-Creating a propagation policy to propagate the deployment to specific clusters. To enable auto-propagating dependencies, we need to set `propagateDeps` as `ture`.
+Creating a propagation policy to propagate the deployment to specific clusters. To enable auto-propagating dependencies, we need to set `propagateDeps` as `true`.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
@ -235,4 +235,4 @@ spec:
- Propose E2E test cases according to user stories above:
* Test if the dependent resources propagated to needed clusters.
-## Alternatives
+## Alternatives
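For context, the `propagateDeps` switch discussed in the hunk above lives on the `PropagationPolicy` spec. A minimal sketch, with placeholder workload and cluster names, might look like this:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: my-app-propagation            # placeholder name
spec:
  propagateDeps: true                 # auto-propagate dependencies such as the referenced ConfigMap
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app                    # placeholder Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```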


@ -348,7 +348,7 @@ For example:
| Run once an hour at the beginning of the hour | 0 * * * * |
#### karmada-webhook
-In order to make sure the applied configuration is corrent, some validations are necessary for `CronFederatedHPA`, these logic should be implemented in `karmada-webhook`:
+In order to make sure the applied configuration is correct, some validations are necessary for `CronFederatedHPA`, these logic should be implemented in `karmada-webhook`:
* If `spec.scaleTargetRef.apiVersion` is `autoscaling.karmada.io/v1alpha1`, `spec.scaleTargetRef.kind` can only be `FederatedHPA`, `spec.rules[*].targetMinReplicas` and `spec.rules[*].targetMaxReplicas` cannot be empty at the same time.
* If `spec.scaleTargetRef.apiVersion` is not `autoscaling.karmada.io/v1alpha1`, `spec.rules[*].targetReplicas` cannot be empty.
* `spec.rules[*].schedule` should be a valid cron format.
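For reference, a `CronFederatedHPA` that satisfies the validations listed above could be sketched as follows; the object and rule names are placeholders, and the field layout follows the fields quoted in the rules above:
```yaml
apiVersion: autoscaling.karmada.io/v1alpha1
kind: CronFederatedHPA
metadata:
  name: my-cron-fhpa                  # placeholder name
spec:
  scaleTargetRef:
    apiVersion: autoscaling.karmada.io/v1alpha1
    kind: FederatedHPA                # must be FederatedHPA for this apiVersion
    name: my-fhpa                     # placeholder FederatedHPA name
  rules:
    - name: scale-up-morning          # placeholder rule name
      schedule: "0 8 * * *"           # valid cron expression: 08:00 every day
      targetMinReplicas: 5            # min/max must not both be empty for FederatedHPA targets
      targetMaxReplicas: 10
```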


@ -83,7 +83,7 @@ The diagram is explained below:
* Here give a service named `foo` in `member1`. We should use the full domain name: `foo.default.svc.cluster.local` to access this service. But we cannot use the same domain name in `member2`.
-* `Karmada` exports the service through `ServiceExport` and imports it into `member2` through `ServiceImport`. At this time, the shadow service `derived-foo` will appear in `member2`. User in `memeber2` can access to the `foo` service in `memeber1` by using `derived-foo.default.svc.cluster.local`.
+* `Karmada` exports the service through `ServiceExport` and imports it into `member2` through `ServiceImport`. At this time, the shadow service `derived-foo` will appear in `member2`. User in `member2` can access to the `foo` service in `member1` by using `derived-foo.default.svc.cluster.local`.
* After the `coreDNS` installed with `multicluster` found the `ServiceImport` had been created, it will analyze `name`, `namespace`, and `ips` fields of the `ServiceImport` and generate the rr records. In this example, the `ips` in `ServiceImport` can be the `clusterIP` of `derived-foo`.
@ -352,4 +352,4 @@ Here are another two ways I know:
First, using the same service name, not service with`derived-` prefix on other cluster. More details see **[bivas](https://github.com/bivas)** 's pr [proposal for native service discovery](https://github.com/karmada-io/karmada/pull/3694)
-Second, install and using [submariner](https://submariner.io/) .
+Second, install and using [submariner](https://submariner.io/) .
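For context, the export/import flow described above is driven by the MCS API objects; a minimal sketch for the `foo` service (the port is a placeholder) could be:
```yaml
# Exported from member1 through the Karmada control plane
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: foo
  namespace: default
---
# Imported into member2; Karmada then creates the shadow service `derived-foo`
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: foo
  namespace: default
spec:
  type: ClusterSetIP
  ports:
    - port: 80                        # placeholder port
      protocol: TCP
```
Workloads in `member2` can then reach the backend through `derived-foo.default.svc.cluster.local`, as described above.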


@ -169,7 +169,7 @@ Users only need to add `conflict resolution` annotations in the `ResourceTemplat
#### Story 4
-Similarly, if multiple `Deployment` is defined in one `PropagationPolicy` , and users hope `Karmada` ignoring takeover the conflict `Deployment` by default, but forcing takeover individual specificed conflict `Deployment` :
+Similarly, if multiple `Deployment` is defined in one `PropagationPolicy` , and users hope `Karmada` ignoring takeover the conflict `Deployment` by default, but forcing takeover individual specified conflict `Deployment` :
A feasible practice is to declare `conflictResolution: Abort` in the `PropagationPolicy` (or leave it blank), and annotate `work.karmada.io/conflict-resolution: overwrite` in the `ResourceTemplate`.
@ -309,6 +309,6 @@ No such api modify even makes code more clean, but two reasons are under my cons
Adding this field to CRDs including `ResourceBinding` can more clearly demonstrate this ability to users than adding annotations.
2. Adding annotations is just a **compatible** way for individual exceptions, even if we remove it, it's still justifiable. Assuming it doesn't exist,
-we still need to modify the api of `ResourceBinding`. I mean, the annotation is just a addons, our desgin shouldn't overdependence on it.
+we still need to modify the api of `ResourceBinding`. I mean, the annotation is just a addons, our design shouldn't overdependence on it.
3. More convenient for code implementation
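For context, the combination described in Story 4 above can be sketched like this, with placeholder names and image: the policy defaults to `Abort`, while a single resource template opts into takeover through the annotation:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: deployments-propagation       # placeholder name
spec:
  conflictResolution: Abort           # default: do not take over conflicting resources
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment                # selects the Deployments matched by this policy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: special-app                   # placeholder workload that should be taken over
  annotations:
    work.karmada.io/conflict-resolution: overwrite   # per-resource exception: force takeover
spec:
  replicas: 2
  selector:
    matchLabels:
      app: special-app
  template:
    metadata:
      labels:
        app: special-app
    spec:
      containers:
        - name: app
          image: nginx:1.25           # placeholder image
```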


@ -29,7 +29,7 @@ This proposal aims to provide a solution for users to teach Karmada to learn the
## Motivation
-Nowadays, lots of people or projects extend Kubernetes by `Custom Resource Defination`. In order to propagate the
+Nowadays, lots of people or projects extend Kubernetes by `Custom Resource Definition`. In order to propagate the
custom resources, Karmada has to learn the structure of the custom resource.
### Goals
@ -429,4 +429,4 @@ configuration would be a little complex to users.
[1]: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
-[2]: https://github.com/karmada-io/karmada/tree/master/docs/proposals/configurable-local-value-retention
+[2]: https://github.com/karmada-io/karmada/tree/master/docs/proposals/configurable-local-value-retention


@ -31,7 +31,7 @@ The Cluster Accurate Scheduler Estimator aims to fix these problems.
### Goals
-- Make the available replica estimation more acurate for scheduler decision reference.
+- Make the available replica estimation more accurate for scheduler decision reference.
- Allow user to specify node claim such as `NodeAffinity`, `NodeSelector` and `Tolerations` for multi-cluster scheduling.
### Non-Goals
@ -137,7 +137,7 @@ type NodeClaim struct {
First, the existing plugins in Karmada Scheduler such as ClusterAffinity, APIInstalled and TaintToleration will select the suitable clusters.
-Based on this prefilter result, when assigning replicas, the Karmada Scheduler could try to calculate cluster max available replicas by starting gRPC requests concurrently to the Cluster Accurate Scheduler Estimator. At last, the Cluster Accurate Scheduler Estimator will soon return how many available replicas that the cluster could produce. Then the Karmada Scheduler assgin replicas into different clusters in terms of the estimation result.
+Based on this prefilter result, when assigning replicas, the Karmada Scheduler could try to calculate cluster max available replicas by starting gRPC requests concurrently to the Cluster Accurate Scheduler Estimator. At last, the Cluster Accurate Scheduler Estimator will soon return how many available replicas that the cluster could produce. Then the Karmada Scheduler assign replicas into different clusters in terms of the estimation result.
We could implement this by modifying function calClusterAvailableReplicas to an interface. The previous estimation method, based on `ResourceSummary` in `Cluster.Status`, is able to be a default normal estimation approach. Now we could just add a switch to determine whether Cluster Accurate Scheduler Estimator is applied, while the estimator via `ResourceSummary` could be a default one that does not support disabled. In the future, after the scheduler profile is added, a user could customize the config by using a profile.


@ -65,7 +65,7 @@ Now we add a new cluster member4. We may want to reschedule some replicas toward
### Architecture
-It is noticed that this design only focus on User Story 1, which means that only unscheduable pods are included for descheduling, usually happening when cluster resources are insufficient. Other stragety is not considered in this proposal because it needs more discussion.
+It is noticed that this design only focus on User Story 1, which means that only unscheduable pods are included for descheduling, usually happening when cluster resources are insufficient. Other strategy is not considered in this proposal because it needs more discussion.
Here is the descheduler workflow.
@ -170,4 +170,4 @@ rpc UnschedulableReplicas(UnschedulableReplicasRequest) returns (UnschedulableRe
- E2E Test covering:
- Deploy karmada-descheduler.
- Rescheduling replicas when resources are insufficient.


@ -298,7 +298,7 @@ continue to evaluate from this affinity term.
#### karmada-controller-manager
-When creating or updating `ResourceBidning`/`ClusterResourceBinding`, the added
+When creating or updating `ResourceBinding`/`ClusterResourceBinding`, the added
`OrderedClusterAffinities` in `PropagationPolicy`/`ClusterPropagationPolicy` should
be synced.


@ -77,7 +77,7 @@ var _ = framework.SerialDescribe("Aggregated Kubernetes API Endpoint testing", f
})
ginkgo.BeforeEach(func() {
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", clusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: secretStoreNamespace,
@ -91,7 +91,7 @@ var _ = framework.SerialDescribe("Aggregated Kubernetes API Endpoint testing", f
})
ginkgo.AfterEach(func() {
ginkgo.By(fmt.Sprintf("Unjoinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Unjoining cluster: %s", clusterName), func() {
opts := unjoin.CommandUnjoinOption{
DryRun: false,
ClusterNamespace: secretStoreNamespace,


@ -114,7 +114,7 @@ var _ = ginkgo.Describe("FederatedResourceQuota auto-provision testing", func()
})
ginkgo.It("federatedResourceQuota should be propagated to new joined clusters", func() {
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", clusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",


@ -365,7 +365,7 @@ var _ = framework.SerialDescribe("Karmadactl join/unjoin testing", ginkgo.Labels
})
ginkgo.BeforeEach(func() {
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", clusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",


@ -89,7 +89,7 @@ var _ = ginkgo.Describe("[namespace auto-provision] namespace auto-provision tes
})
ginkgo.BeforeEach(func() {
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", clusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",


@ -89,7 +89,7 @@ var _ = ginkgo.Describe("[cluster unjoined] reschedule testing", func() {
})
ginkgo.It("deployment reschedule testing", func() {
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", newClusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", newClusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",
@ -224,7 +224,7 @@ var _ = ginkgo.Describe("[cluster joined] reschedule testing", func() {
framework.RemovePropagationPolicy(karmadaClient, policy.Namespace, policy.Name)
})
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", newClusterName))
ginkgo.By(fmt.Sprintf("Joining cluster: %s", newClusterName))
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",
@ -283,7 +283,7 @@ var _ = ginkgo.Describe("[cluster joined] reschedule testing", func() {
return testhelper.IsExclude(newClusterName, targetClusterNames)
}, pollTimeout, pollInterval).Should(gomega.BeTrue())
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", newClusterName))
ginkgo.By(fmt.Sprintf("Joining cluster: %s", newClusterName))
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",


@ -151,7 +151,7 @@ var _ = ginkgo.Describe("[karmada-search] karmada search testing", ginkgo.Ordere
searchObject(fmt.Sprintf(pathNSDeploymentsFmt, testNamespace), m1DmName, true)
})
ginkgo.It("[memeber2 deployments namespace] should be not searchable", func() {
ginkgo.It("[member2 deployments namespace] should be not searchable", func() {
searchObject(fmt.Sprintf(pathNSDeploymentsFmt, testNamespace), m2DmName, false)
})
@ -789,7 +789,7 @@ var _ = ginkgo.Describe("[karmada-search] karmada search testing", ginkgo.Ordere
// search cache should not have the deployment
searchObject(pathAllDeployments, existsDeploymentName, false)
// join the cluster
ginkgo.By(fmt.Sprintf("Joinning cluster: %s", clusterName), func() {
ginkgo.By(fmt.Sprintf("Joining cluster: %s", clusterName), func() {
opts := join.CommandJoinOption{
DryRun: false,
ClusterNamespace: "karmada-cluster",