[FIX] workloadspread yaml format error (#134)

Signed-off-by: 刘硕 <liushuo@zetyun.com>
Co-authored-by: 刘硕 <liushuo@zetyun.com>
ls-2018 2023-09-14 10:08:10 +08:00 committed by GitHub
parent c33a011c0b
commit 02bc93a15e
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
16 changed files with 210 additions and 210 deletions

View File

@@ -49,16 +49,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:

View File

@@ -48,16 +48,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:

View File

@@ -42,16 +42,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -78,12 +78,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -144,10 +144,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -178,7 +178,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.
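The corrected fields from the hunks above can be combined into one full manifest. This is a hedged sketch: the subset names, zone labels, and the Deployment name are illustrative, not taken from the diff.

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: WorkloadSpread
metadata:
  name: workloadspread-demo
spec:
  targetRef:                      # the workload whose Pods are spread
    apiVersion: apps/v1
    kind: Deployment
    name: workload-demo           # illustrative name
  scheduleStrategy:
    type: Adaptive                # or Fixed (the default)
    adaptive:
      rescheduleCriticalSeconds: 30
  subsets:
  - name: subset-a                # unique within this WorkloadSpread
    requiredNodeSelectorTerm:     # hard constraint: must land in zone-a
      matchExpressions:
      - key: topology.kubernetes.io/zone
        operator: In
        values:
        - zone-a
    preferredNodeSelectorTerms:   # soft, weighted constraint
    - weight: 1
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value
    maxReplicas: 3                # integer >= 0; empty means unlimited
    tolerations: [ ]
    patch:
      metadata:
        labels:
          subset-name: subset-a
  - name: subset-b                # no maxReplicas: holds the remaining replicas
```

With `type: Adaptive`, Pods that stay unschedulable in `subset-a` longer than `rescheduleCriticalSeconds` can be rescheduled to `subset-b`.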

View File

@@ -42,16 +42,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -78,12 +78,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -144,10 +144,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -178,7 +178,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.

View File

@@ -42,16 +42,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -78,12 +78,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -144,10 +144,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -178,7 +178,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.

View File

@@ -42,16 +42,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -78,12 +78,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -144,10 +144,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -178,7 +178,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.

View File

@@ -46,16 +46,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -82,12 +82,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -148,10 +148,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -182,7 +182,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.

View File

@@ -46,16 +46,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -82,12 +82,12 @@ spec:
### sub-fields
- `name`: the name of the subset. It is unique within a WorkloadSpread and identifies one topology domain.
- `maxReplicas`: the maximum number of replicas expected to be scheduled in this subset; it must be an integer >= 0. If left empty, the number of replicas in the subset is not limited.
> Percentage values are not supported in the current version.
- `requiredNodeSelectorTerm`: a hard constraint that forces Pods to match a zone.
- `preferredNodeSelectorTerms`: a soft constraint that prefers to schedule Pods to a zone.
**Note**: `requiredNodeSelectorTerm` corresponds to `requiredDuringSchedulingIgnoredDuringExecution` of Kubernetes nodeAffinity.
@@ -148,10 +148,10 @@ WorkloadSpread provides two scheduling strategies; the default is `Fixed`:
    rescheduleCriticalSeconds: 30
```
- Fixed:
  The workload is spread strictly according to the `subsets` definition.
- Adaptive:
  **Reschedule**: Kruise checks the Pods that failed to be scheduled in a `subset`; if the failure lasts longer than the user-defined duration, the Pods are rescheduled to another available `subset`.
@@ -182,7 +182,7 @@ The workload managed by WorkloadSpread scales up and down in the order defined in `subsets`:
### Scale out
- Pods are scheduled in the order of the subsets defined in `spec.subsets`; once the number of active Pods in the current `subset` reaches `maxReplicas`, scheduling moves on to the next `subset`.
### Scale in
- When the number of active replicas in a `subset` is greater than the defined `maxReplicas`, the extra Pods are removed first.
- Following the order of the subsets defined in `spec.subsets`, Pods in a later `subset` are deleted before those in an earlier one.

View File

@@ -48,16 +48,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:

View File

@@ -43,16 +43,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -99,7 +99,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -145,10 +145,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -179,8 +179,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.
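The `patch` sub-field described above can be sketched as follows. This is a hedged example: the label key, container name, and env values are illustrative, not taken from the diff.

```yaml
subsets:
- name: subset-a
  patch:
    metadata:
      labels:
        deploy/zone: zone-a        # extra label stamped on Pods of this subset
    spec:
      containers:
      - name: main                 # assumed container name; entries merge by name
        env:
        - name: SUBSET_NAME        # illustrative env var injected per subset
          value: subset-a
```

The patch is applied to each Pod created in the subset, so the container entry here is matched against the Pod's container of the same `name`.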

View File

@@ -43,16 +43,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -99,7 +99,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -145,10 +145,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -179,8 +179,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.

View File

@@ -43,16 +43,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -99,7 +99,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -145,10 +145,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -179,8 +179,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.

View File

@@ -43,16 +43,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -99,7 +99,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -145,10 +145,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -179,8 +179,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.

View File

@@ -47,16 +47,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -103,7 +103,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -149,10 +149,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -183,8 +183,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.

View File

@@ -47,16 +47,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels:
@@ -103,7 +103,7 @@ tolerations:
    effect: "NoSchedule"
```
- `patch`: customize the Pod configuration of a `subset`, such as Annotations, Labels, and Env.
Example:
@@ -149,10 +149,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
    rescheduleCriticalSeconds: 30
```
- Fixed:
  Workload is strictly spread according to the definition of the subset.
- Adaptive:
  **Reschedule**: Kruise will check the unschedulable Pods of a subset. If they exceed the defined duration, the failed Pods will be rescheduled to another `subset`.
@@ -183,8 +183,8 @@ The workload managed by WorkloadSpread will scale according to the defined order:
### Scale out
- The Pods are scheduled in the subset order defined in `spec.subsets`. They will be scheduled to the next `subset` once the replica number reaches the `maxReplicas` of the current `subset`.
### Scale in
- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with high priority.

View File

@@ -49,16 +49,16 @@ spec:
         operator: In
         values:
         - zone-a
-    preferredNodeSelectorTerms:
-    - weight: 1
-      preference:
-        matchExpressions:
-        - key: another-node-label-key
-          operator: In
-          values:
-          - another-node-label-value
+    preferredNodeSelectorTerms:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: another-node-label-key
+          operator: In
+          values:
+          - another-node-label-value
     maxReplicas: 3
-    tolertions: []
+    tolerations: [ ]
     patch:
       metadata:
         labels: