Merge pull request #32393 from nate-double-u/merged-main-dev-1.24

Merged main into dev 1.24
This commit is contained in:
Kubernetes Prow Robot 2022-03-22 20:11:59 -07:00 committed by GitHub
commit 6c43d0942e
42 changed files with 1549 additions and 1064 deletions

View File

@ -42,7 +42,7 @@ that lets you store configuration for other objects to use. Unlike most
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` field are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain UTF-8 strings while the `binaryData` field is designed to
contain binary data as base64-encoded strings.
The name of a ConfigMap must be a valid
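For reference, a minimal ConfigMap sketch using both fields (the object name, keys, and values here are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config            # hypothetical name
data:
  player_initial_lives: "3"    # plain UTF-8 string values
binaryData:
  icon.png: iVBORw0KGgo=       # binary payload, base64-encoded
```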

View File

@ -344,7 +344,7 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.
Here are some examples of device plugin implementations:
* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for Intel GPU, FPGA and QuickAssist devices
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for hardware-assisted virtualization
* The [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin)
* Requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) 2.0, which allows you to run GPU-enabled Docker containers.

View File

@ -316,7 +316,7 @@ spec:
parameters:
# The parameters for this IngressClass are specified in an
# IngressParameter (API group k8s.example.com) named "external-config",
# that's in the "external-configuration" configuration namespace.
# that's in the "external-configuration" namespace.
scope: Namespace
apiGroup: k8s.example.com
kind: IngressParameter
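For context, a complete manifest along these lines might look like the following sketch (the IngressClass name and controller value are illustrative):
```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb            # hypothetical name
spec:
  controller: example.com/ingress-controller   # illustrative controller name
  parameters:
    # Namespace-scoped parameters: the referenced IngressParameter lives in
    # the "external-configuration" namespace.
    scope: Namespace
    apiGroup: k8s.example.com
    kind: IngressParameter
    namespace: external-configuration
    name: external-config
```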

View File

@ -136,7 +136,7 @@ completion or failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, an exit code, and the start and
finish time for that container's period of execution.
If a container has a `preStop` hook configured, that runs before the container enters
If a container has a `preStop` hook configured, this hook runs before the container enters
the `Terminated` state.
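As an illustrative sketch, a container that declares a `preStop` hook might look like this (the Pod name, image, and command are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
    lifecycle:
      preStop:
        exec:
          # Runs before the container transitions to Terminated.
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```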
## Container restart policy {#restart-policy}

View File

@ -238,7 +238,7 @@ kubectl rollout status -w deployment/frontend # Watch rolling
kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment
cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into std
cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into stdin
# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json

View File

@ -184,6 +184,16 @@ Used on: PersistentVolumeClaim
This annotation has been deprecated.
### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options}
Example: `volume.beta.kubernetes.io/mount-options: "ro,soft"`
Used on: PersistentVolume
A Kubernetes administrator can specify additional [mount options](/docs/concepts/storage/persistent-volumes/#mount-options) for when a PersistentVolume is mounted on a node.
This annotation has been deprecated.
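A minimal sketch of a PersistentVolume carrying this (deprecated) annotation, with illustrative NFS details:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs                                      # hypothetical name
  annotations:
    volume.beta.kubernetes.io/mount-options: "ro,soft"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  nfs:                                              # illustrative backing store
    server: nfs.example.com
    path: /exports/data
```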
### volume.kubernetes.io/storage-provisioner
Used on: PersistentVolumeClaim
@ -475,7 +485,7 @@ The [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1
When you [specify the security context for a Pod](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod),
the settings you specify apply to all containers in that Pod.
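For illustration, a Pod-level security context that every container in the Pod inherits might look like this sketch (names and values are examples only):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000             # applies to all containers in the Pod
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sh", "-c", "sleep 1h"]
```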
### container.seccomp.security.alpha.kubernetes.io/[NAME] {#container-seccomp-security-alpha-kubernetes-io}
### container.seccomp.security.alpha.kubernetes.io/[NAME] (deprecated) {#container-seccomp-security-alpha-kubernetes-io}
This annotation has been deprecated since Kubernetes v1.19 and will become non-functional in v1.25.
The tutorial [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/) takes

View File

@ -99,10 +99,10 @@ and a memory limit of 200 MiB.
```yaml
...
resources:
limits:
memory: 200Mi
requests:
memory: 100Mi
limits:
memory: 200Mi
...
```

View File

@ -83,7 +83,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
```bash
chmod +x kubectl
mkdir -p ~/.local/bin/kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then append (or prepend) ~/.local/bin to $PATH
```

View File

@ -8,9 +8,9 @@ spec:
- name: memory-demo-3-ctr
image: polinux/stress
resources:
limits:
memory: "1000Gi"
requests:
memory: "1000Gi"
limits:
memory: "1000Gi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

View File

@ -8,9 +8,9 @@ spec:
- name: memory-demo-ctr
image: polinux/stress
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
limits:
memory: "200Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

View File

@ -21,7 +21,6 @@ content_type: concept
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. It supports both routing and overlay/encapsulation modes and can work on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a plugin that enables Kubernetes to seamlessly connect to a CNI plugin of your choice, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. The Contiv project is fully [open source](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open-source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack, and Mesos, and provide isolation modes for virtual machines, containers/Pods, and bare-metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes Pod.

View File

@ -0,0 +1,160 @@
---
title: Pod Overhead
content_type: concept
weight: 30
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
When you run a Pod on a Node, the Pod itself consumes a large amount of system resources. These resources are additional to the resources needed to run the container(s) inside the Pod. _Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure on top of the container requests and limits.
<!-- body -->
In Kubernetes, the Pod's overhead is set at [admission](/ja/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) time according to the overhead associated with the Pod's [RuntimeClass](/docs/concepts/containers/runtime-class/).
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing the Pod cgroup, and when ranking Pods for eviction.
## Enabling Pod Overhead {#set-up}
You need to make sure that the `PodOverhead` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) is enabled across your cluster (it is on by default as of 1.18), and that a `RuntimeClass` is used which defines the `overhead` field.
## Usage example
To use the Pod Overhead feature, you need a RuntimeClass that defines the `overhead` field. As an example, you could define the following RuntimeClass for a virtualized container runtime that uses around 120MiB per Pod for the virtual machine and the guest OS:
```yaml
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-fc
handler: kata-fc
overhead:
  podFixed:
    memory: "120Mi"
    cpu: "250m"
```
Workloads that are created specifying the `kata-fc` RuntimeClass handler will take the memory and CPU overheads into account in resource quota calculations, Node scheduling, and Pod cgroup sizing.
Consider running the following example workload, test-pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  runtimeClassName: kata-fc
  containers:
  - name: busybox-ctr
    image: busybox
    stdin: true
    tty: true
    resources:
      limits:
        cpu: 500m
        memory: 100Mi
  - name: nginx-ctr
    image: nginx
    resources:
      limits:
        cpu: 1500m
        memory: 100Mi
```
At admission time, the RuntimeClass [admission controller](/docs/reference/access-authn-authz/admission-controllers/) updates the workload's PodSpec to include the `overhead` described in the RuntimeClass. If the PodSpec already has this field defined, the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod to include the `overhead`.
After the RuntimeClass admission controller has run, you can check the updated PodSpec:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
```
The output is:
```
map[cpu:250m memory:120Mi]
```
If a ResourceQuota is defined, the sum of container requests as well as the `overhead` field are counted.
When the kube-scheduler is deciding which Node should run a new Pod, the scheduler considers that Pod's `overhead` as well as the sum of container requests for that Pod. For this example, the scheduler adds the requests and the overhead, then looks for a Node that has 2.25 CPU and 320 MiB of memory available.
Once a Pod is scheduled to a Node, the kubelet on that Node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}} for the Pod. It is within this Pod cgroup that the underlying container runtime will create containers.
If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined), the kubelet will set an upper limit for the Pod cgroup associated with that resource (cpu.cfs_quota_us for CPU and memory.limit_in_bytes for memory). This upper limit is based on the sum of the container limits plus the `overhead` defined in the PodSpec.
For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container requests plus the `overhead` defined in the PodSpec.
Looking at our example, verify the container requests for the workload:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
```
The total container requests are 2000m CPU and 200MiB of memory:
```
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
```
Check this against what is observed by the Node:
```bash
kubectl describe node | grep test-pod -B2
```
The output shows that 2250m CPU and 320MiB of memory are requested, which includes the Pod overhead:
```
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
```
## Verify Pod cgroup limits
Check the Pod's memory cgroup on the Node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) is used on the Node, which provides a CLI for CRI-compatible container runtimes. This is an advanced example to show Pod Overhead behavior, and it is not expected that users should need to check cgroups directly on the Node.
First, on the particular Node, determine the Pod identifier:
```bash
# Run this on the Node where the Pod is scheduled
POD_ID="$(sudo crictl pods --name test-pod -q)"
```
From this, you can determine the cgroup path for the Pod:
```bash
# Run this on the Node where the Pod is scheduled
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
```
The resulting cgroup path includes the Pod's `pause` container. The Pod-level cgroup is one directory above.
```
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
```
In this specific case, the Pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod-level cgroup setting for memory:
```bash
# Run this on the Node where the Pod is scheduled.
# Also, change the name of the cgroup to match the cgroup allocated for your Pod.
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
```
This is 320 MiB, as expected:
```
335544320
```
### Observability
To help identify when Pod Overhead is being utilized and to observe the stability of workloads running with a defined overhead, [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) provides a `kube_pod_overhead` metric. This functionality is not available in the 1.9 release of kube-state-metrics, but is expected in a following release; users will need to build kube-state-metrics from source in the meantime.
## {{% heading "whatsnext" %}}
* [RuntimeClass](/ja/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)

View File

@ -0,0 +1,174 @@
---
title: Scheduling Framework
content_type: concept
weight: 90
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
It adds a new set of "plugin" APIs to the existing scheduler; plugins are compiled into the scheduler. These APIs allow most scheduling features to be implemented as plugins, while keeping the scheduling "core" lightweight and maintainable. Refer to the [design proposal of the scheduling framework][kep] for more technical information on the design of the framework.
[kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md
<!-- body -->
# Framework workflow
The scheduling framework defines a few extension points. Scheduler plugins register to be invoked at one or more extension points. Some of these plugins can change the scheduling decisions, while some are informational only.
Each attempt to schedule one Pod is split into two phases, the **Scheduling Cycle** and the **Binding Cycle**.
## Scheduling Cycle & Binding Cycle
The scheduling cycle selects a Node for the Pod, and the binding cycle applies that decision to the cluster. Together, a scheduling cycle and binding cycle are referred to as a "scheduling context".
Scheduling cycles are run serially for Pods, one at a time, while binding cycles may run concurrently.
A scheduling or binding cycle can be aborted if the Pod is determined to be unschedulable or if there is an internal error. The Pod will be returned to the queue and retried.
## Extension points
The following picture shows the scheduling context of a Pod and the extension points that the scheduling framework exposes. In this picture, "Filter" is equivalent to "Predicate" for filtering, and "Scoring" is equivalent to "Priorities" for scoring.
One plugin may register at multiple extension points to perform more complex or stateful tasks.
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="scheduling framework extension points" class="diagram-large">}}
### QueueSort {#queue-sort}
These plugins are used to sort Pods in the scheduling queue. A queue sort plugin essentially provides a `Less(Pod1, Pod2)` function. Only one queue sort plugin may be enabled at a time.
### PreFilter {#pre-filter}
These plugins are used to pre-process information about the Pod, or to check certain conditions that the cluster or the Pod must meet. If a PreFilter plugin returns an error, the scheduling cycle is aborted.
### Filter
Filter plugins are used to filter out Nodes that cannot run the Pod. For each Node, the scheduler will call the Filter plugins in their configured order. If any Filter plugin marks the Node as infeasible, the remaining plugins will not be called for that Node. Nodes may be evaluated concurrently.
### PostFilter {#post-filter}
These plugins are called only when no feasible Nodes were found for the Pod in the Filter phase. The plugins are called in their configured order. If any PostFilter plugin marks the Node as _Schedulable_, the remaining plugins will not be called. A typical PostFilter implementation is preemption, which tries to make the Pod schedulable by preempting other Pods.
### PreScore {#pre-score}
These plugins are used to perform "pre-scoring" work, which generates a sharable state for the Score plugins to use. If a PreScore plugin returns an error, the scheduling cycle is aborted.
### Score {#scoring}
These plugins are used to rank Nodes that have passed the filtering phase. The scheduler will call each scoring plugin for each Node. There will be a well defined range of integers representing the minimum and maximum scores. After the [NormalizeScore](#normalize-scoring) phase, the scheduler will combine Node scores from all plugins according to the configured plugin weights.
### NormalizeScore {#normalize-scoring}
These plugins are used to modify scores before the scheduler computes a final ranking of Nodes. A plugin that registers for this extension point will be called with the [Score](#scoring) results from the same plugin. This is called once per plugin per scheduling cycle.
For example, suppose a plugin `BlinkingLightScorer` ranks Nodes based on how many blinking lights they have:
```go
func ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {
    return getBlinkingLightCount(n)
}
```
However, the maximum count of blinking lights may be small compared to `NodeScoreMax`. To fix this, `BlinkingLightScorer` should also register for this extension point:
```go
func NormalizeScores(scores map[string]int) {
    highest := 0
    for _, score := range scores {
        highest = max(highest, score)
    }
    for node, score := range scores {
        scores[node] = score*NodeScoreMax/highest
    }
}
```
If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.
{{< note >}}
Plugins wishing to perform "pre-reserve" work should use the NormalizeScore extension point.
{{< /note >}}
### Reserve {#reserve}
A plugin that implements the Reserve interface has two methods, namely `Reserve`
and `Unreserve`, that back two informational scheduling phases called Reserve and Unreserve, respectively.
Plugins which maintain runtime state (aka "stateful plugins") should use these phases to be notified by the scheduler when resources on a Node are being reserved or unreserved for a given Pod.
The Reserve phase happens before the scheduler actually binds a Pod to its designated Node. It exists to prevent race conditions while the scheduler waits for the bind to succeed.
The `Reserve` method of each Reserve plugin may succeed or fail; if one `Reserve` method call fails, subsequent plugins are not executed and the Reserve phase is considered to have failed. If the `Reserve` method of all plugins succeeds, the Reserve phase is considered to be successful and the rest of the scheduling cycle and the binding cycle are executed.
The Unreserve phase is triggered if the Reserve phase or a later phase fails. When this happens, the `Unreserve` method of **all** Reserve plugins will be executed in the reverse order of the `Reserve` method calls. This phase exists to clean up the state associated with the reserved Pod.
{{< caution >}}
The implementation of the `Unreserve` method should be idempotent and may not fail.
{{< /caution >}}
<!-- ### Permit -->
### Permit
_Permit_ plugins are invoked at the end of the scheduling cycle for each Pod, to prevent or delay the binding to the candidate Node. A permit plugin can do one of three things:
1. **approve** \
   Once all Permit plugins approve a Pod, it is sent for binding.
1. **deny** \
   If any Permit plugin denies a Pod, it is returned to the scheduling queue.
   This will trigger the Unreserve phase in [Reserve plugins](#reserve).
1. **wait** (with a timeout) \
   If a Permit plugin returns "wait", then the Pod is kept in an internal "waiting" Pods list, and the binding cycle of this Pod starts but directly blocks until it gets approved. If a timeout occurs, **wait** becomes **deny** and the Pod is returned to the scheduling queue, triggering the Unreserve phase in [Reserve plugins](#reserve).
{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them (see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the Permit plugins to approve binding of reserved Pods that are in the "waiting" state. Once a Pod is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
### PreBind {#pre-bind}
These plugins are used to perform any work required before a Pod is bound. For example, a pre-bind plugin may provision a network volume and mount it on the target Node before allowing the Pod to run there.
If any PreBind plugin returns an error, the Pod is [rejected](#reserve) and returned to the scheduling queue.
### Bind
These plugins are used to bind a Pod to a Node. Bind plugins will not be called until all PreBind plugins have completed. Each Bind plugin is called in the configured order. A Bind plugin may choose whether or not to handle the given Pod. If a Bind plugin chooses to handle a Pod, **the remaining Bind plugins are skipped.**
### PostBind {#post-bind}
This is an informational extension point. Post-bind plugins are called after a Pod is successfully bound. This is the end of a binding cycle, and can be used to clean up associated resources.
## Plugin API
There are two steps to the plugin API. First, plugins must register and get configured, then they use the extension point interfaces. Extension point interfaces have the following form:
```go
type Plugin interface {
    Name() string
}

type QueueSortPlugin interface {
    Plugin
    Less(*v1.pod, *v1.pod) bool
}

type PreFilterPlugin interface {
    Plugin
    PreFilter(context.Context, *framework.CycleState, *v1.pod) error
}

// ...
```
## Plugin configuration
You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling [plugins](/docs/reference/scheduling/config/#scheduling-plugins) are in use and enabled by default.
In addition to the default plugins, you can also implement your own scheduling plugins and get them configured along with the default plugins. You can visit [scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins) for more details.
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as a scheduler profile and then define multiple profiles to fit various kinds of workloads. See [multiple profiles](/docs/reference/scheduling/config/#multiple-profiles) for more details.
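For orientation, a minimal sketch of such a configuration (the second profile name is illustrative, and the API version assumes the v1beta2 KubeSchedulerConfiguration that was current around this release):
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler   # illustrative second profile
    plugins:
      preScore:
        disabled:
          - name: '*'                     # turn off all PreScore plugins
      score:
        disabled:
          - name: '*'                     # turn off all Score plugins
```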

View File

@ -62,15 +62,15 @@ If you don't want to copy the CA private key to your cluster, you can generate all
Required certificates:
| Default CN | Parent CA | Organization | kind | hosts (SAN) |
|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
| kube-etcd | etcd-ca | | server, client | `localhost`, `127.0.0.1` |
| Default CN | Parent CA | Organization | kind | hosts (SAN) |
|-------------------------------|---------------------------|----------------|----------------------------------------|-----------------------------------------------------|
| kube-etcd | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
[1]: any other IP or DNS name you contact your cluster on (as used with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/): the load balancer IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)

View File

@ -49,13 +49,15 @@ spec:
livenessProbe:
httpGet:
path: /healthz
port: 10251
port: 10259
scheme: HTTPS
initialDelaySeconds: 15
name: kube-second-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
port: 10259
scheme: HTTPS
resources:
requests:
cpu: '0.1'

View File

@ -1131,7 +1131,7 @@ In API version `apps/v1`, if `.spec.selector` and `.metadata.labels` are not set
`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number of Pods that can be unavailable during the update process.
The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%).
The absolute number is calculated from the percentage by rounding.
The absolute number is calculated from the percentage by rounding down.
The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
For example, when this value is set to 30%, the old ReplicaSet is immediately scaled down when the rolling update starts
@ -1144,7 +1144,7 @@ In API version `apps/v1`, if `.spec.selector` and `.metadata.labels` are not set
`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods.
The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%).
The value cannot be 0 if `MaxUnavailable` is 0.
The absolute number is calculated from the percentage by rounding. The default value is 25%.
The absolute number is calculated from the percentage by rounding up. The default value is 25%.
For example, when this value is set to 30%, the new ReplicaSet is scaled up immediately when the rolling update starts, so that
the total number of old and new Pods does not exceed 130% of the desired Pods.
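As an illustrative sketch, these fields sit under the Deployment's update strategy like this (the Deployment name and image are placeholders):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend               # hypothetical name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%      # at most 30% of desired Pods may be unavailable
      maxSurge: 30%            # at most 30% extra Pods may be created
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: app
        image: nginx           # placeholder image
```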

View File

@ -0,0 +1,58 @@
---
title: Limit Ranges
content_type: concept
weight: 10
---
<!-- overview -->
By default, containers run with unbounded [compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis. Within a namespace, a Pod or container can consume as much CPU and memory as allowed by the namespace's resource quota. There is a concern that one Pod or container could monopolize all available resources; because of that, there is the concept of a _Limit Range_, a policy used to constrain resource allocations (to Pods or containers) in a namespace.
<!-- body -->
A _LimitRange_ provides constraints that can:
- Enforce minimum and maximum compute resource usage per Pod or container in a namespace.
- Enforce minimum and maximum storage requests per _PersistentVolumeClaim_ in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default request/limit for compute resources in a namespace and have them automatically injected into containers at runtime.
## Enabling LimitRange
_LimitRange_ support has been enabled by default since Kubernetes 1.10.
A _LimitRange_ is enforced in a particular namespace when there is a _LimitRange_ object in that namespace.
The name of a _LimitRange_ object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
### Overview of Limit Range
- The administrator creates a _LimitRange_ in a namespace.
- Users create resources such as Pods, containers, and _PersistentVolumeClaims_ in the namespace.
- The `LimitRanger` admission controller enforces defaults and limits for all Pods and containers that do not set compute resource requirements, and tracks usage to ensure it does not exceed the resource minimum, maximum, and ratio defined in any _LimitRange_ present in the namespace.
- If you are creating or updating a resource (Pod, Container, _PersistentVolumeClaim_) that violates a _LimitRange_ constraint, the request to the API server will fail with an HTTP status code `403 FORBIDDEN` and a message explaining the constraint that was violated.
- If a _LimitRange_ is activated in a namespace for compute resources such as `cpu` and `memory`, users must specify requests or limits for those values. Otherwise, the system may reject Pod creation.
- _LimitRange_ validations occur only at the Pod admission stage, not on running Pods.
Some examples of policies that can be created using limit ranges are:
- In a 2-node cluster with a capacity of 8 GiB of RAM and 16 cores, constrain Pods in a namespace to request 100m of CPU with a max limit of 500m for CPU, and to request 200Mi for memory with a max limit of 600Mi for memory.
- Define the default CPU limit and request to 150m and the default memory request to 300Mi for containers started with no CPU and memory requests in their specs (see the sketch below this section).
If the total limits in the namespace are less than the sum of the limits of the Pods/Containers, there may be contention for resources. In that case, the containers or Pods will not be created.
Neither contention nor changes to a _LimitRange_ will affect already created resources.
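A minimal sketch of a LimitRange implementing the second example policy above (default CPU limit/request of 150m and default memory request of 300Mi); the object name is illustrative:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources      # hypothetical name
spec:
  limits:
  - type: Container
    default:                   # default limits injected when a container sets none
      cpu: 150m
    defaultRequest:            # default requests injected when a container sets none
      cpu: 150m
      memory: 300Mi
```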
## {{% heading "whatsnext" %}}
See the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
For examples on using limits, read:
- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- [how to configure minimum and maximum memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- [how to configure default CPU requests and limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- [how to configure default memory requests and limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- [how to configure minimum and maximum storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).

View File

@ -0,0 +1,649 @@
---
title: Resource Quotas
content_type: concept
weight: 20
---
<!-- overview -->
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
<!-- body -->
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
aggregate resource consumption per namespace. It can limit the quantity of objects that can
be created in a namespace by type, as well as the total amount of compute resources that may
be consumed by resources in that namespace.
Resource quotas work like this:
- Different teams work in different namespaces. Currently this is voluntary, but support for making this mandatory via ACLs is planned.
- The administrator creates one `ResourceQuota` for each namespace.
- Users create resources (pods, services, etc.) in the namespace, and the quota system tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`.
- If creating or updating a resource violates a quota constraint, the request will fail with HTTP status code `403 FORBIDDEN` and a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: use the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
for an example of how to avoid this problem.
The name of a `ResourceQuota` object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM and 16 cores, let team A use 20 GiB and 10 cores, let B use 10GiB and 4 cores, and hold 2GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces, there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota
Resource quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} `--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
A resource quota is enforced in a particular namespace when there is a `ResourceQuota` in that namespace.
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace.
The following resource types are supported:
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `hugepages-<size>` | Across all pods in a non-terminal state, the number of huge page requests of the specified size cannot exceed this value. |
| `cpu` | Same as `requests.cpu` |
| `memory` | Same as `requests.memory` |
### Resource Quota For Extended Resources
In addition to the resources mentioned above, in release 1.10, quota support for [extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) was added.
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests` and `limits` for the same extended resource in a quota. So for extended resources, only quota items with the `requests.` prefix are allowed for now.
Take the GPU resource as an example: if the resource name is `nvidia.com/gpu` and you want to limit the total number of GPUs requested in a namespace to 4, you can define a quota as follows:
* `requests.nvidia.com/gpu: 4`
See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more information.
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on the associated storage class.
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with `<storage-class-name>`, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
For example, if an operator wants to quota storage with the `gold` storage class separately from the `bronze` storage class, the operator can define a quota as follows (a sketch of the resulting object follows the list):
* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
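For illustration, a minimal sketch of a ResourceQuota carrying those two items (the object name is hypothetical):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-class-quota            # hypothetical name
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
```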
In release 1.8, quota support for local ephemeral storage was added as an alpha feature:
| Resource Name | Description |
| ------------------------------- |----------------------------------------------------------- |
| `requests.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage requests cannot exceed this value. |
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
{{< note >}}
When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas. Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
{{< /note >}}
## Object Count Quota
You can set quota for the total number of certain resources of all standard, namespaced resource types using the following syntax:
* `count/<resource>.<group>` for resources from non-core groups
* `count/<resource>` for resources from the core group
Here is an example set of resources users may want to put under object count quota:
* `count/persistentvolumeclaims`
* `count/services`
* `count/secrets`
* `count/configmaps`
* `count/replicationcontrollers`
* `count/deployments.apps`
* `count/replicasets.apps`
* `count/statefulsets.apps`
* `count/jobs.batch`
* `count/cronjobs.batch`
The same syntax can be used for custom resources. For example, to create a quota on a `widgets` custom resource in the `example.com` API group, use `count/widgets.example.com`. A sketch of such a quota is shown below.
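For illustration only (the `widgets` resource and the quota name are hypothetical), that quota could be written as:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: widget-count                  # hypothetical name
spec:
  hard:
    count/widgets.example.com: "10"   # caps the number of widgets.example.com objects
```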
When using the `count/*` resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources. For example, you may want to limit the number of Secrets in a server given their large size. Too many Secrets in a cluster can
actually prevent servers and controllers from starting. You can set a quota for projects to protect against a misconfigured `CronJob`. `CronJobs` that create too many `Jobs` in a namespace can lead to a denial of service.
It is also possible to do generic object count quota on a limited set of resources.
The following types are supported:
| Resource Name | Description |
| ------------------------------- | ------------------------------------------------- |
| `configmaps` | The total number of `ConfigMaps` that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `.status.phase in (Failed, Succeeded)` is true. |
| `replicationcontrollers` | The total number of `ReplicationControllers` that can exist in the namespace. |
| `resourcequotas` | The total number of `ResourceQuotas` that can exist in the namespace. |
| `services` | The total number of Services that can exist in the namespace. |
| `services.loadbalancers` | The total number of Services of type `LoadBalancer` that can exist in the namespace. |
| `services.nodeports` | The total number of Services of type `NodePort` that can exist in the namespace. |
| `secrets` | The total number of Secrets that can exist in the namespace. |
For example, the `pods` quota counts and enforces a maximum on the number of non-terminal `pods` created in a single namespace. You might want to set a `pods` quota on a namespace to avoid the case where a user creates many small pods and exhausts the cluster's supply of Pod IPs.
## Quota Scopes
Each quota can have an associated set of `scopes`. A quota will only measure usage for a resource if it matches
the intersection of enumerated scopes.
When a scope is added to the quota, it limits the number of resources it supports to those that pertain to the scope. Resources specified on the quota outside of the allowed set results in a validation error.
| Scope | Description |
| ----- | ----------- |
| `Terminating` | Match pods where `.spec.activeDeadlineSeconds >= 0` |
| `NotTerminating` | Match pods where `.spec.activeDeadlineSeconds is nil` |
| `BestEffort` | Match pods that have best effort quality of service. |
| `NotBestEffort` | Match pods that do not have best effort quality of service. |
| `PriorityClass` | Match pods that reference the specified [priority class](/docs/concepts/scheduling-eviction/pod-priority-preemption). |
| `CrossNamespacePodAffinity` | Match pods that have cross-namespace pod [(anti)affinity terms](/docs/concepts/scheduling-eviction/assign-pod-node). |
The `BestEffort` scope restricts a quota to tracking the following resource:
* `pods`
The `Terminating`, `NotTerminating`, `NotBestEffort` and `PriorityClass` scopes restrict a quota to tracking the following resources:
* `pods`
* `cpu`
* `memory`
* `requests.cpu`
* `requests.memory`
* `limits.cpu`
* `limits.memory`
Note that you cannot specify both the `Terminating` and the `NotTerminating` scopes in the same quota, and you cannot specify both the `BestEffort` and `NotBestEffort` scopes in the same quota either.
The `scopeSelector` supports the following values in the `operator` field:
* `In`
* `NotIn`
* `Exists`
* `DoesNotExist`
When using one of the following values as the `scopeName` when defining the `scopeSelector`, the `operator` must be `Exists`.
* `Terminating`
* `NotTerminating`
* `BestEffort`
* `NotBestEffort`
If the `operator` is `In` or `NotIn`, the `values` field must have at least one value. For example:
```yaml
scopeSelector:
  matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values:
        - middle
```
If the `operator` is `Exists` or `DoesNotExist`, the `values` field must *NOT* be specified.
### Resource Quota Per PriorityClass
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
Pods can be created at a specific [priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority). You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.
When quota is scoped for priority class using the `scopeSelector` field, the quota object
is restricted to track only the following resources:
* `pods`
* `cpu`
* `memory`
* `ephemeral-storage`
* `limits.cpu`
* `limits.memory`
* `limits.ephemeral-storage`
* `requests.cpu`
* `requests.memory`
* `requests.ephemeral-storage`
This example creates a quota object and matches it with pods at specific priorities. The example
works as follows:
- Pods in the cluster have one of the three priority classes, "low", "medium", "high".
- One quota object is created for each priority.
Save the following YAML to a file `quota.yml`.
```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1000"
      memory: 200Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator : In
        scopeName: PriorityClass
        values: ["high"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-medium
  spec:
    hard:
      cpu: "10"
      memory: 20Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator : In
        scopeName: PriorityClass
        values: ["medium"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-low
  spec:
    hard:
      cpu: "5"
      memory: 10Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator : In
        scopeName: PriorityClass
        values: ["low"]
```
Apply the YAML using `kubectl create`.
```shell
kubectl create -f ./quota.yml
```
```
resourcequota/pods-high created
resourcequota/pods-medium created
resourcequota/pods-low created
```
Verify that `Used` quota is `0` using `kubectl describe quota`.
```shell
kubectl describe quota
```
```
Name: pods-high
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 0 1k
memory 0 200Gi
pods 0 10
Name: pods-low
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 0 5
memory 0 10Gi
pods 0 10
Name: pods-medium
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 0 10
memory 0 20Gi
pods 0 10
```
Create a pod with priority "high". Save the following YAML to a file `high-priority-pod.yml`.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
spec:
  containers:
  - name: high-priority
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    resources:
      requests:
        memory: "10Gi"
        cpu: "500m"
      limits:
        memory: "10Gi"
        cpu: "500m"
  priorityClassName: high
```
Apply it with `kubectl create`.
```shell
kubectl create -f ./high-priority-pod.yml
```
Verify that the "Used" stats for the "high" priority quota, `pods-high`, have changed and that
the other two quotas are unchanged.
```shell
kubectl describe quota
```
```
Name: pods-high
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 500m 1k
memory 10Gi 200Gi
pods 1 10
Name: pods-low
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 0 5
memory 0 10Gi
pods 0 10
Name: pods-medium
Namespace: default
Resource Used Hard
-------- ---- ----
cpu 0 10
memory 0 20Gi
pods 0 10
```
### Cross-namespace Pod Affinity Quota
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
Operators can use the `CrossNamespacePodAffinity` quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed to set the `namespaces` or `namespaceSelector` fields in pod affinity terms.
Preventing users from using cross-namespace affinity terms might be desired, since a pod
with anti-affinity constraints can block pods from all other namespaces from getting scheduled in a failure domain.
Using this scope, operators can prevent certain namespaces (`foo-ns` in the example below) from having pods that use cross-namespace pod affinity, by creating a resource quota object in that namespace with the `CrossNamespaceAffinity` scope and a hard limit of 0:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: disable-cross-namespace-affinity
  namespace: foo-ns
spec:
  hard:
    pods: "0"
  scopeSelector:
    matchExpressions:
    - scopeName: CrossNamespaceAffinity
```
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespaceAffinity` as a limited resource by setting the kube-apiserver flag --admission-control-config-file
to the path of the following configuration file:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: ResourceQuotaConfiguration
    limitedResources:
    - resource: pods
      matchScopes:
      - scopeName: CrossNamespaceAffinity
```
With the above configuration, pods can use `namespaces` and `namespaceSelector` in pod affinity only if the namespace where they are created has a resource quota object with the `CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
This feature is beta and enabled by default. You can disable it using the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `PodAffinityNamespaceSelector` in both kube-apiserver and kube-scheduler.
## Requests compared to Limits {#requests-vs-limits}
When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.
If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`, then it requires that every incoming container specifies an explicit limit for those resources.
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```shell
kubectl create namespace myspace
```
```shell
cat <<EOF > compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
EOF
```
```shell
kubectl create -f ./compute-resources.yaml --namespace=myspace
```
```shell
cat <<EOF > object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    pods: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
EOF
```
```shell
kubectl create -f ./object-counts.yaml --namespace=myspace
```
```shell
kubectl get quota --namespace=myspace
```
```none
NAME AGE
compute-resources 30s
object-counts 32s
```
```shell
kubectl describe quota compute-resources --namespace=myspace
```
```none
Name: compute-resources
Namespace: myspace
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
requests.cpu 0 1
requests.memory 0 1Gi
requests.nvidia.com/gpu 0 4
```
```shell
kubectl describe quota object-counts --namespace=myspace
```
```none
Name: object-counts
Namespace: myspace
Resource Used Hard
-------- ---- ----
configmaps 0 10
persistentvolumeclaims 0 4
pods 0 4
replicationcontrollers 0 20
secrets 1 10
services 0 10
services.loadbalancers 0 2
```
Kubectl also supports object count quota for all standard namespaced resources
using the syntax `count/<resource>.<group>`:
```shell
kubectl create namespace myspace
```
```shell
kubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace
```
```shell
kubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2
```
```shell
kubectl describe quota --namespace=myspace
```
```
Name: test
Namespace: myspace
Resource Used Hard
-------- ---- ----
count/deployments.apps 1 2
count/pods 2 3
count/replicasets.apps 1 4
count/secrets 1 4
```
## Quota and Cluster Capacity
`ResourceQuotas` are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- Proportionally divide total cluster resources among several teams.
- Allow each tenant to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.
- Detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using `ResourceQuotas` as building blocks, by
writing a "controller" that watches the quota usage and adjusts the quota hard limits of each namespace according to other signals.
Note that resource quota divides up aggregate cluster resources, but it creates no restrictions around nodes: pods from several namespaces may run on the same node.
## Limit Priority Class consumption by default
It may be desired that pods at a particular priority, for example "cluster-services",
should be allowed in a namespace, if and only if a matching quota object exists.
With this mechanism, operators are able to restrict the usage of certain priority classes to a limited number of namespaces, and not every namespace will be able to consume those priority classes by default.
To enforce this, the `kube-apiserver` flag `--admission-control-config-file` should be
used to pass the path to the following configuration file:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: ResourceQuotaConfiguration
    limitedResources:
    - resource: pods
      matchScopes:
      - scopeName: PriorityClass
        operator: In
        values: ["cluster-services"]
```
Then, create a resource quota object in the `kube-system` namespace:
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
```shell
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
```none
resourcequota/pods-cluster-services created
```
In this case, a pod creation will be allowed if:
1. The pod's `priorityClassName` is not specified.
1. The pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. The pod's `priorityClassName` is set to `cluster-services`, it is to be created in the `kube-system` namespace, and it has passed the resource quota check.
A pod creation request is rejected if its `priorityClassName` is set to `cluster-services` and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
- See the [ResourceQuota design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read the [quota support for priority class design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-cluster-services
spec:
  scopeSelector:
    matchExpressions:
      - operator : In
        scopeName: PriorityClass
        values: ["cluster-services"]

View File

@ -21,7 +21,6 @@ content_type: concept
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. Both routing and overlay/encapsulation modes are supported, and it can work on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. The Contiv project is fully [open source](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open-source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack, and Mesos, and provide isolation modes for virtual machines, containers/pods, and bare-metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.

View File

@ -17,7 +17,7 @@ card:
<!-- body -->
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. The platform has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced Kubernetes in 2014. Kubernetes builds on [Google's decade of experience of work with large-scale workloads](https://research.google/pubs/pub43438), combined with best-of-breed ideas and practices from the community.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced Kubernetes in 2014. Kubernetes builds on [Google's decade of experience working with large-scale workloads](https://research.google/pubs/pub43438), combined with best-of-breed ideas and practices from the community.
## History
Let's take a look back to see why Kubernetes is so useful.

View File

@ -236,10 +236,10 @@ persistentvolumeclaim/my-pvc created
```
<!--
If you're interested in learning more about `kubectl`, go ahead and read [kubectl Overview](/docs/reference/kubectl/overview/).
-->
If you're interested in learning more about `kubectl`, go ahead and read [Command line tool (kubectl)](/docs/reference/kubectl/).
-->
If you're interested in learning more about `kubectl`, go ahead and read
[kubectl Overview](/zh/docs/reference/kubectl/overview/).
[Command line tool (kubectl)](/zh/docs/reference/kubectl/).
<!--
## Using labels effectively

View File

Ingress controllers are not started automatically with a cluster.
Use this page to choose the ingress controller implementation that best fits your cluster.
Kubernetes, as a project, currently supports and maintains the
[AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme)
[GCE](https://git.k8s.io/ingress-gce/README.md)
and [nginx](https://git.k8s.io/ingress-nginx/README.md#readme) ingress controllers.
[AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme),
[GCE](https://git.k8s.io/ingress-gce/README.md),
and [Nginx](https://git.k8s.io/ingress-nginx/README.md#readme) ingress controllers.
<!-- body -->
@ -46,32 +46,37 @@ Kubernetes, as a project, currently supports and maintains the
* [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io)-based ingress
controller.
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller.
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load-balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller.
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
-->
* The [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/)
is an ingress controller that configures the
[Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an
[Envoy](https://www.envoyproxy.io)-based Ingress
controller.
[Envoy](https://www.envoyproxy.io)-based Ingress controller.
* The [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller)
is an ingress controller based on the [Apache APISIX gateway](https://github.com/apache/apisix).
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes)
uses [VMware NSX Advanced Load Balancer](https://avinetworks.com/)
to provide L4-L7 load balancing.
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller.
<!--
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller.
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
-->
* [BFE Ingress 控制器](https://github.com/bfenetworks/ingress-bfe)是一个基于
[BFE](https://www.bfe-networks.net) 的 Ingress 控制器。
* [Citrix Ingress 控制器](https://github.com/citrix/citrix-k8s-ingress-controller#readme)
可以用来与 Citrix Application Delivery Controller 一起使用。
* [Contour](https://projectcontour.io/) 是一个基于 [Envoy](https://www.envoyproxy.io/)
的 Ingress 控制器。
* [EnRoute](https://getenroute.io/) 是一个基于 [Envoy](https://www.envoyproxy.io) API 网关,
可以作为 Ingress 控制器来执行。
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/ingresscontroller.md) 是一个基于 [Easegress](https://megaease.com/easegress/) API 网关,可以作为 Ingress 控制器来执行。
* [EnRoute](https://getenroute.io/) 是一个基于 [Envoy](https://www.envoyproxy.io)
的 API 网关,可以用作 Ingress 控制器。
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md)
是一个基于 [Easegress](https://megaease.com/easegress/) 的 API 网关,可以用作 Ingress 控制器。
<!--
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
@ -88,10 +93,9 @@ Kubernetes 作为一个项目,目前支持和维护
[用于 Kubernetes 的容器 Ingress 服务](https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)
让你能够使用 Ingress 来配置 F5 BIG-IP 虚拟服务器。
* [Gloo](https://gloo.solo.io) 是一个开源的、基于 [Envoy](https://www.envoyproxy.io) 的
Ingress 控制器,能够提供 API 网关功能,
* [HAProxy Ingress](https://haproxy-ingress.github.io/) 针对
[HAProxy](https://www.haproxy.org/#desc)
的 Ingress 控制器。
Ingress 控制器,能够提供 API 网关功能。
* [HAProxy Ingress](https://haproxy-ingress.github.io/) 是一个针对
[HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。
* [用于 Kubernetes 的 HAProxy Ingress 控制器](https://github.com/haproxytech/kubernetes-ingress#readme)
也是一个针对 [HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
@ -101,21 +105,26 @@ Kubernetes 作为一个项目,目前支持和维护
is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
-->
* [用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller#readme)
是一个用来驱动 [Kong Gateway](https://konghq.com/kong/) 的 Ingress 控制器。
* [用于 Kubernetes 的 NGINX Ingress 控制器](https://www.nginx.com/products/nginx-ingress-controller/)
能够与 [NGINX](https://www.nginx.com/resources/glossary/nginx/)
网页服务器(作为代理)一起使用。
* [Pomerium Ingress 控制器](https://www.pomerium.com/docs/k8s/ingress.html)
基于 [Pomerium](https://pomerium.com/),能提供上下文感知的准入策略。
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP
路由器和反向代理可用于服务组装,支持包括 Kubernetes Ingress
这类使用场景,是一个用以构造你自己的定制代理的库。
<!--
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](https://www.haproxy.org/#desc).
-->
* [用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller#readme)
是一个用来驱动 [Kong Gateway](https://konghq.com/kong/) 的 Ingress 控制器。
* [用于 Kubernetes 的 NGINX Ingress 控制器](https://www.nginx.com/products/nginx-ingress-controller/)
能够与 [NGINX](https://www.nginx.com/resources/glossary/nginx/) Web 服务器(作为代理)
一起使用。
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP
路由器和反向代理可用于服务组装,支持包括 Kubernetes Ingress 这类使用场景,
设计用来作为构造你自己的定制代理的库。
* [Traefik Kubernetes Ingress 提供程序](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)
是一个用于 [Traefik](https://traefik.io/traefik/) 代理的 Ingress 控制器。
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator)
@ -130,23 +139,31 @@ Kubernetes 作为一个项目,目前支持和维护
## 使用多个 Ingress 控制器
<!--
You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)
within a cluster. When you create an ingress, you should annotate each ingress with the appropriate
[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
to indicate which ingress controller should be used if more than one exists within your cluster.
You may deploy any number of ingress controllers using [ingress class](/docs/concepts/services-networking/ingress/#ingress-class)
within a cluster. Note the `.metadata.name` of your ingress class resource. When you create an ingress, you need that name to set the `ingressClassName` field on your Ingress object (refer to the [IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec)). `ingressClassName` replaces the older [annotation method](/docs/concepts/services-networking/ingress/#deprecated-annotation).
-->
你可以使用
[Ingress 类](/zh/docs/concepts/services-networking/ingress/#ingress-class)在集群中部署任意数量的
Ingress 控制器。
请注意你的 Ingress 类资源的 `.metadata.name` 字段。
当你创建 Ingress 时,你需要用此字段的值来设置 Ingress 对象的 `ingressClassName` 字段(请参考
[IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec))。
`ingressClassName`
是之前的[注解](/zh/docs/concepts/services-networking/ingress/#deprecated-annotation)做法的替代。
If you do not define a class, your cloud provider may use a default ingress controller.
<!--
If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then Kubernetes [applies](/docs/concepts/services-networking/ingress/#default-ingress-class) the cluster's default IngressClass to the Ingress.
You mark an IngressClass as default by setting the [`ingressclass.kubernetes.io/is-default-class` annotation](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) on that IngressClass, with the string value `"true"`.
Ideally, all ingress controllers should fulfill this specification, but the various ingress
controllers operate slightly differently.
-->
你可以在集群中部署[任意数量的 ingress 控制器](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)。
创建 ingress 时,应该使用适当的
[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
注解每个 Ingress 以表明在集群中如果有多个 Ingress 控制器时,应该使用哪个 Ingress 控制器。
如果不定义 `ingress.class`,云提供商可能使用默认的 Ingress 控制器。
如果你不为 Ingress 指定一个 IngressClass并且你的集群中只有一个 IngressClass 被标记为了集群默认,那么
Kubernetes 会[应用](/zh/docs/concepts/services-networking/ingress/#default-ingress-class)此默认
IngressClass。
你可以通过将
[`ingressclass.kubernetes.io/is-default-class` 注解](/zh/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class)
的值设置为 `"true"` 来将一个 IngressClass 标记为集群默认。
理想情况下,所有 Ingress 控制器都应满足此规范,但各种 Ingress 控制器的操作略有不同。
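As a hedged illustration of the mechanisms described above, the sketch below marks a hypothetical IngressClass as the cluster default with the `ingressclass.kubernetes.io/is-default-class` annotation; an Ingress that omits `ingressClassName` would then fall back to it. The resource name and controller string are assumptions, not taken from this page.

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-default-class                      # hypothetical name
  annotations:
    # Marks this IngressClass as the cluster default.
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  # Hypothetical controller identifier, following the example.com convention
  # used elsewhere on this page.
  controller: example.com/ingress-controller
```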

View File

@ -31,15 +31,14 @@ For clarity, this guide defines the following terms:
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes [networking model](/docs/concepts/cluster-administration/networking/).
* Service: A Kubernetes {{< glossary_tooltip term_id="service" >}} that identifies a set of Pods using {{< glossary_tooltip text="label" term_id="label" >}} selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
-->
* 节点Node: Kubernetes 集群中其中一台工作机器,是集群的一部分。
* 节点Node: Kubernetes 集群中一台工作机器,是集群的一部分。
* 集群Cluster: 一组运行由 Kubernetes 管理的容器化应用程序的节点。
在此示例和在大多数常见的 Kubernetes 部署环境中,集群中的节点都不在公共网络中。
* 边缘路由器Edge router: 在集群中强制执行防火墙策略的路由器router
可以是由云提供商管理的网关,也可以是物理硬件。
* 集群网络Cluster network: 一组逻辑的或物理的连接,根据 Kubernetes
[网络模型](/zh/docs/concepts/cluster-administration/networking/) 在集群内实现通信。
* 服务ServiceKubernetes {{< glossary_tooltip text="服务" term_id="service" >}}使用
{{< glossary_tooltip text="标签" term_id="label" >}}选择算符selectors标识的一组 Pod。
* 边缘路由器Edge Router: 在集群中强制执行防火墙策略的路由器。可以是由云提供商管理的网关,也可以是物理硬件。
* 集群网络Cluster Network: 一组逻辑的或物理的连接,根据 Kubernetes
[网络模型](/zh/docs/concepts/cluster-administration/networking/)在集群内实现通信。
* 服务ServiceKubernetes {{< glossary_tooltip term_id="service" >}}
使用{{< glossary_tooltip text="标签" term_id="label" >}}选择器selectors辨认一组 Pod。
除非另有说明,否则假定服务只具有在集群网络中可路由的虚拟 IP。
<!--
@ -80,20 +79,20 @@ graph LR;
<!--
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
-->
可以将 Ingress 配置为服务提供外部可访问的 URL、负载均衡流量、终止 SSL/TLS以及提供基于名称的虚拟主机等能力
Ingress 可为 Service 提供外部可访问的 URL、负载均衡流量、终止 SSL/TLS以及基于名称的虚拟托管
[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)
通常负责通过负载均衡器来实现 Ingress尽管它也可以配置边缘路由器或其他前端来帮助处理流量。
<!--
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport) or
[Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer).
-->
Ingress 不会公开任意端口或协议。
将 HTTP 和 HTTPS 以外的服务公开到 Internet 时,通常使用
[Service.Type=NodePort](/zh/docs/concepts/services-networking/service/#nodeport)
[Service.Type=NodePort](/zh/docs/concepts/services-networking/service/#type-nodeport)
或 [Service.Type=LoadBalancer](/zh/docs/concepts/services-networking/service/#loadbalancer)
类型的服务
类型的 Service
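For traffic other than HTTP or HTTPS, a Service of type NodePort is one of the alternatives mentioned above; a minimal sketch follows, where all names, labels, and port numbers are illustrative assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-tcp-app               # hypothetical Pod label
  ports:
  - protocol: TCP
    port: 5432                    # port exposed inside the cluster
    targetPort: 5432              # port the Pods listen on
    nodePort: 30432               # optional; must fall within the cluster's NodePort range
```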
<!--
## Prerequisites
@ -102,7 +101,7 @@ You must have an [ingress controller](/docs/concepts/services-networking/ingress
-->
## 环境准备
你必须具有 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。
你必须拥有一个 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。
仅创建 Ingress 资源本身没有任何效果。
<!--
@ -144,29 +143,47 @@ The name of an Ingress object must be a valid
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
Different [Ingress controller](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
your choice of Ingress controller to learn which annotations are supported.
-->
与所有其他 Kubernetes 资源一样Ingress 需要使用 `apiVersion`、`kind` 和 `metadata` 字段。
Ingress 对象的命名必须是合法的 [DNS 子域名名称](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
有关使用配置文件的一般信息,请参见[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)、
与所有其他 Kubernetes 资源一样Ingress 需要指定 `apiVersion`、`kind` 和 `metadata` 字段。
Ingress 对象的命名必须是合法的 [DNS 子域名名称](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
关于如何使用配置文件,请参见[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)、
[配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)、
[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/)。
Ingress 经常使用注解annotations来配置一些选项具体取决于 Ingress 控制器,例如
[重写目标注解](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)。
不同的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)
支持不同的注解。查看文档以供你选择 Ingress 控制器,以了解支持哪些注解。
Ingress 经常使用注解annotations来配置一些选项具体取决于 Ingress
控制器,例如[重写目标注解](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)。
不同的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)支持不同的注解。
查看你所选的 Ingress 控制器的文档,以了解其支持哪些注解。
<!--
The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
has all the information needed to configure a load balancer or proxy server. Most importantly, it
contains a list of rules matched against all incoming requests. Ingress resource only supports rules
for directing HTTP traffic.
for directing HTTP(S) traffic.
-->
Ingress [规约](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
提供了配置负载均衡器或者代理服务器所需的所有信息。
最重要的是,其中包含与所有传入请求匹配的规则列表。
Ingress 资源仅支持用于转发 HTTP 流量的规则。
Ingress 资源仅支持用于转发 HTTP(S) 流量的规则。
<!--
If the `ingressClassName` is omitted, a [default Ingress class](#default-ingress-class)
should be defined.
There are some ingress controllers that work without the definition of a
default `IngressClass`. For example, the Ingress-NGINX controller can be
configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
default `IngressClass` as shown [below](#default-ingress-class).
-->
如果 `ingressClassName` 被省略,那么你应该定义一个[默认 Ingress 类](#default-ingress-class)。
有一些 Ingress 控制器不需要定义默认的 `IngressClass`。比如Ingress-NGINX
控制器可以通过[参数](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class` 来配置。
不过仍然[推荐](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do)
按[下文](#default-ingress-class)所示来设置默认的 `IngressClass`
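A minimal sketch that ties the pieces above together: a controller-specific annotation (here the ingress-nginx rewrite-target form referenced above) and an explicit `ingressClassName`. The resource names are illustrative assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress                          # hypothetical name
  annotations:
    # Controller-specific option; this is the ingress-nginx rewrite annotation.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: example-default-class        # hypothetical IngressClass
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test                           # hypothetical Service
            port:
              number: 80
```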
<!--
### Ingress rules
@ -199,24 +216,35 @@ Each HTTP rule contains the following information:
A `defaultBackend` is often configured in an Ingress controller to service any requests that do not
match a path in the spec.
-->
通常在 Ingress 控制器中会配置 `defaultBackend`(默认后端),以服务于任何不符合规约中 `path`请求。
通常在 Ingress 控制器中会配置 `defaultBackend`(默认后端),以服务于无法与规约中 `path` 匹配的所有请求。
<!--
### DefaultBackend {#default-backend}
An Ingress with no rules sends all traffic to a single default backend. The `defaultBackend` is conventionally a configuration option of the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) and is not specified in your Ingress resources.
-->
### DefaultBackend {#default-backend}
An Ingress with no rules sends all traffic to a single default backend and `.spec.defaultBackend`
is the backend that should handle requests in that case.
The `defaultBackend` is conventionally a configuration option of the
[Ingress controller](/docs/concepts/services-networking/ingress-controllers) and
is not specified in your Ingress resources.
If no `.spec.rules` are specified, `.spec.defaultBackend` must be specified.
If `defaultBackend` is not set, the handling of requests that do not match any of the rules will be up to the
ingress controller (consult the documentation for your ingress controller to find out how it handles this case).
没有 `rules` 的 Ingress 将所有流量发送到同一个默认后端。
`defaultBackend` 通常是 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)
的配置选项,而非在 Ingress 资源中指定。
<!--
If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is
routed to your default backend.
-->
如果 `hosts``paths` 都没有与 Ingress 对象中的 HTTP 请求匹配,则流量将路由到默认后端。
### 默认后端 {#default-backend}
没有设置规则的 Ingress 将所有流量发送到同一个默认后端,而
`.spec.defaultBackend` 则是在这种情况下处理请求的那个默认后端。
`defaultBackend` 通常是
[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)的配置选项,而非在
Ingress 资源中指定。
如果未设置任何的 `.spec.rules`,那么必须指定 `.spec.defaultBackend`
如果未设置 `defaultBackend`,那么如何处理所有与规则不匹配的流量将交由
Ingress 控制器决定(请参考你的 Ingress 控制器的文档以了解它是如何处理那些流量的)。
如果没有 `hosts``paths` 与 Ingress 对象中的 HTTP 请求匹配,则流量将被路由到默认后端。
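A hedged sketch of an Ingress that defines no rules and therefore must set `.spec.defaultBackend`; the Service name is an assumption.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-only       # hypothetical name
spec:
  # With no rules, every request is sent to this backend.
  defaultBackend:
    service:
      name: fallback-service       # hypothetical Service
      port:
        number: 80
```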
<!--
### Resource backends {#resource-backend}
@ -229,9 +257,8 @@ with static assets.
-->
### 资源后端 {#resource-backend}
`Resource` 后端是一个 `ObjectRef`,指向同一名字空间中的另一个
Kubernetes 资源,将其作为 Ingress 对象。`Resource` 与 `Service` 配置是互斥的,在
二者均被设置时会无法通过合法性检查。
`Resource` 后端是一个引用,指向同一命名空间中的另一个 Kubernetes 资源,将其作为 Ingress 对象。
`Resource` 后端与 Service 后端是互斥的,在二者均被设置时会无法通过合法性检查。
`Resource` 后端的一种常见用法是将所有入站数据导向带有静态资产的对象存储后端。
{{< codenew file="service/networking/ingress-resource-backend.yaml" >}}
@ -410,41 +437,140 @@ IngressClass 资源包含额外的配置,其中包括应当实现该类的控
{{< codenew file="service/networking/external-lb.yaml" >}}
<!--
IngressClass resources contain an optional parameters field. This can be used to
reference additional implementation-specific configuration for this class.
The `.spec.parameters` field of an IngressClass lets you reference another
resource that provides configuration related to that IngressClass.
The specific type of parameters to use depends on the ingress controller
that you specify in the `.spec.controller` field of the IngressClass.
-->
IngressClass 资源包含一个可选的 `parameters` 字段,可用于为该类引用额外的、
特定于具体实现的配置。
IngressClass 中的 `.spec.parameters` 字段可用于引用其他资源以提供额外的相关配置。
参数(`parameters`)的具体类型取决于你在 `.spec.controller` 字段中指定的 Ingress 控制器。
<!--
#### Namespace-scoped parameters
-->
#### 名字空间域的参数
### IngressClass scope
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
Depending on your ingress controller, you may be able to use parameters
that you set cluster-wide, or just for one namespace.
-->
### IngressClass 的作用域
取决于你的 Ingress 控制器,你可能可以使用集群范围设置的参数或某个名字空间范围的参数。
{{< tabs name="tabs_ingressclass_parameter_scope" >}}
{{% tab name="集群作用域" %}}
<!--
The default scope for IngressClass parameters is cluster-wide.
If you set the `.spec.parameters` field and don't set
`.spec.parameters.scope`, or if you set `.spec.parameters.scope` to
`Cluster`, then the IngressClass refers to a cluster-scoped resource.
The `kind` (in combination the `apiGroup`) of the parameters
refers to a cluster-scoped API (possibly a custom resource), and
the `name` of the parameters identifies a specific cluster scoped
resource for that API.
For example:
-->
IngressClass 的参数默认是集群范围的。
如果你设置了 `.spec.parameters` 字段且未设置 `.spec.parameters.scope`
字段,或是将 `.spec.parameters.scope` 字段设为了 `Cluster`,那么该
IngressClass 所指代的即是一个集群作用域的资源。
参数的 `kind`(和 `apiGroup` 一起)指向一个集群作用域的
API可能是一个定制资源Custom Resource而它的
`name` 则为此 API 确定了一个具体的集群作用域的资源。
示例:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb-1
spec:
controller: example.com/ingress-controller
parameters:
# 此 IngressClass 的配置定义在一个名为 “external-config-1” 的
# ClusterIngressParameterAPI 组为 k8s.example.net资源中。
# 这项定义告诉 Kubernetes 去寻找一个集群作用域的参数资源。
scope: Cluster
apiGroup: k8s.example.net
kind: ClusterIngressParameter
name: external-config-1
```
{{% /tab %}}
{{% tab name="命名空间作用域" %}}
{{< feature-state for_k8s_version="v1.23" state="stable" >}}
<!--
`Parameters` field has a `scope` and `namespace` field that can be used to
reference a namespace-specific resource for configuration of an Ingress class.
`Scope` field defaults to `Cluster`, meaning, the default is cluster-scoped
resource. Setting `Scope` to `Namespace` and setting the `Namespace` field
will reference a parameters resource in a specific namespace:
If you set the `.spec.parameters` field and set
`.spec.parameters.scope` to `Namespace`, then the IngressClass refers
to a namespaced-scoped resource. You must also set the `namespace`
field within `.spec.parameters` to the namespace that contains
the parameters you want to use.
Namespace-scoped parameters avoid the need for a cluster-scoped CustomResourceDefinition
for a parameters resource. This further avoids RBAC-related resources
that would otherwise be required to grant permissions to cluster-scoped
resources.
The `kind` (in combination the `apiGroup`) of the parameters
refers to a namespaced API (for example: ConfigMap), and
the `name` of the parameters identifies a specific resource
in the namespace you specified in `namespace`.
-->
`parameters` 字段有一个 `scope``namespace` 字段,可用来引用特定
于名字空间的资源,对 Ingress 类进行配置。
`scope` 字段默认为 `Cluster`,表示默认是集群作用域的资源。
`scope` 设置为 `Namespace` 并设置 `namespace` 字段就可以引用某特定
名字空间中的参数资源。
如果你设置了 `.spec.parameters` 字段且将 `.spec.parameters.scope`
字段设为了 `Namespace`,那么该 IngressClass 将会引用一个命名空间作用域的资源。
`.spec.parameters.namespace` 必须和此资源所处的命名空间相同。
有了名字空间域的参数,就不再需要为一个参数资源配置集群范围的 CustomResourceDefinition。
除此之外,之前对访问集群范围的资源进行授权,需要用到 RBAC 相关的资源,现在也不再需要了。
参数的 `kind`(和 `apiGroup`
一起)指向一个命名空间作用域的 API例如ConfigMap而它的
`name` 则确定了一个位于你指定的命名空间中的具体的资源。
{{< codenew file="service/networking/namespaced-params.yaml" >}}
<!--
Namespace-scoped parameters help the cluster operator delegate control over the
configuration (for example: load balancer settings, API gateway definition)
that is used for a workload. If you used a cluster-scoped parameter then either:
- the cluster operator team needs to approve a different team's changes every
time there's a new configuration change being applied.
- the cluster operator must define specific access controls, such as
[RBAC](/docs/reference/access-authn-authz/rbac/) roles and bindings, that let
the application team make changes to the cluster-scoped parameters resource.
-->
命名空间作用域的参数帮助集群操作者将控制细分到用于工作负载的各种配置中比如负载均衡设置、API
网关定义)。如果你使用集群作用域的参数,那么你必须从以下两项中选择一项执行:
- 每次修改配置,集群操作团队需要批准其他团队的修改。
- 集群操作团队定义具体的准入控制,比如 [RBAC](/zh/docs/reference/access-authn-authz/rbac/)
角色与角色绑定,以使得应用程序团队可以修改集群作用域的配置参数资源。
<!--
The IngressClass API itself is always cluster-scoped.
Here is an example of an IngressClass that refers to parameters that are
namespaced:
-->
IngressClass API 本身是集群作用域的。
这里是一个引用命名空间作用域的配置参数的 IngressClass 的示例:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb-2
spec:
controller: example.com/ingress-controller
parameters:
# 此 IngressClass 的配置定义在一个名为 “external-config” 的
# IngressParameterAPI 组为 k8s.example.com资源中,
# 该资源位于 “external-configuration” 命名空间中。
scope: Namespace
apiGroup: k8s.example.com
kind: IngressParameter
namespace: external-configuration
name: external-config
```
{{% /tab %}}
{{< /tabs >}}
<!--
### Deprecated Annotation
@ -456,8 +582,8 @@ never formally defined, but was widely supported by Ingress controllers.
-->
### 废弃的注解 {#deprecated-annotation}
在 Kubernetes 1.18 版本引入 IngressClass 资源和 `ingressClassName` 字段之前,
Ingress 类是通过 Ingress 中的一个 `kubernetes.io/ingress.class` 注解来指定的。
在 Kubernetes 1.18 版本引入 IngressClass 资源和 `ingressClassName` 字段之前,Ingress
类是通过 Ingress 中的一个 `kubernetes.io/ingress.class` 注解来指定的。
这个注解从未被正式定义过,但是得到了 Ingress 控制器的广泛支持。
<!--
@ -468,9 +594,8 @@ Ingress, the field is a reference to an IngressClass resource that contains
additional Ingress configuration, including the name of the Ingress controller.
-->
Ingress 中新的 `ingressClassName` 字段是该注解的替代品,但并非完全等价。
该注解通常用于引用实现该 Ingress 的控制器的名称,
而这个新的字段则是对一个包含额外 Ingress 配置的 IngressClass 资源的引用,
包括 Ingress 控制器的名称。
该注解通常用于引用实现该 Ingress 的控制器的名称,而这个新的字段则是对一个包含额外
Ingress 配置的 IngressClass 资源的引用,包括 Ingress 控制器的名称。
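For comparison, a sketch of the older, deprecated annotation form; the names and the class value are assumptions, and the annotation value itself is controller-specific.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-class-ingress              # hypothetical name
  annotations:
    # Deprecated: superseded by spec.ingressClassName.
    kubernetes.io/ingress.class: "nginx"  # value depends on the controller
spec:
  defaultBackend:
    service:
      name: legacy-service                # hypothetical Service
      port:
        number: 80
```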
<!--
### Default IngressClass {#default-ingress-class}
@ -499,6 +624,21 @@ IngressClasess are marked as default in your cluster.
解决这个问题只需确保集群中最多只能有一个 IngressClass 被标记为默认。
{{< /caution >}}
<!--
There are some ingress controllers that work without the definition of a
default `IngressClass`. For example, the Ingress-NGINX controller can be
configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
default `IngressClass`:
-->
有一些 Ingress 控制器不需要定义默认的 `IngressClass`。比如Ingress-NGINX
控制器可以通过[参数](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class` 来配置。
不过仍然[推荐](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do)
设置默认的 `IngressClass`
{{< codenew file="service/networking/default-ingressclass.yaml" >}}
<!--
## Types of Ingress
@ -682,14 +822,10 @@ Ingress 控制器 IP 地址的任何网络流量,而无需基于名称的虚
<!--
For example, the following Ingress routes traffic
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic
to the IP address without a hostname defined in request (that is, without a request header being
presented) to `service3`.
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic whose request host header doesn't match `first.bar.com` and `second.bar.com` to `service3`.
-->
例如,以下 Ingress 会将针对 `first.bar.com` 的请求流量路由到 `service1`
将针对 `second.bar.com` 的请求流量路由到 `service2`
而针对该 IP 地址的、没有在请求中定义主机名的请求流量会被路由(即,不提供请求标头)
`service3`
例如,以下 Ingress 会将请求 `first.bar.com` 的流量路由到 `service1`,将请求
`second.bar.com` 的流量路由到 `service2`,而请求头中主机名与 `first.bar.com` 和 `second.bar.com` 都不匹配的流量会被路由到 `service3`。
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
@ -710,12 +846,12 @@ and private key to use for TLS. For example:
你可以通过设定包含 TLS 私钥和证书的{{< glossary_tooltip text="Secret" term_id="secret" >}}
来保护 Ingress。
Ingress 只支持单个 TLS 端口 443并假定 TLS 连接终止于 Ingress 节点
(与 Service 及其 Pod 之间的流量都以明文传输)。
如果 Ingress 中的 TLS 配置部分指定了不同的主机,那么它们将根据通过 SNI TLS 扩展指定的主机名
(如果 Ingress 控制器支持 SNI在同一端口上进行复用。
TLS Secret 必须包含名为 `tls.crt``tls.key` 的键名
这些数据包含用于 TLS 的证书和私钥。例如:
Ingress 只支持单个 TLS 端口 443并假定 TLS 连接终止于
Ingress 节点(与 Service 及其 Pod 之间的流量都以明文传输)。
如果 Ingress 中的 TLS 配置部分指定了不同的主机,那么它们将根据通过
SNI TLS 扩展指定的主机名(如果 Ingress 控制器支持 SNI在同一端口上进行复用。
TLS Secret 的数据中必须包含用于 TLS 的以键名 `tls.crt` 保存的证书和以键名 `tls.key` 保存的私钥
例如:
```yaml
apiVersion: v1
@ -724,8 +860,8 @@ metadata:
name: testsecret-tls
namespace: default
data:
tls.crt: base64 编码的 cert
tls.key: base64 编码的 key
tls.crt: base64 编码的证书
tls.key: base64 编码的私钥
type: kubernetes.io/tls
```
@ -747,8 +883,7 @@ certificates would have to be issued for all the possible sub-domains. Therefore
section.
-->
注意,默认规则上无法使用 TLS因为需要为所有可能的子域名发放证书。
因此,`tls` 节区的 `hosts` 的取值需要域 `rules` 节区的 `host`
完全匹配。
因此,`tls` 字段中的 `hosts` 的取值需要与 `rules` 字段中的 `host` 完全匹配。
{{< /note >}}
{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
@ -779,8 +914,8 @@ a Service.
-->
### 负载均衡 {#load-balancing}
Ingress 控制器启动引导时使用一些适用于所有 Ingress 的负载均衡策略设置,
例如负载均衡算法、后端权重方案和其他等。
Ingress 控制器启动引导时使用一些适用于所有 Ingress
的负载均衡策略设置,例如负载均衡算法、后端权重方案等。
更高级的负载均衡概念(例如持久会话、动态权重)尚未通过 Ingress 公开。
你可以通过用于服务的负载均衡器来获取这些功能。
@ -797,10 +932,8 @@ specific documentation to see how they handle health checks (
中存在并行的概念,比如
[就绪检查](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
允许你实现相同的目的。
请检查特定控制器的说明文档(
[nginx](https://git.k8s.io/ingress-nginx/README.md)
[GCE](https://git.k8s.io/ingress-gce/README.md#health-checks)
以了解它们是怎样处理健康检查的。
请检查特定控制器的说明文档([nginx](https://git.k8s.io/ingress-nginx/README.md)、
[GCE](https://git.k8s.io/ingress-gce/README.md#health-checks))以了解它们是怎样处理健康检查的。
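As a hedged example of the readiness-check concept mentioned above, a Pod serving as an Ingress backend might declare a probe like the following; the image, path, and timings are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-with-readiness     # hypothetical name
  labels:
    app: web-backend               # hypothetical label for a matching Service
spec:
  containers:
  - name: web
    image: nginx:1.21              # hypothetical image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /                    # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```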
<!--
## Updating an Ingress
@ -916,8 +1049,7 @@ Please check the documentation of the relevant [Ingress controller](/docs/concep
## 跨可用区失败 {#failing-across-availability-zones}
不同的云厂商使用不同的技术来实现跨故障域的流量分布。详情请查阅相关 Ingress 控制器的文档。
请查看相关 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)
的文档以了解详细信息。
请查看相关 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers)的文档以了解详细信息。
<!--
## Alternatives
@ -938,11 +1070,11 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
## {{% heading "whatsnext" %}}
<!--
* Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1-networking-k8s-io)
* Learn about the [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API
* Learn about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/)
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube/)
-->
* 进一步了解 [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io)
* 进一步了解 [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API
* 进一步了解 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/)
* [使用 NGINX 控制器在 Minikube 上安装 Ingress](/zh/docs/tasks/access-application-cluster/ingress-minikube/)

View File

@ -51,7 +51,9 @@ which are configured in the API.
<!--
Admission controllers may be "validating", "mutating", or both. Mutating
controllers may modify the objects they admit; validating controllers may not.
controllers may modify objects related to the requests they admit; validating controllers may not.
Admission controllers limit requests that create, delete, or modify objects, or that connect to a proxy. They do not limit requests to read objects.
The admission control process proceeds in two phases. In the first phase,
mutating admission controllers are run. In the second phase, validating
@ -62,7 +64,9 @@ If any of the controllers in either phase reject the request, the entire
request is rejected immediately and an error is returned to the end-user.
-->
准入控制器可以执行 “验证Validating” 和/或 “变更Mutating” 操作。
变更mutating控制器可以修改被其接受的对象验证validating控制器则不行。
变更mutating控制器可以根据被其接受的请求修改相关对象验证validating控制器则不行。
准入控制器限制创建、删除、修改对象或连接到代理的请求,不限制读取对象的请求。
准入控制过程分为两个阶段。第一阶段,运行变更准入控制器。第二阶段,运行验证准入控制器。
再次提醒,某些控制器既是变更准入控制器又是验证准入控制器。
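Admission plugins that accept configuration are usually given it through an `AdmissionConfiguration` file passed to kube-apiserver via `--admission-control-config-file`. A minimal sketch, reusing the `WebhookAdmission` type documented later in this changeset; the plugin selection and file paths are assumptions.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook          # hypothetical plugin selection
  configuration:
    # Embedded configuration object, used instead of a `path` reference.
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmission
    kubeConfigFile: /etc/kubernetes/webhook-kubeconfig.yaml   # hypothetical path
```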
@ -995,10 +999,11 @@ This admission controller implements additional validations for checking incomin
{{< note >}}
<!--
Support for volume resizing is available as an alpha feature. Admins must set the feature gate `ExpandPersistentVolumes`
Support for volume resizing is available as a beta feature. As a cluster administrator,
you must ensure that the feature gate `ExpandPersistentVolumes` is set
to `true` to enable resizing.
-->
对调整卷大小的支持是一种 Alpha 特性。管理员必须将特性门控 `ExpandPersistentVolumes`
对调整卷大小的支持是一种 Beta 特性。作为集群管理员,你必须确保特性门控 `ExpandPersistentVolumes`
设置为 `true` 才能启用调整大小。
{{< /note >}}
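Resizing also requires the volume's StorageClass to allow expansion; a hedged sketch follows, where the name, provisioner, and parameters are assumptions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-example          # hypothetical name
provisioner: kubernetes.io/gce-pd  # hypothetical provisioner
allowVolumeExpansion: true         # permits PVCs of this class to be enlarged
parameters:
  type: pd-standard                # hypothetical provisioner parameter
```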
@ -1173,7 +1178,7 @@ PodNodeSelector 允许 Pod 强制在特定标签的节点上运行。
### PodSecurity {#podsecurity}
{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
<!--
This is the replacement for the deprecated [PodSecurityPolicy](#podsecuritypolicy) admission controller

View File

@ -263,7 +263,7 @@ no
```
<!--
Similarly, to check whether a Service Account named `dev-sa` in Namespace `dev`
Similarly, to check whether a ServiceAccount named `dev-sa` in Namespace `dev`
can list Pods in the Namespace `target`:
-->
类似地,检查名字空间 `dev` 里的 `dev-sa` 服务账号是否可以列举名字空间 `target` 里的 Pod
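The same check can also be expressed directly against the authorization API as a SubjectAccessReview; a hedged sketch for the ServiceAccount above:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # A ServiceAccount is identified by its generated username.
  user: system:serviceaccount:dev:dev-sa
  resourceAttributes:
    namespace: target
    verb: list
    resource: pods
```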

View File

@ -15,12 +15,12 @@ content_type: concept
<!-- overview -->
<!--
In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes master components, specifically kube-apiserver.
In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes control plane components, specifically kube-apiserver.
In order to ensure that communication is kept private, not interfered with, and ensure that each component of the cluster is talking to another trusted component, we strongly
recommend using client TLS certificates on nodes.
-->
在一个 Kubernetes 集群中,工作节点上的组件kubelet 和 kube-proxy需要与
Kubernetes 主控组件通信,尤其是 kube-apiserver。
Kubernetes 控制平面组件通信,尤其是 kube-apiserver。
为了确保通信本身是私密的、不被干扰,并且确保集群的每个组件都在与另一个
可信的组件通信,我们强烈建议使用节点上的客户端 TLS 证书。
@ -89,7 +89,7 @@ Note that the above process depends upon:
All of the following are responsibilities of whoever sets up and manages the cluster:
1. Creating the CA key and certificate
2. Distributing the CA certificate to the master nodes, where kube-apiserver is running
2. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running
3. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet
4. Signing the kubelet certificate using the CA key
5. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running
@ -100,7 +100,7 @@ a cluster.
负责部署和管理集群的人有以下责任:
1. 创建 CA 密钥和证书
2. 将 CA 证书发布到 kube-apiserver 运行所在的主控节点上
2. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上
3. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、
CN 取值与众不同的密钥和证书
4. 使用 CA 密钥对 kubelet 证书签名
@ -191,21 +191,21 @@ In addition, you need your Kubernetes Certificate Authority (CA).
## Certificate Authority
As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. As without bootstrapping, these will be used
to sign the kubelet certificate. As before, it is your responsibility to distribute them to master nodes.
to sign the kubelet certificate. As before, it is your responsibility to distribute them to control plane nodes.
-->
## 证书机构 {#certificate-authority}
就像在没有启动引导的情况下,你会需要证书机构CA密钥和证书。
这些数据会被用来对 kubelet 证书进行签名。
如前所述,将证书机构密钥和证书发布到主控节点是你的责任。
如前所述,将证书机构密钥和证书发布到控制平面节点是你的责任。
<!--
For the purposes of this document, we will assume these have been distributed to master nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key).
For the purposes of this document, we will assume these have been distributed to control plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key).
We will refer to these as "Kubernetes CA certificate and key".
All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded.
-->
就本文而言,我们假定这些数据被发布到主控节点上的
就本文而言,我们假定这些数据被发布到控制平面节点上的
`/var/lib/kubernetes/ca.pem`(证书)和
`/var/lib/kubernetes/ca-key.pem`(密钥)文件中。
我们将这两个文件称作“Kubernetes CA 证书和密钥”。
@ -360,7 +360,7 @@ If you want to use bootstrap tokens, you must enable it on kube-apiserver with t
<!--
#### Token authentication file
kube-apiserver has an ability to accept tokens as authentication.
kube-apiserver has the ability to accept tokens as authentication.
These tokens are arbitrary but should represent at least 128 bits of entropy derived
from a secure random number generator (such as `/dev/urandom` on most modern Linux
systems). There are multiple ways you can generate a token. For example:
@ -475,12 +475,12 @@ In order for the controller-manager to sign certificates, it needs the following
<!--
### Access to key and certificate
As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the master nodes.
As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes.
These will be used by the controller-manager to sign the kubelet certificates.
-->
### 访问密钥和证书 {#access-to-key-and-certificate}
如前所述,你需要创建一个 Kubernetes CA 密钥和证书,并将其发布到主控节点。
如前所述,你需要创建一个 Kubernetes CA 密钥和证书,并将其发布到控制平面节点。
这些数据会被控制器管理器用来对 kubelet 证书进行签名。
<!--
@ -614,11 +614,11 @@ collection.
<!--
## kubelet configuration
Finally, with the master nodes properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet.
Finally, with the control plane properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet.
-->
## kubelet 配置 {#kubelet-configuration}
最后,当主控节点被正确配置并且所有必要的身份认证和鉴权机制都就绪时,
最后,当控制平面节点被正确配置并且所有必要的身份认证和鉴权机制都就绪时,
我们可以配置 kubelet。
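At this point the kubelet is typically started with `--bootstrap-kubeconfig` pointing at the bootstrap credentials, plus a configuration file. A hedged KubeletConfiguration sketch that enables certificate rotation and reuses the CA path assumed earlier on this page:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    # CA bundle used to verify client certificates presented to the kubelet.
    clientCAFile: /var/lib/kubernetes/ca.pem
# Rotate the client certificate obtained through TLS bootstrapping.
rotateCertificates: true
# Also request a serving certificate from the certificates API.
serverTLSBootstrap: true
```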
<!--

View File

@ -318,10 +318,10 @@ If set, any request presenting a client certificate signed by one of the authori
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
The path to the cloud provider configuration file. Empty string for no configuration file. (DEPRECATED: will be removed in 1.23, in favor of removing cloud providers code from Kubelet.)
The path to the cloud provider configuration file. Empty string for no configuration file. (DEPRECATED: will be removed in 1.24 or later, in favor of removing cloud providers code from Kubelet.)
-->
云驱动配置文件的路径。空字符串表示没有配置文件。
已弃用:将在 1.23 版本中移除,以便于从 kubelet 中去除云驱动代码。
已弃用:将在 1.24 或更高版本中移除,以便于从 kubelet 中去除云驱动代码。
</td>
</tr>
@ -331,11 +331,11 @@ The path to the cloud provider configuration file. Empty string for no configura
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
The provider for cloud services. Set to empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). (DEPRECATED: will be removed in 1.23, in favor of removing cloud provider code from Kubelet.)
The provider for cloud services. Set to empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). (DEPRECATED: will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.)
-->
云服务的提供者。设置为空字符串表示在没有云驱动的情况下运行。
如果设置了此标志,则云驱动负责确定节点的名称(参考云提供商文档以确定是否以及如何使用主机名)。
已弃用:将在 1.23 版本中移除,以便于从 kubelet 中去除云驱动代码。
已弃用:将在 1.24 或更高版本中移除,以便于从 kubelet 中去除云驱动代码。
</td>
</tr>
@ -574,14 +574,17 @@ Use this for the <code>docker</code> endpoint to communicate with. This docker-s
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The <code>DynamicKubeletConfig</code> feature gate must be enabled to pass this flag; this gate currently defaults to <code>true</code> because the feature is beta.
The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The <code>DynamicKubeletConfig</code> feature gate must be enabled to pass this flag. (DEPRECATED: Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA. It is planned to be removed from Kubernetes in the version 1.24 or later. Please use alternative ways to update kubelet configuration.)
-->
kubelet 使用此目录来保存所下载的配置,跟踪配置运行状况。
如果目录不存在,则 kubelet 创建该目录。此路径可以是绝对路径,也可以是相对路径。
相对路径从 kubelet 的当前工作目录计算。
设置此参数将启用动态 kubelet 配置。必须启用 <code>DynamicKubeletConfig</code>
特性门控之后才能设置此标志;由于此特性为 beta 阶段,对应的特性门控当前默认为
<code>true</code>
特性门控之后才能设置此标志。
(已弃用DynamicKubeletConfig 功能在 1.22 中已弃用,不会移至 GA。
计划在 1.24 或更高版本中从 Kubernetes 中移除。
请使用其他方式来更新 kubelet 配置。)
</td>
</tr>
@ -781,12 +784,12 @@ Whether kubelet should exit upon lock-file contention.
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
When set to <code>true</code>, Hard eviction thresholds will be ignored while calculating node allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: will be removed in 1.23)
When set to <code>true</code>, Hard eviction thresholds will be ignored while calculating node allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: will be removed in 1.24 or later)
-->
设置为 <code>true</code> 表示在计算节点可分配资源数量时忽略硬性逐出阈值设置。
参考<a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/">
相关文档</a>
已启用:将在 1.23 版本中移除。
已弃用:将在 1.24 或更高版本中移除。
</td>
</tr>
@ -808,11 +811,11 @@ When set to <code>true</code>, Hard eviction thresholds will be ignored while ca
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
[Experimental] if set to <code>true</code>, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount (DEPRECATED: will be removed in 1.23, in favor of using CSI.)
[Experimental] if set to <code>true</code>, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount (DEPRECATED: will be removed in 1.24 or later, in favor of using CSI.)
-->
[实验性特性] 设置为 <code>true</code> 表示 kubelet 在进行挂载卷操作之前要
在本节点上检查所需的组件(如可执行文件等)是否存在。
已弃用:将在 1.23 版本中移除,以便使用 CSI。
已弃用:将在 1.24 或更高版本中移除,以便使用 CSI。
</td>
</tr>
@ -822,11 +825,11 @@ When set to <code>true</code>, Hard eviction thresholds will be ignored while ca
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. This flag will be removed in 1.23. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. This flag will be removed in 1.24 or later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
-->
设置为 true 表示 kubelet 将会集成内核的 memcg 通知机制而不是使用轮询机制来
判断是否达到了内存驱逐阈值。
此标志将在 1.23 版本移除。
此标志将在 1.24 或更高版本移除。
已弃用:应在 <code>--config</code> 所给的配置文件中进行设置。
<a href="https://kubernetes.io/zh/docs/tasks/administer-cluster/kubelet-config-file/">进一步了解</a>
</td>
@ -853,10 +856,10 @@ If enabled, the kubelet will integrate with the kernel memcg notification to det
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
[Experimental] Path of mounter binary. Leave empty to use the default <code>mount</code>. (DEPRECATED: will be removed in 1.23, in favor of using CSI.)
[Experimental] Path of mounter binary. Leave empty to use the default <code>mount</code>. (DEPRECATED: will be removed in 1.24 or later, in favor of using CSI.)
-->
[实验性特性] 卷挂载器mounter的可执行文件的路径。设置为空表示使用默认挂载器 <code>mount</code>
已弃用:将在 1.23 版本移除以支持 CSI。
已弃用:将在 1.24 或更高版本移除以支持 CSI。
</td>
</tr>
@ -2216,10 +2219,10 @@ Timeout of all runtime requests except long running request - <code>pull</code>,
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
&lt;Warning: Alpha feature&gt; Directory path for seccomp profiles. (DEPRECATED: will be removed in 1.23, in favor of using the <code><root-dir>/seccomp</code> directory)
&lt;Warning: Alpha feature&gt; Directory path for seccomp profiles. (DEPRECATED: will be removed in 1.24 or later, in favor of using the <code><root-dir>/seccomp</code> directory)
-->
&lt;警告alpha 特性&gt; seccomp 配置文件目录。
已弃用:将在 1.23 版本中移除,以使用 <code>&lt;root-dir&gt;/seccomp</code> 目录。
已弃用:将在 1.24 或更高版本中移除,以使用 <code>&lt;root-dir&gt;/seccomp</code> 目录。
</td>
</tr>

View File

@ -11,7 +11,11 @@ package: apiserver.config.k8s.io/v1
auto_generated: true
-->
v1 包中包含 API 的 v1 版本。
<!--
<p>Package v1 is the v1 version of the API.</p>
-->
<p>v1 包中包含 API 的 v1 版本。</p>
<!--
## Resource Types
@ -23,9 +27,9 @@ v1 包中包含 API 的 v1 版本。
## `AdmissionConfiguration` {#apiserver-config-k8s-io-v1-AdmissionConfiguration}
<!--
AdmissionConfiguration provides versioned configuration for admission controllers.
<p>AdmissionConfiguration provides versioned configuration for admission controllers.</p>
-->
AdmissionConfiguration 为准入控制器提供版本化的配置。
<p>AdmissionConfiguration 为准入控制器提供版本化的配置。</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -39,9 +43,9 @@ AdmissionConfiguration 为准入控制器提供版本化的配置。
</td>
<td>
<!--
Plugins allows specifying a configuration per admission control plugin.
<p>Plugins allows specifying a configuration per admission control plugin.</p>
-->
<code>plugins</code> 字段允许为每个准入控制插件设置配置选项。
<p><code>plugins</code> 字段允许为每个准入控制插件设置配置选项。</p>
</td>
</tr>
@ -58,9 +62,9 @@ AdmissionConfiguration 为准入控制器提供版本化的配置。
- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)
<!--
AdmissionPluginConfiguration provides the configuration for a single plug-in.
<p>AdmissionPluginConfiguration provides the configuration for a single plug-in.</p>
-->
AdmissionPluginConfiguration 为某个插件提供配置信息。
<p>AdmissionPluginConfiguration 为某个插件提供配置信息。</p>
<table class="table">
<thead><tr><th width="30%"><!-- Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -71,10 +75,10 @@ AdmissionPluginConfiguration 为某个插件提供配置信息。
</td>
<td>
<!--
Name is the name of the admission controller.
It must match the registered admission plugin name.
<p>Name is the name of the admission controller.
It must match the registered admission plugin name.</p>
-->
<code>name</code> 是准入控制器的名称。它必须与所注册的准入插件名称匹配。
<p><code>name</code> 是准入控制器的名称。它必须与所注册的准入插件名称匹配。</p>
</td>
</tr>
@ -83,23 +87,23 @@ It must match the registered admission plugin name.
</td>
<td>
<!--
Path is the path to a configuration file that contains the plugin's
configuration
<p>Path is the path to a configuration file that contains the plugin's
configuration</p>
-->
<code>path</code> 是指向包含插件配置信息的配置文件的路径。
<p><code>path</code> 是指向包含插件配置信息的配置文件的路径。</p>
</td>
</tr>
<tr><td><code>configuration</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
</td>
<td>
<!--
Configuration is an embedded configuration object to be used as the plugin's
configuration. If present, it will be used instead of the path to the configuration file.
<p>Configuration is an embedded configuration object to be used as the plugin's
configuration. If present, it will be used instead of the path to the configuration file.</p>
-->
<code>configuration</code> 是一个内嵌的配置对象,用来保存插件的配置信息。
如果存在,则使用这里的配置信息而不是指向配置文件的路径。
<p><code>configuration</code> 是一个内嵌的配置对象,用来保存插件的配置信息。
如果存在,则使用这里的配置信息而不是指向配置文件的路径。</p>
</td>
</tr>

View File

@ -13,11 +13,11 @@ auto_generated: true
-->
<!--
Package v1 is the v1 version of the API.
<p>Package v1 is the v1 version of the API.</p>
## Resource Types
-->
此 API 的版本是 v1。
<p>此 API 的版本是 v1。</p>
## 资源类型 {#resource-types}
@ -26,9 +26,9 @@ Package v1 is the v1 version of the API.
## `WebhookAdmission` {#apiserver-config-k8s-io-v1-WebhookAdmission}
<!--
WebhookAdmission provides configuration for the webhook admission controller.
<p>WebhookAdmission provides configuration for the webhook admission controller.</p>
-->
WebhookAdmission 为 Webhook 准入控制器提供配置信息。
<p>WebhookAdmission 为 Webhook 准入控制器提供配置信息。</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -42,8 +42,8 @@ WebhookAdmission 为 Webhook 准入控制器提供配置信息。
<code>string</code>
</td>
<td>
<!--KubeConfigFile is the path to the kubeconfig file.-->
字段 kubeConfigFile 包含指向 kubeconfig 文件的路径。
<!--<p>KubeConfigFile is the path to the kubeconfig file.</p>-->
<p>字段 kubeConfigFile 包含指向 kubeconfig 文件的路径。</p>
</td>
</tr>

View File

@ -1,799 +0,0 @@
---
title: kube-scheduler Policy Configuration (v1)
content_type: tool-reference
package: kubescheduler.config.k8s.io/v1
auto_generated: true
---
## Resource Types
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
## `Policy` {#kubescheduler-config-k8s-io-v1-Policy}
Policy describes a struct for a policy resource used in api.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>Policy</code></td></tr>
<tr><td><code>predicates</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PredicatePolicy"><code>[]PredicatePolicy</code></a>
</td>
<td>
Holds the information to configure the fit predicate functions</td>
</tr>
<tr><td><code>priorities</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PriorityPolicy"><code>[]PriorityPolicy</code></a>
</td>
<td>
Holds the information to configure the priority functions</td>
</tr>
<tr><td><code>extenders</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LegacyExtender"><code>[]LegacyExtender</code></a>
</td>
<td>
Holds the information to communicate with the extender(s)</td>
</tr>
<tr><td><code>hardPodAffinitySymmetricWeight</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule
corresponding to every RequiredDuringScheduling affinity rule.
HardPodAffinitySymmetricWeight represents the weight of implicit PreferredDuringScheduling affinity rule, in the range 1-100.</td>
</tr>
<tr><td><code>alwaysCheckAllPredicates</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
When AlwaysCheckAllPredicates is set to true, scheduler checks all
the configured predicates even after one or more of them fails.
When the flag is set to false, scheduler skips checking the rest
of the predicates after it finds one predicate that failed.</td>
</tr>
</tbody>
</table>
## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1-ExtenderManagedResource}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)
ExtenderManagedResource describes the arguments of extended resources
managed by an extender.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Name is the extended resource name.</td>
</tr>
<tr><td><code>ignoredByScheduler</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
IgnoredByScheduler indicates whether kube-scheduler should ignore this
resource when applying predicates.</td>
</tr>
</tbody>
</table>
## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)
ExtenderTLSConfig contains settings to enable TLS with extender
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>insecure</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Server should be accessed without verifying the TLS certificate. For testing only.</td>
</tr>
<tr><td><code>serverName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
ServerName is passed to the server for SNI and is used in the client to check server
certificates against. If ServerName is empty, the hostname used to contact the
server is used.</td>
</tr>
<tr><td><code>certFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server requires TLS client certificate authentication</td>
</tr>
<tr><td><code>keyFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server requires TLS client certificate authentication</td>
</tr>
<tr><td><code>caFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Trusted root certificates for server</td>
</tr>
<tr><td><code>certData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
CertData holds PEM-encoded bytes (typically read from a client certificate file).
CertData takes precedence over CertFile</td>
</tr>
<tr><td><code>keyData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
KeyData holds PEM-encoded bytes (typically read from a client certificate key file).
KeyData takes precedence over KeyFile</td>
</tr>
<tr><td><code>caData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
CAData holds PEM-encoded bytes (typically read from a root certificates bundle).
CAData takes precedence over CAFile</td>
</tr>
</tbody>
</table>
## `LabelPreference` {#kubescheduler-config-k8s-io-v1-LabelPreference}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
LabelPreference holds the parameters that are used to configure the corresponding priority function
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>label</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Used to identify node "groups"</td>
</tr>
<tr><td><code>presence</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
This is a boolean flag
If true, higher priority is given to nodes that have the label
If false, higher priority is given to nodes that do not have the label</td>
</tr>
</tbody>
</table>
## `LabelsPresence` {#kubescheduler-config-k8s-io-v1-LabelsPresence}
**Appears in:**
- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument)
LabelsPresence holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>labels</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
The list of labels that identify node "groups"
All of the labels should be either present (or absent) for the node to be considered a fit for hosting the pod</td>
</tr>
<tr><td><code>presence</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
The boolean flag that indicates whether the labels should be present or absent from the node</td>
</tr>
</tbody>
</table>
## `LegacyExtender` {#kubescheduler-config-k8s-io-v1-LegacyExtender}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
LegacyExtender holds the parameters used to communicate with the extender. If a verb is unspecified/empty,
it is assumed that the extender chose not to provide that extension.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>urlPrefix</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
URLPrefix at which the extender is available</td>
</tr>
<tr><td><code>filterVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender.</td>
</tr>
<tr><td><code>preemptVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender.</td>
</tr>
<tr><td><code>prioritizeVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender.</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
The numeric multiplier for the node scores that the prioritize call generates.
The weight should be a positive integer</td>
</tr>
<tr><td><code>bindVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender.
If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender
can implement this function.</td>
</tr>
<tr><td><code>enableHttps</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
EnableHTTPS specifies whether https should be used to communicate with the extender</td>
</tr>
<tr><td><code>tlsConfig</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig"><code>ExtenderTLSConfig</code></a>
</td>
<td>
TLSConfig specifies the transport layer security config</td>
</tr>
<tr><td><code>httpTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/time#Duration"><code>time.Duration</code></a>
</td>
<td>
HTTPTimeout specifies the timeout duration for a call to the extender. A filter timeout fails the scheduling of the pod. A prioritize
timeout is ignored; k8s/other extenders' priorities are used to select the node.</td>
</tr>
<tr><td><code>nodeCacheCapable</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
NodeCacheCapable specifies that the extender is capable of caching node information,
so the scheduler should only send minimal information about the eligible nodes
assuming that the extender already cached full details of all nodes in the cluster</td>
</tr>
<tr><td><code>managedResources</code><br/>
<a href="#kubescheduler-config-k8s-io-v1-ExtenderManagedResource"><code>[]ExtenderManagedResource</code></a>
</td>
<td>
ManagedResources is a list of extended resources that are managed by
this extender.
- A pod will be sent to the extender on the Filter, Prioritize and Bind
(if the extender is the binder) phases iff the pod requests at least
one of the extended resources in this list. If empty or unspecified,
all pods will be sent to this extender.
- If IgnoredByScheduler is set to true for a resource, kube-scheduler
will skip checking the resource in predicates.</td>
</tr>
<tr><td><code>ignorable</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Ignorable specifies if the extender is ignorable, i.e. scheduling should not
fail when the extender returns an error or is not reachable.</td>
</tr>
</tbody>
</table>
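As a rough sketch of how these fields fit together (the URL and values are placeholders, not a recommended configuration), an extender is declared in the `extenders` list of a Policy file:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:12345/scheduler",
      "filterVerb": "filter",
      "prioritizeVerb": "prioritize",
      "weight": 1,
      "enableHttps": false,
      "nodeCacheCapable": false,
      "ignorable": true
    }
  ]
}
```

Verbs that the extender does not implement (here `preemptVerb` and `bindVerb`) are left out, so the scheduler does not issue those calls.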
## `PredicateArgument` {#kubescheduler-config-k8s-io-v1-PredicateArgument}
**Appears in:**
- [PredicatePolicy](#kubescheduler-config-k8s-io-v1-PredicatePolicy)
PredicateArgument represents the arguments to configure predicate functions in scheduler policy configuration.
Only one of its members may be specified.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>serviceAffinity</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ServiceAffinity"><code>ServiceAffinity</code></a>
</td>
<td>
The predicate that provides affinity for pods belonging to a service.
It uses a label to identify nodes that belong to the same "group".</td>
</tr>
<tr><td><code>labelsPresence</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LabelsPresence"><code>LabelsPresence</code></a>
</td>
<td>
The predicate that checks whether a particular node has a certain label
defined or not, regardless of value</td>
</tr>
</tbody>
</table>
## `PredicatePolicy` {#kubescheduler-config-k8s-io-v1-PredicatePolicy}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
PredicatePolicy describes a struct of a predicate policy.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Identifier of the predicate policy.
For a custom predicate, the name can be user-defined.
For the Kubernetes-provided predicates, the name is the identifier of the pre-defined predicate.</td>
</tr>
<tr><td><code>argument</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PredicateArgument"><code>PredicateArgument</code></a>
</td>
<td>
Holds the parameters to configure the given predicate</td>
</tr>
</tbody>
</table>
## `PriorityArgument` {#kubescheduler-config-k8s-io-v1-PriorityArgument}
**Appears in:**
- [PriorityPolicy](#kubescheduler-config-k8s-io-v1-PriorityPolicy)
PriorityArgument represents the arguments to configure priority functions in scheduler policy configuration.
Only one of its members may be specified.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>serviceAntiAffinity</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ServiceAntiAffinity"><code>ServiceAntiAffinity</code></a>
</td>
<td>
The priority function that ensures a good spread (anti-affinity) for pods belonging to a service.
It uses a label to identify nodes that belong to the same "group".</td>
</tr>
<tr><td><code>labelPreference</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LabelPreference"><code>LabelPreference</code></a>
</td>
<td>
The priority function that checks whether a particular node has a certain label
defined or not, regardless of value</td>
</tr>
<tr><td><code>requestedToCapacityRatioArguments</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments"><code>RequestedToCapacityRatioArguments</code></a>
</td>
<td>
The RequestedToCapacityRatio priority function is parameterized by a function shape.</td>
</tr>
</tbody>
</table>
## `PriorityPolicy` {#kubescheduler-config-k8s-io-v1-PriorityPolicy}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
PriorityPolicy describes a struct of a priority policy.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Identifier of the priority policy.
For a custom priority, the name can be user-defined.
For the Kubernetes-provided priority functions, the name is the identifier of the pre-defined priority function.</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
The numeric multiplier for the node scores that the priority function generates.
The weight should be non-zero and can be a positive or a negative integer.</td>
</tr>
<tr><td><code>argument</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PriorityArgument"><code>PriorityArgument</code></a>
</td>
<td>
Holds the parameters to configure the given priority function</td>
</tr>
</tbody>
</table>
## `RequestedToCapacityRatioArguments` {#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
RequestedToCapacityRatioArguments holds arguments specific to RequestedToCapacityRatio priority function.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>shape</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-UtilizationShapePoint"><code>[]UtilizationShapePoint</code></a>
</td>
<td>
Array of points defining the priority function shape.</td>
</tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
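For illustration, a sketch of a `RequestedToCapacityRatioArguments` block inside a priority policy; the shape below (score rising with utilization) favors bin packing, and the weights are arbitrary example values:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {
      "name": "RequestedToCapacityRatioPriority",
      "weight": 2,
      "argument": {
        "requestedToCapacityRatioArguments": {
          "shape": [
            {"utilization": 0, "score": 0},
            {"utilization": 100, "score": 10}
          ],
          "resources": [
            {"name": "cpu", "weight": 1},
            {"name": "memory", "weight": 1}
          ]
        }
      }
    }
  ]
}
```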
## `ResourceSpec` {#kubescheduler-config-k8s-io-v1-ResourceSpec}
**Appears in:**
- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments)
ResourceSpec represents a single resource and its weight for bin packing in the RequestedToCapacityRatio priority.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Name of the resource to be managed by RequestedToCapacityRatio function.</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
Weight of the resource.</td>
</tr>
</tbody>
</table>
## `ServiceAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAffinity}
**Appears in:**
- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument)
ServiceAffinity holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>labels</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
The list of labels that identify node "groups".
All of the labels should match for the node to be considered a fit for hosting the pod.</td>
</tr>
</tbody>
</table>
## `ServiceAntiAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAntiAffinity}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
ServiceAntiAffinity holds the parameters that are used to configure the corresponding priority function.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>label</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Used to identify node "groups"</td>
</tr>
</tbody>
</table>
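For illustration, `ServiceAffinity` and `ServiceAntiAffinity` both key off node labels. The sketch below uses hypothetical `region` and `zone` labels to keep a service's pods within one region while spreading them across zones for scoring:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "RegionAffinity",
      "argument": {
        "serviceAffinity": {
          "labels": ["region"]
        }
      }
    }
  ],
  "priorities": [
    {
      "name": "ZoneSpread",
      "weight": 2,
      "argument": {
        "serviceAntiAffinity": {
          "label": "zone"
        }
      }
    }
  ]
}
```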
## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1-UtilizationShapePoint}
**Appears in:**
- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments)
UtilizationShapePoint represents a single point of the priority function shape.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>utilization</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100.</td>
</tr>
<tr><td><code>score</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
Score assigned to given utilization (y axis). Valid values are 0 to 10.</td>
</tr>
</tbody>
</table>
View File
@ -35,11 +35,14 @@ tags:
<!--more-->
<!--
Kubernetes supports several container runtimes: {{< glossary_tooltip term_id="docker">}},
Kubernetes supports container runtimes such as
{{< glossary_tooltip term_id="docker">}},
{{< glossary_tooltip term_id="containerd" >}}, {{< glossary_tooltip term_id="cri-o" >}},
and any implementation of the [Kubernetes CRI (Container Runtime
and any other implementation of the [Kubernetes CRI (Container Runtime
Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).
-->
Kubernetes 支持多个容器运行环境: {{< glossary_tooltip term_id="docker">}}、
Kubernetes 支持容器运行时,例如
{{< glossary_tooltip term_id="docker">}}、
{{< glossary_tooltip term_id="containerd" >}}、{{< glossary_tooltip term_id="cri-o" >}}
以及任何实现 [Kubernetes CRI (容器运行环境接口)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)。
以及 [Kubernetes CRI (容器运行环境接口)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
的其他任何实现。
View File
@ -32,9 +32,10 @@ tags:
<!--more-->
<!--
Most cluster administrators will use a hosted or distribution instance of Kubernetes. As a result, most Kubernetes users will need to install [extensions](/docs/concepts/extend-kubernetes/extend-cluster/#extensions) and fewer will need to author new ones.
Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters come with extensions pre-installed. As a result, most Kubernetes users will not need to install [extensions](/docs/concepts/extend-kubernetes/extend-cluster/#extensions) and even fewer users will need to author new ones.
-->
大多数集群管理员会使用托管的 Kubernetes 或其某种发行包。因此,大多数 Kubernetes 用户将需要
许多集群管理员会使用托管的 Kubernetes 或其某种发行包,这些集群预装了扩展。
因此,大多数 Kubernetes 用户将不需要
安装[扩展组件](/zh/docs/concepts/extend-kubernetes/extend-cluster/#extensions)
较少用户会需要编写新的扩展组件。
需要编写新的扩展组件的用户就更少了
View File
@ -41,15 +41,16 @@ Finalizer 提醒{{<glossary_tooltip text="控制器" term_id="controller">}}清
<!--
When you tell Kubernetes to delete an object that has finalizers specified for
it, the Kubernetes API marks the object for deletion, putting it into a
read-only state. The target object remains in a terminating state while the
it, the Kubernetes API marks the object for deletion by populating `.metadata.deletionTimestamp`,
and returns a `202` status code (HTTP "Accepted"). The target object remains in a terminating state while the
control plane, or other components, take the actions defined by the finalizers.
After these actions are complete, the controller removes the relevant finalizers
from the target object. When the `metadata.finalizers` field is empty,
Kubernetes considers the deletion complete.
-->
当你告诉 Kubernetes 删除一个指定了 Finalizer 的对象时,
Kubernetes API 会将该对象标记为删除,使其进入只读状态。
Kubernetes API 通过填充 `.metadata.deletionTimestamp` 来标记要删除的对象,
并返回`202`状态码 (HTTP "已接受") 使其进入只读状态。
此时控制平面或其他组件会采取 Finalizer 所定义的行动,
而目标对象仍然处于终止中Terminating的状态。
这些行动完成后,控制器会删除目标对象相关的 Finalizer。
View File
@ -4,7 +4,7 @@ id: flexvolume
date: 2018-06-25
full_link: /zh/docs/concepts/storage/volumes/#flexvolume
short_description: >
Flexvolume 是创建树外卷插件的一种接口
FlexVolume 是一个已弃用的接口,用于创建树外卷插件
{{< glossary_tooltip text="容器存储接口CSI" term_id="csi" >}}
是比 Flexvolume 更新的接口,它解决了 Flexvolumes 的一些问题。
@ -19,15 +19,15 @@ id: flexvolume
date: 2018-06-25
full_link: /docs/concepts/storage/volumes/#flexvolume
short_description: >
FlexVolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with FlexVolumes.
FlexVolume is a deprecated interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface that addresses several problems with FlexVolume.
aka:
tags:
- storage
-->
<!--
FlexVolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with FlexVolumes.
FlexVolume is a deprecated interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface that addresses several problems with FlexVolume.
-->
Flexvolume 是创建树外卷插件的一种接口
FlexVolume 是一个已弃用的接口,用于创建树外卷插件
{{< glossary_tooltip text="容器存储接口CSI" term_id="csi" >}}
是比 Flexvolume 更新的接口,它解决了 Flexvolume 的一些问题。
View File
@ -2,7 +2,7 @@
title: kube-scheduler
id: kube-scheduler
date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
full_link: /zh/docs/reference/command-line-tools-reference/kube-scheduler/
short_description: >
控制平面组件,负责监视新创建的、未指定运行节点的 Pod选择节点让 Pod 在上面运行。
@ -17,7 +17,7 @@ tags:
title: kube-scheduler
id: kube-scheduler
date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
full_link: /docs/reference/command-line-tools-reference/kube-scheduler/
short_description: >
Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
View File
@ -2,7 +2,7 @@
title: 抢占Preemption
id: preemption
date: 2019-01-31
full_link: /zh/docs/concepts/configuration/pod-priority-preemption/#preemption
full_link: /zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption
short_description: >
Kubernetes 中的抢占逻辑通过驱逐节点上的低优先级 Pod 来帮助悬决的
Pod 找到合适的节点。
@ -16,7 +16,7 @@ tags:
title: Preemption
id: preemption
date: 2019-01-31
full_link: /docs/concepts/configuration/pod-priority-preemption/#preemption
full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption
short_description: >
Preemption logic in Kubernetes helps a pending Pod to find a suitable Node by evicting low priority Pods existing on that Node.
@ -34,9 +34,9 @@ Kubernetes 中的抢占逻辑通过驱逐{{< glossary_tooltip term_id="node" >}}
<!--more-->
<!--
If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/configuration/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible.
If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible.
-->
如果一个 Pod 无法调度,调度器会尝试
[抢占](/zh/docs/concepts/configuration/pod-priority-preemption/#preemption)
[抢占](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption)
较低优先级的 Pod以使得悬决的 Pod 有可能被调度。
View File
@ -17,7 +17,7 @@ id: quantity
date: 2018-08-07
full_link:
short_description: >
A whole-number representation of small or large numbers using SI suffixes.
A whole-number representation of small or large numbers using [SI](https://en.wikipedia.org/wiki/International_System_of_Units) suffixes.
aka:
tags:
@ -27,7 +27,7 @@ tags:
<!--
A whole-number representation of small or large numbers using SI suffixes.
-->
使用全数字来表示较小数值或使用 SI 后缀表示较大数值的表示法。
使用全数字来表示较小数值或使用 [SI](https://zh.wikipedia.org/wiki/International_System_of_Units) 后缀表示较大数值的表示法。
<!--more-->
View File
@ -37,9 +37,11 @@ tags:
<!--more-->
<!--
Allows for more control over how sensitive information is used and reduces the risk of accidental exposure, including [encryption](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted) at rest. A {{< glossary_tooltip text="Pod" term_id="pod" >}} references the secret as a file in a volume mount or by the kubelet pulling images for a pod. Secrets are great for confidential data and [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) for non-confidential data.
Allows for more control over how sensitive information is used and reduces the risk of accidental exposure. Secret values are encoded as base64 strings and stored unencrypted by default, but can be configured to be [encrypted at rest](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted). A {{< glossary_tooltip text="Pod" term_id="pod" >}} references the secret as a file in a volume mount or by the kubelet pulling images for a pod. Secrets are great for confidential data and [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) for non-confidential data.
-->
Secret 允许用户对如何使用敏感信息进行更多的控制,并减少信息意外暴露的风险,包括静态[encryption加密](/zh/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)。
Secret 允许用户对如何使用敏感信息进行更多的控制,并减少信息意外暴露的风险。
默认情况下Secret 值被编码为 base64 字符串并以非加密的形式存储,但可以配置为
[静态加密Encrypt at rest](/zh/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)。
{{< glossary_tooltip text="Pod" term_id="pod" >}} 通过挂载卷中的文件的方式引用 Secret或者通过 kubelet 为 pod 拉取镜像时引用。
Secret 非常适合机密数据使用,而 [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 适用于非机密数据。
View File
@ -0,0 +1,46 @@
---
api_metadata:
apiVersion: ""
import: "k8s.io/api/core/v1"
kind: "LocalObjectReference"
content_type: "api_reference"
description: "LocalObjectReference 包含足够的信息,可以让你在同一命名空间内找到引用的对象。"
title: "LocalObjectReference"
weight: 4
auto_generated: true
---
<!--
api_metadata:
apiVersion: ""
import: "k8s.io/api/core/v1"
kind: "LocalObjectReference"
content_type: "api_reference"
description: "LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace."
title: "LocalObjectReference"
weight: 4
auto_generated: true
-->
`import "k8s.io/api/core/v1"`
<!--
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
-->
LocalObjectReference 包含足够的信息可以让你在同一命名空间namespace内找到引用的对象。
<hr>
<!--
- **name** (string)
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-->
- **name** (string)
被引用者的名称。
更多信息: https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/names/#names。
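As an illustration (not part of the generated reference), `imagePullSecrets` in a Pod spec is a list of LocalObjectReference values; the Pod, image, and Secret names below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod              # hypothetical Pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical private image
  imagePullSecrets:
  - name: regcred                      # a LocalObjectReference: only a name, resolved in the Pod's namespace
```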
View File
@ -16,7 +16,7 @@ content_type: concept
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是">}}
{{< glossary_definition term_id="cloud-controller-manager" length="all">}}
<!-- body -->
@ -27,44 +27,43 @@ Since cloud providers develop and release at a different pace compared to the Ku
-->
## 背景
由于云驱动的开发和发布与 Kubernetes 项目本身步调不同,将特定于云环境
的代码抽象到 `cloud-controller-manager` 二进制组件有助于云厂商独立于
Kubernetes 核心代码推进其驱动开发。
由于云驱动的开发和发布与 Kubernetes 项目本身步调不同,将特定于云环境的代码抽象到
`cloud-controller-manager` 二进制组件有助于云厂商独立于 Kubernetes
核心代码推进其驱动开发。
<!--
The Kubernetes project provides skeleton cloud-controller-manager code with Go interfaces to allow you (or your cloud provider) to plug in your own implementations. This means that a cloud provider can implement a cloud-controller-manager by importing packages from Kubernetes core; each cloudprovider will register their own code by calling `cloudprovider.RegisterCloudProvider` to update a global variable of available cloud providers.
-->
Kubernetes 项目提供 cloud-controller-manager 的框架代码,其中包含 Go
语言的接口,便于你(或者你的云驱动提供者)接驳你自己的实现。
这意味着每个云驱动可以通过从 Kubernetes 核心代码导入软件包来实现一个
cloud-controller-manager每个云驱动会通过调用
`cloudprovider.RegisterCloudProvider` 接口来注册其自身实现代码,从而更新
记录可用云驱动的全局变量。
Kubernetes 项目提供 cloud-controller-manager 的框架代码,其中包含 Go 语言的接口,
便于你(或者你的云驱动提供者)接驳你自己的实现。这意味着每个云驱动可以通过从
Kubernetes 核心代码导入软件包来实现一个 cloud-controller-manager
每个云驱动会通过调用 `cloudprovider.RegisterCloudProvider` 接口来注册其自身实现代码,
从而更新一个用来记录可用云驱动的全局变量。
<!--
## Developing
-->
## 开发
### Out of Tree
### 树外(Out of Tree
<!--
To build an out-of-tree cloud-controller-manager for your cloud, follow these steps:
-->
要为你的云环境构建一个 out-of-tree 云控制器管理器:
要为你的云环境构建一个树外Out-of-Tree云控制器管理器:
<!--
1. Create a go package with an implementation that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go).
2. Use [main.go in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go) from Kubernetes core as a template for your main.go. As mentioned above, the only difference should be the cloud package that will be imported.
3. Import your cloud package in `main.go`, ensure your package has an `init` block to run [cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go).
-->
1. 使用满足 [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go)
的实现创建一个 Go 语言包。
1. 使用满足 [`cloudprovider.Interface`](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go)
接口的实现创建一个 Go 语言包。
2. 使用来自 Kubernetes 核心代码库的
[cloud-controller-manager 中的 main.go](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go)
作为 main.go 的模板。如上所述,唯一的区别应该是将导入的云包。
作为 `main.go` 的模板。如上所述，唯一的区别应该是所导入的云包不同。
3. 在 `main.go` 中导入你的云包,确保你的包有一个 `init` 块来运行
[cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go)。
[`cloudprovider.RegisterCloudProvider`](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go)。
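A minimal Go sketch of step 3 above, assuming a hypothetical provider package `mycloud`; only the registration wiring is shown, and the actual `cloudprovider.Interface` implementation is deliberately left out:

```go
// Package mycloud is a hypothetical out-of-tree cloud provider.
package mycloud

import (
	"errors"
	"io"

	cloudprovider "k8s.io/cloud-provider"
)

// ProviderName is the name that cloud-controller-manager would select
// with --cloud-provider=mycloud (hypothetical).
const ProviderName = "mycloud"

func init() {
	// Registering a factory in an init block means that importing this
	// package in main.go (for example: _ "example.com/mycloud") is enough
	// to make the provider available by name.
	cloudprovider.RegisterCloudProvider(ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
		// A real provider would parse config and return a value that
		// implements cloudprovider.Interface; this sketch stops here.
		return nil, errors.New("mycloud: implementation not shown in this sketch")
	})
}
```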
<!--
Many cloud providers publish their controller manager code as open source. If you are creating
@ -72,15 +71,15 @@ a new cloud-controller-manager from scratch, you could take an existing out-of-t
controller manager as your starting point.
-->
很多云驱动都将其控制器管理器代码以开源代码的形式公开。
如果你在开发一个新的 cloud-controller-manager你可以选择某个 out-of-tree
如果你在开发一个新的 cloud-controller-manager你可以选择某个树外Out-of-Tree
云控制器管理器作为出发点。
### In Tree
### 树内(In Tree
<!--
For in-tree cloud providers, you can run the in-tree cloud controller manager as a {{< glossary_tooltip term_id="daemonset" >}} in your cluster. See [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) for more details.
-->
对于 in-tree 驱动,你可以将 in-tree 云控制器管理器作为群集中的
{{< glossary_tooltip term_id="daemonset" text="Daemonset" >}} 来运行。
对于树内In-Tree驱动你可以将树内云控制器管理器作为集群中的
{{< glossary_tooltip term_id="daemonset" text="DaemonSet" >}} 来运行。
有关详细信息,请参阅[云控制器管理器管理](/zh/docs/tasks/administer-cluster/running-cloud-controller/)。
View File
@ -0,0 +1,10 @@
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
name: nginx-example
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: k8s.io/ingress-nginx