Update localization guidelines (#10485)
* Update localization guidelines for language labels
Continuing work
Continuing work
Continuing work
More work in progress
Add local OWNERS folders
Add an OWNERS file to Chinese
Remove shortcode for repos
Add Japanese
Alphabetize languages, change weights accordingly
More updates
Add Korean in Korean
Add English to languageName
Feedback from gochist
Move Chinese content from cn/ to zh/
Move OWNERS from cn/ to zh/
Resolve merge conflicts by updating from master
Add files back in to prep for resolution
After rebase on upstream/master, remove files
Review and update localization guidelines
Feedback from gochist, tnir, cstoku
Add a trailing newline to content/ja/OWNERS
Add a trailing newline to content/zh/OWNERS
Drop requirement for GH repo project
Clarify language about forks/branches
Edits and typos
Remove a shortcode specific to a multi-repo language setup
Update aliases and owners
Add explicit OWNERS for content/en
Migrate content from Chinese repo, update regex in config.toml
Remove untranslated strings
Add trailing newline to content/en/OWNERS
Add trailing newlines to OWNERS files
add Jaguar project description (#10433)
* add Jaguar project description
[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes networking based on OpenDaylight.
Jaguar provides an overlay network using VXLAN, and the Jaguar CNI plugin provides one IP address per pod.
* Minor newline tweak
blog post for azure vmss (#10538)
Add microk8s to pick-right-solution.md (#10542)
* Add microk8s to pick-right-solution.md
Microk8s is a single-command installation of upstream Kubernetes on any Linux machine and should be included in the list of local-machine solutions.
* capitalized Istio
Add microk8s to foundational.md (#10543)
* Add microk8s to foundational.md
Adding microk8s as a credible and stable alternative for getting started with Kubernetes on a local machine. This is especially attractive for those not wanting to incur the overhead of running a VM for a local cluster.
* Update foundational.md
Thank you for your suggestions! LMK if this works now?
* Rewrote first paragraph
And included a bullet list of features of microk8s
* Copyedit
fix typo (#10545)
Fix the kubectl subcommands links. (#10550)
Signed-off-by: William Zhang <warmchang@outlook.com>
Fix command issue (#10515)
Signed-off-by: mooncake <xcoder@tenxcloud.com>
remove imported community files per issue 10184 (#10501)
networking.md: Markdown fix (#10498)
Fix front matter, federation command-line tools (#10500)
Clean up glossary entry (#10399)
update slack link (#10536)
typo in StatefulSet docs (#10558)
fix description of horizontal pod autoscaling (#10557)
Remove redundant symbols (#10556)
Fix issue #10520 (#10554)
Signed-off-by: William Zhang <warmchang@outlook.com>
Update api-concepts.md (#10534)
Revert "Fix command issue (#10515)"
This reverts commit c02a7fb9f9.
Update memory-constraint-namespace.md (#10530)
update memory request to 100MiB to match the YAML content
Blog: Introducing Volume Snapshot Alpha for Kubernetes (#10562)
* blog post for azure vmss
* snapshot blog post
Resolve merge conflicts in OWNERS*
Minor typo fix (#10567)
Not sure what's supposed to be here; proposing to remove it.
* Feedback from gochist
Tweaks to feedback
* Feedback from ClaudiaJKang
This commit is contained in:
parent 753f57f0e6
commit abcee2dccd
OWNERS_ALIASES

@@ -130,6 +130,36 @@ aliases:
   - rajakavitha1
   - stewart-yu
   - xiangpengzhao
   - zhangxiaoyu
+  sig-docs-ja-owners: #Team: Japanese docs localization; GH: sig-docs-ja-owners
+  - cstoku
+  - nasa9084
+  - tnir
+  sig-docs-ja-reviews: #Team: Japanese docs PR reviews; GH: sig-docs-ja-reviews
+  - cstoku
+  - makocchi-git
+  - MasayaAoyama
+  - nasa9084
+  - tnir
+  sig-docs-ko-owners: #Team Korean docs localization; GH: sig-docs-ko-owners
+  - ClaudiaJKang
+  - gochist
+  sig-docs-ko-reviews: #Team Korean docs reviews; GH: sig-docs-ko-reviews
+  - ClaudiaJKang
+  - gochist
+  - ianychoi
+  sig-docs-zh-owners: #Team Chinese docs localization; GH: sig-docs-zh-owners
+  - dchen1107
+  - haibinxie
+  - hanjiayao
+  - lichuqiang
+  - tengqm
+  - xiangpengzhao
+  - zhangxiaoyu-zidif
+  sig-docs-zh-reviews: #Team Chinese docs reviews; GH: sig-docs-zh-reviews
+  - tengqm
+  - xiangpengzhao
+  - zhangxiaoyu-zidif
   sig-federation: #Team: Federation; e.g. Federated Clusters
   - csbell
config.toml (26 changed lines)
@@ -7,7 +7,7 @@ enableRobotsTXT = true
 disableKinds = ["taxonomy", "taxonomyTerm"]
 
-ignoreFiles = [ "^OWNERS$", "README.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
+ignoreFiles = [ "^OWNERS$", "README[-]+[a-z]*\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
 
 contentDir = "content/en"
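The new `ignoreFiles` entry replaces the literal `README.md` with a regular expression so that Hugo also skips the per-language READMEs introduced by this commit (`README-ko.md`, `README-zh.md`, and so on). Note that `[-]+` requires at least one hyphen, so plain `README.md` no longer matches this pattern. A quick sanity check of the pattern (an illustrative sketch; `grep -E` stands in for Hugo's Go regexp engine, and the filenames are examples):

```shell
# Test which filenames the new ignoreFiles pattern matches.
for f in README.md README-ko.md README-zh.md README-vi.md; do
  if echo "$f" | grep -qE 'README[-]+[a-z]*\.md'; then
    echo "$f: matched (ignored by Hugo)"
  else
    echo "$f: not matched"
  fi
done
```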
@@ -131,25 +131,29 @@ description = "Production-Grade Container Orchestration"
 languageName ="English"
 # Weight used for sorting.
 weight = 1
 
-[languages.cn]
+[languages.zh]
 title = "Kubernetes"
 description = "Production-Grade Container Orchestration"
-languageName = "Chinese"
+languageName = "中文 Chinese"
 weight = 2
-contentDir = "content/cn"
+contentDir = "content/zh"
 
+[languages.ko]
+title = "Kubernetes"
+description = "Production-Grade Container Orchestration"
+languageName = "한국어 Korean"
+weight = 3
+contentDir = "content/ko"
+
 [languages.no]
 title = "Kubernetes"
 description = "Production-Grade Container Orchestration"
 languageName ="Norsk"
-weight = 3
+weight = 4
 contentDir = "content/no"
 
 [languages.no.params]
 time_format_blog = "02.01.2006"
 # A list of language codes to look for untranslated content, ordered from left to right.
 language_alternatives = ["en"]
 
-[languages.ko]
-title = "Kubernetes"
-description = "Production-Grade Container Orchestration"
-languageName = "Korean"
-weight = 4
-contentDir = "content/ko"
@@ -1,6 +0,0 @@
You need to either have a dynamic PersistentVolume provisioner with a default
[StorageClass](/docs/concepts/storage/storage-classes/),
or [statically provision PersistentVolumes](/docs/user-guide/persistent-volumes/#provisioning)
yourself to satisfy the [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
used here.

@@ -1,8 +0,0 @@
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you).
Other tutorials, such as Kelsey Hightower's
[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
might also help you create a Federated Kubernetes cluster.

@@ -1,2 +0,0 @@
The topics in the [Federation API](/docs/federation/api-reference/) section of the Kubernetes docs
are being moved to the [Reference](/docs/reference/) section. The content in this topic has moved to:

@@ -1,7 +0,0 @@
**Note:** `Federation V1`, the current Kubernetes federation API which reuses the Kubernetes API
resources 'as is', is currently considered alpha for many of its features, and there is no clear
path to evolve the API to GA. However, there is a `Federation V2` effort in progress to implement
a dedicated federation API apart from the Kubernetes API. The details can be found at
[sig-multicluster community page](https://github.com/kubernetes/community/tree/master/sig-multicluster).
{: .note}

@@ -1,3 +0,0 @@
---
headless: true
---

@@ -1,8 +0,0 @@
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. If you do not already have a
cluster, you can create one by using
[Minikube](/docs/getting-started-guides/minikube),
or you can use one of these Kubernetes playgrounds:

* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground)
* [Play with Kubernetes](http://labs.play-with-k8s.com/)

@@ -1,3 +0,0 @@
The topics in the [User Guide](/docs/user-guide/) section of the Kubernetes docs
are being moved to the [Tasks](/docs/tasks/), [Tutorials](/docs/tutorials/), and
[Concepts](/docs/concepts) sections. The content in this topic has moved to:

@@ -1,12 +0,0 @@
<table style="background-color:#eeeeee">
<tr>
<td>
<p><b>NOTICE</b></p>
<p>As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group have begun migration of the User Guide content as announced previously to the <a href="https://git.k8s.io/community/sig-docs">SIG Docs community</a> through the <a href="https://groups.google.com/forum/#!forum/kubernetes-sig-docs">kubernetes-sig-docs</a> group and <a href="https://kubernetes.slack.com/messages/sig-docs/">kubernetes.slack.com #sig-docs</a> channel.</p>
<p>The user guides within this section are being refactored into topics within Tutorials, Tasks, and Concepts. Anything that has been moved will have a notice placed in its previous location as well as a link to its new location. The reorganization implements a new table of contents and should improve the documentation's findability and readability for a wider range of audiences.</p>
<p>For any questions, please contact: <a href="mailto:kubernetes-sig-docs@googlegroups.com">kubernetes-sig-docs@googlegroups.com</a></p>
</td>
</tr>
</table>
@@ -0,0 +1,11 @@
# This is the directory for English source content.
# Teams and members are visible at https://github.com/orgs/kubernetes/teams.

reviewers:
- sig-docs-en-reviews

approvers:
- sig-docs-en-owners

labels:
- language/en
@@ -4,17 +4,19 @@ content_template: templates/concept
 approvers:
 - chenopis
 - zacharysarah
 - zparnold
 ---
 
 {{% capture overview %}}
 
-The Kubernetes documentation is currently available in [multiple languages](#supported-languages) and we encourage you to add new localizations ([l10n](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/))!
+Documentation for Kubernetes is available in multiple languages:
 
-Currently available languages:
+- English
+- Chinese
+- Japanese
+- Korean
 
-{{< language-repos-list >}}
 
-In order for localizations to be accepted, however, they must fulfill some requirements related to workflow (*how* to localize) and output (*what* to localize).
+We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)!
 
 {{% /capture %}}

@@ -22,42 +24,51 @@ In order for localizations to be accepted, however, they must fulfill some requirements
 
 {{% capture body %}}
 
-## Workflow
-
-The Kubernetes documentation for all languages is built from the [kubernetes/website](https://github.com/kubernetes/website) repository on GitHub. Most day-to-day work on translations, however, happens in separate translation repositories. Changes to those repositories are then [periodically](#upstream-contributions) synced to the main kubernetes/website repository via [pull request](../create-pull-request).
-
-Work on the Chinese translation, for example, happens in the [kubernetes/kubernetes-docs-zh](https://github.com/kubernetes/kubernetes-docs-zh) repository.
-
-{{< note >}}
-**Note**: For an example localization-related [pull request](../create-pull-request), see [this pull request](https://github.com/kubernetes/website/pull/8636) to the [Kubernetes website repo](https://github.com/kubernetes/website) adding Korean localization to the Kubernetes docs.
-{{< /note >}}
-
-## Source Files
-
-Localizations must use English files from the most recent major release as sources. To find the most recent release's documentation source files:
-
-1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
-2. Select the `release-1.X` branch for the most recent version, which is currently **{{< latest-version >}}**, making the most recent release branch [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}).
-
 ## Getting started
 
-In order to add a new localization of the Kubernetes documentation, you'll need to make a few modifications to the site's [configuration](#configuration) and [directory structure](#new-directory), and then you can get to work [translating documents](#translating-documents)!
+Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize).
 
-To get started, clone the website repo and `cd` into it:
+To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)!
+
+Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). We're happy to help you get started and answer any questions you have.
+
+All localization teams must be self-sustaining with their own resources. We're happy to host your work, but we can't translate it for you.
+
+### Fork and clone the repo
+
+First, [create your own fork](https://help.github.com/articles/fork-a-repo/) of the [kubernetes/website](https://github.com/kubernetes/website) repository.
+
+Then, clone the website repo and `cd` into it:
 
 ```shell
 git clone https://github.com/kubernetes/website
 cd website
+git checkout {{< release-branch >}}
 ```
 
-## Configuration
+{{< note >}}
+Contributors to `k/website` must [create a fork](https://kubernetes.io/docs/contribute/start/#improve-existing-content) from which to open pull requests. For localizations, we ask additionally that:
 
-We'll walk you through the configuration process using the German language (language code `de`) as an example.
+1. Team approvers open development branches directly from https://github.com/kubernetes/website.
+2. Localization contributors work from forks, with branches based on the current development branch.
 
-There's currently no translation for German, but you're welcome to create one using the instructions here.
+This is because localization projects are collaborative efforts on long-running branches, similar to the development branches for the Kubernetes release cycle. For information about localization pull requests, see ["branching strategy"](#branching-strategy).
+{{< /note >}}
 
-The Kubernetes website's configuration is in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. You need to add a configuration block for the new language to that file, under the existing `[languages]` block. The German block, for example, looks like this:
+### Find your two-letter language code
+
+Consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) to find your localization's two-letter language code. For example, the two-letter code for German is `de`.
+
+{{< note >}}
+These instructions use the [ISO 639-1](https://www.loc.gov/standards/iso639-2/php/code_list.php) language code for German (`de`) as an example.
+
+There's currently no Kubernetes localization for German, but you're welcome to create one!
+{{< /note >}}
+
+### Modify the site configuration
+
+The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. To support a new localization, you'll need to modify `config.toml`.
+
+Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like:
 
 ```toml
 [languages.de]

@@ -68,74 +79,128 @@ contentDir = "content/de"
 weight = 3
 ```
 
-When assigning a `weight` parameter, see which of the current languages has the highest weight and add 1 to that value.
+When assigning a `weight` parameter for your block, find the language block with the highest weight and add 1 to that value.
 
-Now add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/master/content) folder. The two-letter code for German is `de`, so add a `content/de` directory:
+For more information about Hugo's multilingual support, see "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)".
+
+### Add a new localization directory
+
+Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/master/content) folder in the repository. For example, the two-letter code for German is `de`:
 
 ```shell
 mkdir content/de
 ```
 
+### Add a localized README
+
+To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of k/website, where `**` is the two-letter language code. For example, a German README file would be `README-de.md`.
+
+Provide guidance to localization contributors in the localized `README-**.md` file. Include the same information contained in `README.md` as well as:
+
+- A point of contact for the localization project
+- Any information specific to the localization
+
+After you create the localized README, add a link to the file from the main English file, `README.md`, and include contact information in English. You can provide a GitHub ID, email address, [Slack channel](https://slack.com/), or other method of contact.
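As a minimal sketch of the README steps above (file names follow the German example; the contents are illustrative):

```shell
# Start the localized README from the English one, then adapt it.
cp README.md README-de.md
# Edit README-de.md: keep the information from README.md, then add a point
# of contact and any notes specific to the localization.
# Finally, link to README-de.md from README.md.
```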
 ## Translating documents
 
-We understand that localizing *all* of the Kubernetes documentation would be an enormous task. We're okay with localizations starting small and expanding over time.
+Localizing *all* of the Kubernetes documentation is an enormous task. It's okay to start small and expand over time.
 
-As an initial requirement, all localizations must include the following documentation at a minimum:
+At a minimum, all localizations must include:
 
 Description | URLs
 -----|-----
 Home | [All heading and subheading URLs](https://kubernetes.io/docs/home/)
 Setup | [All heading and subheading URLs](https://kubernetes.io/docs/setup/)
 Tutorials | [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/), [Hello Minikube](https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/)
 Site strings | [All site strings in a new localized TOML file](https://github.com/kubernetes/website/tree/master/i18n)
 
-Translated documents should have the same URL endpoint as the English docs (substituting the subdirectory of the `content` folder). To translate the [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/) doc into German, for example, create the proper subfolder under the `content/de` folder and copy the English doc:
+Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the same URL path as the English source. For example, to prepare the [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/) tutorial for translation into German, create a subfolder under the `content/de/` folder and copy the English source:
 
 ```shell
 mkdir -p content/de/docs/tutorials
 cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md
 ```
 
+For an example of a localization-related [pull request](../create-pull-request), [this pull request](https://github.com/kubernetes/website/pull/10471) to the [Kubernetes website repo](https://github.com/kubernetes/website) added Korean localization to the Kubernetes docs.
+
+### Source Files
+
+Localizations must use English files from the most recent release as their source. The most recent version is **{{< latest-version >}}**.
+
+To find source files for the most recent release:
+
+1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
+2. Select the `release-1.X` branch for the most recent version.
+
+The latest version is **{{< latest-version >}}**, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}).
+
+### Site strings in i18n/
+
+Localizations must include the contents of [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) in a new language-specific file. Using German as an example: `i18n/de.toml`.
+
+Add a new localization file to `i18n/`. For example, with German (`de`):
+
+```shell
+cp i18n/en.toml i18n/de.toml
+```
+
+Then translate the value of each string:
+
+```TOML
+[docs_label_i_am]
+other = "ICH BIN..."
+```
+
+Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.
+
 ## Project logistics
 
-### Contact with project chairs
+### Contact the SIG Docs chairs
 
-When starting a new localization effort, you should get in touch with one of the chairs of the Kubernetes [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) organization. The current chairs are listed [here](https://github.com/kubernetes/community/tree/master/sig-docs#chairs).
-
-### Project information
-
-Teams working on localization efforts must provide a single point of contact, including the name and contact information of a person who can respond to or redirect questions or concerns, listed in the translation repository's main [`README`](https://help.github.com/articles/about-readmes/). You can provide an email address, email list, [Slack channel](https://slack.com/), or some other method of contact.
+Contact one of the Kubernetes [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs#chairs) chairs when you start a new localization.
 
 ### Maintainers
 
-Each localization repository must select its own maintainers. Maintainers can be from a single organization or multiple organizations.
+Each localization repository must provide its own maintainers. Maintainers can be from a single organization or multiple organizations. Whenever possible, localization pull requests should be approved by a reviewer from a different organization than the translator.
 
-In addition, all l10n work must be self-sustaining with the team's own resources.
+A localization must provide a minimum of two maintainers. (It's not possible to review and approve one's own work.)
 
-Wherever possible, every localized page must be approved by a reviewer from a different company than the translator.
+### Branching strategy
 
-### GitHub project
+Because localization projects are highly collaborative efforts, we encourage teams to work from a shared development branch.
 
-Each Kubernetes localization repository must track its overall progress with a [GitHub project](https://help.github.com/articles/creating-a-project-board/).
+To collaborate on a development branch:
 
-Projects must include at least these columns:
+1. A team member opens a development branch, usually by opening a new pull request against a source branch on https://github.com/kubernetes/website.
 
-- To Do
-- In Progress
-- Done
+   We recommend the following branch naming scheme:
 
-{{< note >}}
-**Note**: For an example GitHub project, see the [Chinese localization project](https://github.com/kubernetes/kubernetes-docs-zh/projects/1).
-{{< /note >}}
+   `dev-<source version>-<language code>.<team milestone>`
 
-### Repository structure
+   For example, an approver on a German localization team opens the development branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12.
 
-Each l10n repository must have branches for the different Kubernetes documentation release versions, matching the branches in the main [kubernetes/website](https://github.com/kubernetes/website) documentation repository. For example, the kubernetes/website `release-1.10` branch (https://github.com/kubernetes/website/tree/release-1.10) has a corresponding branch in the kubernetes/kubernetes-docs-zh repository (https://github.com/kubernetes/kubernetes-docs-zh/tree/release-1.10). These version branches keep track of the differences in the documentation between Kubernetes versions.
+2. Individual contributors open feature branches based on the development branch.
+
+   For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`.
+
+3. Approvers review and merge feature branches into the development branch.
+
+4. Periodically, an approver merges the development branch to its source branch.
+
+Repeat steps 1-4 as needed until the localization is complete, as sketched in the example below. For example, subsequent German development branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
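A rough walk-through of this workflow from a contributor's point of view, assuming the German team and Kubernetes v1.12 from the examples above (remote and branch names are illustrative):

```shell
# One-time setup: clone your fork and track the upstream repository.
git clone https://github.com/<username>/website
cd website
git remote add upstream https://github.com/kubernetes/website

# Step 2: base a feature branch on the team's current development branch.
git fetch upstream
git checkout -b translate-kubernetes-basics upstream/dev-1.12-de.1

# ...translate and commit, then push and open a PR against
# kubernetes:dev-1.12-de.1 (steps 3-4 are handled by approvers).
git push origin translate-kubernetes-basics
```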
+Teams must merge localized content into the same release branch from which the content was sourced. For example, a development branch sourced from {{< release-branch >}} must be based on {{< release-branch >}}.
+
+An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch.
+
+While only approvers can merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required.
+
+For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo).
+
 ### Upstream contributions
 
-Upstream contributions are welcome and encouraged!
-
-For the sake of efficiency, limit upstream contributions to a single pull request per week, containing a single [squashed commit](https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit).
+SIG Docs welcomes upstream contributions and corrections to the English source! Open a [pull request](https://kubernetes.io/docs/contribute/start/#improve-existing-content) (from a fork) with any updates.
 
 {{% /capture %}}

@@ -143,7 +208,7 @@ For the sake of efficiency, limit upstream contributions to a single pull request
 
 Once a localization meets requirements for workflow and minimum output, SIG Docs will:
 
-- Work with the localization team to implement language selection on the website.
-- Publicize availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels.
+- Enable language selection on the website
+- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).
 
 {{% /capture %}}
@@ -0,0 +1,11 @@
# This is the localization project for Japanese.
# Teams and members are visible at https://github.com/orgs/kubernetes/teams.

reviewers:
- sig-docs-ja-reviews

approvers:
- sig-docs-ja-owners

labels:
- language/ja

@@ -0,0 +1,11 @@
# This is the localization project for Korean.
# Teams and members are visible at https://github.com/orgs/kubernetes/teams.

reviewers:
- sig-docs-ko-reviews

approvers:
- sig-docs-ko-owners

labels:
- language/ko

@@ -0,0 +1,11 @@
# This is the localization project for Chinese.
# Teams and members are visible at https://github.com/orgs/kubernetes/teams.

reviewers:
- sig-docs-zh-reviews

approvers:
- sig-docs-zh-owners

labels:
- language/zh
@@ -0,0 +1,182 @@
<!-- ---
title: " SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
date: 2016-04-18
slug: kubernetes-network-policy-apis
url: /blog/2016/04/Kubernetes-Network-Policy-APIs
--- -->

---
title: "SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
date: 2016-04-18
slug: kubernetes-network-policy-apis
url: /blog/2016/04/Kubernetes-Network-Policy-APIs
---

<!-- _Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the Network-SIG team describing network policy APIs coming in 1.3 - policies for security, isolation and multi-tenancy._ -->

编者按:这一周,我们的封面主题是 [Kubernetes 特别兴趣小组](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs));今天的文章由网络兴趣小组撰写,来谈谈 1.3 版本中即将出现的网络策略 API - 针对安全,隔离和多租户的策略。

<!-- The [Kubernetes network SIG](https://kubernetes.slack.com/messages/sig-network/) has been meeting regularly since late last year to work on bringing network policy to Kubernetes and we’re starting to see the results of this effort. -->

自去年下半年起,[Kubernetes 网络特别兴趣小组](https://kubernetes.slack.com/messages/sig-network/)经常定期开会,讨论如何将网络策略带入到 Kubernetes 之中,现在,我们也将慢慢看到这些工作的成果。

<!-- One problem many users have is that the open access network policy of Kubernetes is not suitable for applications that need more precise control over the traffic that accesses a pod or service. Today, this could be a multi-tier application where traffic is only allowed from a tier’s neighbor. But as new Cloud Native applications are built by composing microservices, the ability to control traffic as it flows among these services becomes even more critical. -->

很多用户经常会碰到的一个问题是, Kubernetes 的开放访问网络策略并不能很好地满足那些需要对 pod 或服务( service )访问进行更为精确控制的场景。今天,这个场景可以是在多层应用中,只允许临近层的访问。然而,随着通过组合微服务来构建云原生应用程序的潮流不断发展,控制流量在不同服务之间的流动会变得越发重要。

<!-- In most IaaS environments (both public and private) this kind of control is provided by allowing VMs to join a ‘security group’ where traffic to members of the group is defined by a network policy or Access Control List (ACL) and enforced by a network packet filter. -->

在大多数的(公共的或私有的) IaaS 环境中,这种网络控制通常是将 VM 和“安全组”结合,其中安全组中成员的通信都是通过一个网络策略或者访问控制表( Access Control List, ACL )来定义,以及借助于网络包过滤器来实现。

<!-- The Network SIG started the effort by identifying [specific use case scenarios](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) that require basic network isolation for enhanced security. Getting the API right for these simple and common use cases is important because they are also the basis for the more sophisticated network policies necessary for multi-tenancy within Kubernetes. -->

“网络特别兴趣小组”刚开始的工作是确定 [特定的使用场景](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) ,这些用例需要基本的网络隔离来提升安全性。
让这些 API 恰如其分地满足简单、共通的用例尤其重要,因为它们将为那些服务于 Kubernetes 内多租户,更为复杂的网络策略奠定基础。

<!-- From these scenarios several possible approaches were considered and a minimal [policy specification](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit) was defined. The basic idea is that if isolation were enabled on a per namespace basis, then specific pods would be selected where specific traffic types would be allowed. -->

根据这些应用场景,我们考虑了几种不同的方法,然后定义了一个最简[策略规范](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit)。
基本的想法是,如果是根据命名空间的不同来进行隔离,那么就会根据所被允许的流量类型的不同,来选择特定的 pods 。

<!-- The simplest way to quickly support this experimental API is in the form of a ThirdPartyResource extension to the API Server, which is possible today in Kubernetes 1.2. -->

快速支持这个实验性 API 的办法是往 API 服务器上加入一个 `ThirdPartyResource` 扩展,这在 Kubernetes 1.2 就能办到。

<!-- If you’re not familiar with how this works, the Kubernetes API can be extended by defining ThirdPartyResources that create a new API endpoint at a specified URL. -->

如果你还不是很熟悉这其中的细节, Kubernetes API 是可以通过定义 `ThirdPartyResources` 扩展在特定的 URL 上创建一个新的 API 端点。

#### third-party-res-def.yaml

```
kind: ThirdPartyResource
apiVersion: extensions/v1beta1
metadata:
  name: network-policy.net.alpha.kubernetes.io
description: "Network policy specification"
versions:
  - name: v1alpha1
```

```
$ kubectl create -f third-party-res-def.yaml
```

<!-- This will create an API endpoint (one for each namespace): -->

这条命令会创建一个 API 端点(每个命名空间各一个):

```
/net.alpha.kubernetes.io/v1alpha1/namespace/default/networkpolicys/
```
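As an editorial aside (not part of the original post): with the ThirdPartyResource registered, the per-namespace endpoint can be exercised through the API server proxy. A hypothetical sketch for a 1.2-era cluster; the port and namespace are placeholders:

```shell
# List network policies in the "default" namespace via the alpha TPR endpoint.
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/apis/net.alpha.kubernetes.io/v1alpha1/namespaces/default/networkpolicys/
```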
<!-- Third party network controllers can now listen on these endpoints and react as necessary when resources are created, modified or deleted. _Note: With the upcoming release of Kubernetes 1.3 - when the Network Policy API is released in beta form - there will be no need to create a ThirdPartyResource API endpoint as shown above._ -->

第三方网络控制器可以监听这些端点,根据资源的创建,修改或者删除作出必要的响应。
_注意:在接下来的 Kubernetes 1.3 发布中, Network Policy API 会以 beta API 的形式出现,这也就不需要像上面那样,创建一个 `ThirdPartyResource` API 端点了。_

<!-- Network isolation is off by default so that all pods can communicate as they normally do. However, it’s important to know that once network isolation is enabled, all traffic to all pods, in all namespaces is blocked, which means that enabling isolation is going to change the behavior of your pods -->

网络隔离默认是关闭的,因而,所有的 pods 之间可以自由地通信。
然而,很重要的一点是,一旦开通了网络隔离,所有命名空间下的所有 pods 之间的通信都会被阻断,换句话说,开通隔离会改变 pods 的行为。

<!-- Network isolation is enabled by defining the _network-isolation_ annotation on namespaces as shown below: -->

网络隔离可以通过在命名空间上定义 `net.alpha.kubernetes.io/network-isolation` 注解来开启或关闭:

```
net.alpha.kubernetes.io/network-isolation: [on | off]
```
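For concreteness (an editorial sketch, not from the original post), the annotation could be applied with kubectl; the namespace name is a placeholder, and this alpha annotation predates the stable NetworkPolicy API:

```shell
# Enable the experimental per-namespace isolation annotation.
kubectl annotate namespace tenant-a "net.alpha.kubernetes.io/network-isolation=on"
```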
<!-- Once network isolation is enabled, explicit network policies **must be applied** to enable pod communication. -->

一旦开通了网络隔离,**必须使用** 显式的网络策略来允许 pod 间的通信。

<!-- A policy specification can be applied to a namespace to define the details of the policy as shown below: -->

一个策略规范可以被应用到一个命名空间中,来定义策略的细节(如下所示):

```
POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/
{
  "kind": "NetworkPolicy",
  "metadata": {
    "name": "pol1"
  },
  "spec": {
    "allowIncoming": {
      "from": [
        {
          "pods": {
            "segment": "frontend"
          }
        }
      ],
      "toPorts": [
        {
          "port": 80,
          "protocol": "TCP"
        }
      ]
    },
    "podSelector": {
      "segment": "backend"
    }
  }
}
```

<!-- In this example, the ‘ **tenant-a** ’ namespace would get policy ‘ **pol1** ’ applied as indicated. Specifically, pods with the **segment** label ‘ **backend** ’ would allow TCP traffic on port 80 from pods with the **segment** label ‘ **frontend** ’ to be received. -->

在这个例子中,**tenant-a** 命名空间将会应用 **pol1** 策略。
具体而言,带有 **segment** 标签为 **backend** 的 pods 会允许 **segment** 标签为 **frontend** 的 pods 访问其端口 80 。

<!-- Today, [Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) and [Calico](http://projectcalico.org/) support network policies applied to namespaces and pods. Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 recently at KubeCon. You can watch their presentations here: [Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([slides](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)), [Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([slides](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)). -->

今天,[Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) 以及 [Calico](http://projectcalico.org/) 都已经支持在命名空间和 pods 中使用网络策略。
而 Cisco 和 VMware 也在努力实现支持之中。
Romana 和 Calico 已经在最近的 KubeCon 中展示了如何在 Kubernetes 1.2 下使用这些功能。
你可以在这里看到他们的演讲:
[Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([幻灯片](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)),
[Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([幻灯片](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).

<!-- **How does it work?** -->

**这是如何工作的**

<!-- Each solution has their their own specific implementation details. Today, they rely on some kind of on-host enforcement mechanism, but future implementations could also be built that apply policy on a hypervisor, or even directly by the network itself. -->

每套解决方案都有自己不同的具体实现。尽管今天,它们都借助于某种主机上( on-host )的实现机制,但未来的实现可以通过将策略应用在 hypervisor 上,亦或是直接应用到网络本身上来达到同样的目的。

<!-- External policy control software (specifics vary across implementations) will watch the new API endpoint for pods being created and/or new policies being applied. When an event occurs that requires policy configuration, the listener will recognize the change and a controller will respond by configuring the interface and applying the policy. The diagram below shows an API listener and policy controller responding to updates by applying a network policy locally via a host agent. The network interface on the pods is configured by a CNI plugin on the host (not shown). -->

外部策略控制软件(不同实现各有不同)可以监听 pods 创建以及新加载策略的 API 端点。
当产生一个需要策略配置的事件之后,监听器会识别这个变化,相应的,控制器会配置接口,应用该策略。
下面的图例展示了 API 监听器和策略控制器是如何通过主机代理在本地应用网络策略的。
这些 pods 的网络接口是通过主机上的 CNI 插件来进行配置的(并未在图中注明)。



<!-- If you’ve been holding back on developing applications with Kubernetes because of network isolation and/or security concerns, these new network policies go a long way to providing the control you need. No need to wait until Kubernetes 1.3 since network policy is available now as an experimental API enabled as a ThirdPartyResource. -->

如果你一直因为网络隔离或安全方面的顾虑,而犹豫要不要使用 Kubernetes 来开发应用程序,这些新的网络策略将会极大地满足你这方面的需求。并不需要等到 Kubernetes 1.3 ,现在就可以通过 `ThirdPartyResource` 的方式来使用这个实验性 API 。

<!-- If you’re interested in Kubernetes and networking, there are several ways to participate - join us at:

- Our [Networking slack channel](https://kubernetes.slack.com/messages/sig-network/)
- Our [Kubernetes Networking Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-network) email list -->

如果你对 Kubernetes 和网络感兴趣,可以通过下面的方式参与、加入其中:

- 我们的[网络 slack channel](https://kubernetes.slack.com/messages/sig-network/)
- 我们的[Kubernetes 特别网络兴趣小组](https://groups.google.com/forum/#!forum/kubernetes-sig-network) 邮件列表

<!-- The Networking “Special Interest Group,” which meets bi-weekly at 3pm (15h00) Pacific Time at [SIG-Networking hangout](https://zoom.us/j/5806599998). -->

网络“特别兴趣小组”每两周的下午三点(太平洋时间)开会,地址是 [SIG-Networking hangout](https://zoom.us/j/5806599998)。

_--Chris Marino, Co-Founder, Pani Networks_
@@ -0,0 +1,129 @@
<!-- ---
title: " How to deploy secure, auditable, and reproducible Kubernetes clusters on AWS "
date: 2016-04-15
slug: kubernetes-on-aws_15
url: /blog/2016/04/Kubernetes-On-Aws_15
--- -->

---
title: " 如何在AWS上部署安全,可审计,可复现的k8s集群 "
date: 2016-04-15
slug: kubernetes-on-aws_15
url: /blog/2016/04/Kubernetes-On-Aws_15
---

<!-- _Today’s guest post is written by Colin Hom, infrastructure engineer at [CoreOS](https://coreos.com/), the company delivering Google’s Infrastructure for Everyone Else (#GIFEE) and running the world's containers securely on CoreOS Linux, Tectonic and Quay._

_Join us at [CoreOS Fest Berlin](https://coreos.com/fest/), the Open Source Distributed Systems Conference, and learn more about CoreOS and Kubernetes._ -->

_今天的客座文章由 Colin Hom 撰写,他是 [CoreOS](https://coreos.com/) 的基础架构工程师。CoreOS 致力于推广谷歌的基础架构模式(Google’s Infrastructure for Everyone Else, #GIFEE),让全世界的容器都能在 CoreOS Linux, Tectonic 和 Quay 上安全运行。_

_加入到我们的[柏林CoreOS盛宴](https://coreos.com/fest/),这是一个开源分布式系统主题的会议,在这里可以了解到更多关于CoreOS和Kubernetes的信息。_

<!-- At CoreOS, we're all about deploying Kubernetes in production at scale. Today we are excited to share a tool that makes deploying Kubernetes on Amazon Web Services (AWS) a breeze. Kube-aws is a tool for deploying auditable and reproducible Kubernetes clusters to AWS, currently used by CoreOS to spin up production clusters. -->

在CoreOS, 我们一直都是在生产环境中大规模部署Kubernetes。今天我们非常兴奋地想分享一款工具,它能让你的Kubernetes生产环境大规模部署更加的轻松。Kube-aws这个工具可以用来在AWS上部署可审计,可复现的k8s集群,而CoreOS本身就在生产环境中使用它。

<!-- Today you might be putting the Kubernetes components together in a more manual way. With this helpful tool, Kubernetes is delivered in a streamlined package to save time, minimize interdependencies and quickly create production-ready deployments. -->

也许今天,你更多的可能是用手工的方式来拼接Kubernetes组件。但有了这个工具之后,Kubernetes可以流水化地打包、交付,节省时间,减少了相互间的依赖,更加快捷地实现生产环境的部署。

<!-- A simple templating system is leveraged to generate cluster configuration as a set of declarative configuration templates that can be version controlled, audited and re-deployed. Since the entirety of the provisioning is by [AWS CloudFormation](https://aws.amazon.com/cloudformation/) and cloud-init, there’s no need for external configuration management tools on your end. Batteries included! -->

借助于一个简单的模板系统,来生成集群配置,这么做是因为一套声明式的配置模板可以版本控制,审计以及重复部署。而且,由于整个创建过程只用到了[AWS CloudFormation](https://aws.amazon.com/cloudformation/) 和 cloud-init,你也就不需要额外用到其它的配置管理工具。开箱即用!

<!-- To skip the talk and go straight to the project, check out [the latest release of kube-aws](https://github.com/coreos/coreos-kubernetes/releases), which supports Kubernetes 1.2.x. To get your cluster running, [check out the documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html). -->

如果要跳过演讲,直接了解这个项目,可以看看[kube-aws的最新发布](https://github.com/coreos/coreos-kubernetes/releases),支持Kubernetes 1.2.x。如果要部署集群,可以参考[文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)。

<!-- **Why kube-aws? Security, auditability and reproducibility** -->

**为什么是kube-aws?安全,可审计,可复现**

<!-- Kube-aws is designed with three central goals in mind. -->

Kube-aws设计初衷有三个目标。

<!-- **Secure** : TLS assets are encrypted via the [AWS Key Management Service (KMS)](https://aws.amazon.com/kms/) before being embedded in the CloudFormation JSON. By managing [IAM policy](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) for the KMS key independently, an operator can decouple operational access to the CloudFormation stack from access to the TLS secrets. -->

**安全** : TLS 资源在嵌入到CloudFormation JSON之前,通过[AWS 秘钥管理服务](https://aws.amazon.com/kms/)加密。通过单独管理KMS密钥的[IAM 策略](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html),可以将CloudFormation栈的访问与TLS秘钥的访问分离开。

<!-- **Auditable** : kube-aws is built around the concept of cluster assets. These configuration and credential assets represent the complete description of the cluster. Since KMS is used to encrypt TLS assets, you can feel free to check your unencrypted stack JSON into version control as well! -->

**可审计** : kube-aws是围绕集群资产的概念来创建。这些配置和账户资产是对集群的完全描述。由于KMS被用来加密TLS资产,因而可以无所顾忌地将未加密的CloudFormation栈 JSON签入到版本控制服务中。

<!-- **Reproducible** : The _--export_ option packs your parameterized cluster definition into a single JSON file which defines a CloudFormation stack. This file can be version controlled and submitted directly to the CloudFormation API via existing deployment tooling, if desired. -->

**可复现** : _--export_ 选项将参数化的集群定义打包成一整个JSON文件,对应一个CloudFormation栈。这个文件可以版本控制,然后,如果需要的话,通过现有的部署工具直接提交给CloudFormation API。

<!-- **How to get started with kube-aws** -->

**如何开始用kube-aws**

<!-- On top of this foundation, kube-aws implements features that make Kubernetes deployments on AWS easier to manage and more flexible. Here are some examples. -->

在此基础之上,kube-aws也实现了一些功能,使得在AWS上部署Kubernetes集群更加容易,灵活。下面是一些例子。

<!-- **Route53 Integration** : Kube-aws can manage your cluster DNS records as part of the provisioning process. -->

**Route53集成** : Kube-aws 可以管理你的集群DNS记录,作为配置过程的一部分。

cluster.yaml
```
externalDNSName: my-cluster.kubernetes.coreos.com

createRecordSet: true

hostedZone: kubernetes.coreos.com

recordSetTTL: 300
```

<!-- **Existing VPC Support** : Deploy your cluster to an existing VPC. -->

**现有VPC支持** : 将集群部署到现有的VPC上。

cluster.yaml
```
vpcId: vpc-xxxxx

routeTableId: rtb-xxxxx
```

<!-- **Validation** : Kube-aws supports validation of cloud-init and CloudFormation definitions, along with any external resources that the cluster stack will integrate with. For example, here’s a cloud-config with a misspelled parameter: -->

**验证** : kube-aws 支持验证 cloud-init 和 CloudFormation定义,以及集群栈会集成用到的外部资源。例如,下面就是一个cloud-config,外带一个拼写错误的参数:

userdata/cloud-config-worker
```
#cloud-config

coreos:
  flannel:
    interrface: $private_ipv4
    etcd_endpoints: {{ .ETCDEndpoints }}
```

```
$ kube-aws validate

> Validating UserData...
Error: cloud-config validation errors:
UserDataWorker: line 4: warning: unrecognized key "interrface"
```

<!-- To get started, check out the [kube-aws documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html). -->

考虑如何起步?看看[kube-aws 文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)!

<!-- **Future Work** -->

**未来的工作**

<!-- As always, the goal with kube-aws is to make deployments that are production ready. While we use kube-aws in production on AWS today, this project is pre-1.0 and there are a number of areas in which kube-aws needs to evolve. -->

一如既往,kube-aws的目标是让生产环境部署更加的简单。尽管我们现在在AWS下使用kube-aws进行生产环境部署,但是这个项目还是pre-1.0,所以还有很多的地方,kube-aws需要考虑、扩展。

<!-- **Fault tolerance** : At CoreOS we believe Kubernetes on AWS is a potent platform for fault-tolerant and self-healing deployments. In the upcoming weeks, kube-aws will be rising to a new challenge: surviving the [Chaos Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey) – control plane and all! -->

**容错** : CoreOS坚信 Kubernetes on AWS是强健的平台,适于容错、自恢复部署。在接下来的几个星期,kube-aws将会迎接新的考验:混世猴子([Chaos Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey))测试 - 控制平面以及全部!

<!-- **Zero-downtime updates** : Updating CoreOS nodes and Kubernetes components can be done without downtime and without interdependency with the correct instance replacement strategy. -->

**零停机更新** : 借助正确的实例替换策略(instance replacement strategy),更新CoreOS节点和Kubernetes组件可以做到不停机,也不存在相互依赖。

<!-- A [github issue](https://github.com/coreos/coreos-kubernetes/issues/340) tracks the work towards this goal. We look forward to seeing you get involved with the project by filing issues or contributing directly. -->

有一个[github issue](https://github.com/coreos/coreos-kubernetes/issues/340)来追踪这些工作进展。我们期待你的参与,提交issue,或是直接贡献。

<!-- _Learn more about Kubernetes and meet the community at [CoreOS Fest Berlin](https://coreos.com/fest/) - May 9-10, 2016_ -->

_想要更多地了解Kubernetes,来[柏林CoreOS盛宴](https://coreos.com/fest/)看看, - 五月 9-10, 2016_

<!-- _– Colin Hom, infrastructure engineer, CoreOS_ -->

_– Colin Hom, 基础架构工程师, CoreOS_
@@ -0,0 +1,81 @@
---
title: "Principles of Container-based Application Design"
date: 2018-03-15
slug: principles-of-container-app-design
url: /blog/2018/03/Principles-Of-Container-App-Design
---

<!-- It's possible nowadays to put almost any application in a container and run it. Creating cloud-native applications, however—containerized applications that are automated and orchestrated effectively by a cloud-native platform such as Kubernetes—requires additional effort. Cloud-native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud-native platforms like Kubernetes impose a set of contracts and constraints on applications. These contracts ensure that applications they run conform to certain constraints and allow the platform to automate application management. -->

现如今,几乎所有的应用程序都可以在容器中运行。但创建云原生应用,即由诸如 Kubernetes 的云原生平台自动化运行、有效编排的容器化应用,则需要额外的工作。
云原生应用需要考虑故障;即使是在底层架构发生故障时也需要可靠地运行。
为了提供这样的功能,像 Kubernetes 这样的云原生平台需要向运行的应用程序强加一些契约和约束。
这些契约确保应用可以在符合某些约束的条件下运行,从而使得平台可以自动化应用管理。

<!-- I've outlined [seven principles][1] for containerized applications to follow in order to be fully cloud-native. -->

我已经为容器化应用如何成为云原生应用概括出了[七项原则][1]。

| ----- |
| ![][2] |
| Container Design Principles |

<!-- These seven principles cover both build time and runtime concerns. -->

这里所述的七项原则涉及到构建时和运行时两类关注点。

<!-- #### Build time -->

#### 构建时

<!-- * **Single Concern:** Each container addresses a single concern and does it well.
* **Self-Containment:** A container relies only on the presence of the Linux kernel. Additional libraries are added when the container is built.
* **Image Immutability:** Containerized applications are meant to be immutable, and once built are not expected to change between different environments. -->

* **单一关注点:** 每个容器只解决一个关注点,并且完成得很好。
* **自包含:** 一个容器只依赖Linux内核。额外的库要求可以在构建容器时加入。
* **镜像不变性:** 容器化的应用意味着不变性,一旦构建完成,不需要根据环境的不同而重新构建。

<!-- #### Runtime -->

#### 运行时

<!-- * **High Observability:** Every container must implement all necessary APIs to help the platform observe and manage the application in the best way possible.
* **Lifecycle Conformance:** A container must have a way to read events coming from the platform and conform by reacting to those events.
* **Process Disposability:** Containerized applications must be as ephemeral as possible and ready to be replaced by another container instance at any point in time.
* **Runtime Confinement:** Every container must declare its resource requirements and restrict resource use to the requirements indicated. -->

* **高可观测性:** 每个容器必须实现所有必要的 API 来帮助平台以最好的方式来观测、管理应用。
* **生命周期一致性:** 一个容器必须要能从平台中获取事件信息,并作出相应的反应。
* **进程易处理性:** 容器化应用的寿命一定要尽可能的短暂,这样,可以随时被另一个容器所替换。
* **运行时限制:** 每个容器都必须要声明自己的资源需求,并将资源使用限制在所需要的范围之内(示例见下文)。
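To make the "Runtime Confinement" principle concrete, here is an editorial sketch (not part of the original post; the deployment name and resource values are illustrative):

```shell
# Declare resource requests and limits so the platform can schedule the
# workload and confine its resource use.
kubectl set resources deployment my-app \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=512Mi
```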
<!-- The build time principles ensure that containers have the right granularity, consistency, and structure in place. The runtime principles dictate what functionalities must be implemented in order for containerized applications to possess cloud-native function. Adhering to these principles helps ensure that your applications are suitable for automation in Kubernetes. -->

构建时原则保证了容器拥有合适的粒度、一致性以及结构。运行时原则明确了容器化应用必须要实现哪些功能才能具备云原生能力。遵循这些原则可以帮助你的应用适应 Kubernetes 上的自动化。

<!-- The white paper is freely available for download: -->

白皮书可以免费下载:

<!-- To read more about designing cloud-native applications for Kubernetes, check out my [Kubernetes Patterns][3] book. -->

想要了解更多关于如何面向 Kubernetes 设计云原生应用,可以看看我的 [Kubernetes 模式][3] 一书。

<!-- — [Bilgin Ibryam][4], Principal Architect, Red Hat -->

— [Bilgin Ibryam][4], 首席架构师, Red Hat

Twitter:
Blog: [http://www.ofbizian.com][5]
Linkedin:

<!-- Bilgin Ibryam (@bibryam) is a principal architect at Red Hat, open source committer at ASF, blogger, author, and speaker. He is the author of Camel Design Patterns and Kubernetes Patterns books. In his day-to-day job, Bilgin enjoys mentoring, training and leading teams to be successful with distributed systems, microservices, containers, and cloud-native applications in general. -->

Bilgin Ibryam (@bibryam) 是 Red Hat 的一名首席架构师, ASF 的开源贡献者,博主,作者以及演讲者。
他是 Camel 设计模式、 Kubernetes 模式两本书的作者。在他的日常工作中,他非常享受指导、培训以及帮助各个团队更加成功地使用分布式系统、微服务、容器,以及云原生应用。

[1]: https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper
[2]: https://lh5.googleusercontent.com/1XqojkVC0CET1yKCJqZ3-0VWxJ3W8Q74zPLlqnn6eHSJsjHOiBTB7EGUX5o_BOKumgfkxVdgBeLyoyMfMIXwVm9p2QXkq_RRy2mDJG1qEExJDculYL5PciYcWfPAKxF2-DGIdiLw
[3]: http://leanpub.com/k8spatterns/
[4]: http://twitter.com/bibryam
[5]: http://www.ofbizian.com/
@ -0,0 +1,676 @@
|
|||
---
|
||||
|
||||
layout: blog
|
||||
|
||||
title: 'Airflow on Kubernetes (Part 1): A Different Kind of Operator'
|
||||
|
||||
date: 2018-06-28
|
||||
|
||||
title: 'Airflow在Kubernetes中的使用(第一部分):一种不同的操作器'
|
||||
|
||||
cn-approvers:
|
||||
|
||||
- congfairy
|
||||
|
||||
---
|
||||
|
||||
|
||||
<!--
|
||||
|
||||
Author: Daniel Imberman (Bloomberg LP)
|
||||
|
||||
-->
|
||||
|
||||
|
||||
|
||||
作者: Daniel Imberman (Bloomberg LP)
|
||||
|
||||
|
||||
|
||||
<!--
## Introduction

As part of Bloomberg's continued commitment to developing the Kubernetes ecosystem, we are excited to announce the Kubernetes Airflow Operator; a mechanism for Apache Airflow, a popular workflow orchestration framework to natively launch arbitrary Kubernetes Pods using the Kubernetes API.
-->

## 介绍

作为 Bloomberg [继续致力于开发 Kubernetes 生态系统](https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/)的一部分,我们很高兴地宣布 Kubernetes Airflow Operator 的发布;这是 [Apache Airflow](https://airflow.apache.org/)(一种流行的工作流编排框架)的一种机制,它可以使用 Kubernetes API 原生地启动任意的 Kubernetes Pod。
<!--
## What Is Airflow?

Apache Airflow is one realization of the DevOps philosophy of "Configuration As Code." Airflow allows users to launch multi-step pipelines using a simple Python object DAG (Directed Acyclic Graph). You can define dependencies, programmatically construct complex workflows, and monitor scheduled jobs in an easy to read UI.
-->

## 什么是 Airflow?

Apache Airflow 是 DevOps “Configuration As Code”(配置即代码)理念的一种实现。Airflow 允许用户使用简单的 Python 对象 DAG(有向无环图)启动多步骤流水线。您可以定义依赖关系,以编程方式构建复杂的工作流,并在易于阅读的 UI 中监视调度的作业。

<img src="/images/blog/2018-05-25-Airflow-Kubernetes-Operator/2018-05-25-airflow_dags.png" width="85%" alt="Airflow DAGs" />

<img src="/images/blog/2018-05-25-Airflow-Kubernetes-Operator/2018-05-25-airflow.png" width="85%" alt="Airflow UI" />
<!--
## Why Airflow on Kubernetes?

Since its inception, Airflow's greatest strength has been its flexibility. Airflow offers a wide range of integrations for services ranging from Spark and HBase, to services on various cloud providers. Airflow also offers easy extensibility through its plug-in framework. However, one limitation of the project is that Airflow users are confined to the frameworks and clients that exist on the Airflow worker at the moment of execution. A single organization can have varied Airflow workflows ranging from data science pipelines to application deployments. This difference in use-case creates issues in dependency management as both teams might use vastly different libraries for their workflows.

To address this issue, we've utilized Kubernetes to allow users to launch arbitrary Kubernetes pods and configurations. Airflow users can now have full power over their run-time environments, resources, and secrets, basically turning Airflow into an "any job you want" workflow orchestrator.
-->

## 为什么在 Kubernetes 上使用 Airflow?

自诞生以来,Airflow 的最大优势在于其灵活性。Airflow 提供广泛的服务集成,包括 Spark 和 HBase,以及各种云提供商的服务。Airflow 还通过其插件框架提供轻松的可扩展性。但是,该项目的一个限制是,Airflow 用户只能使用执行时 Airflow 工作节点(worker)上已存在的框架和客户端。单个组织可以拥有各种各样的 Airflow 工作流,范围从数据科学流水线到应用程序部署。用例上的这种差异会在依赖管理中产生问题,因为两个团队可能会在其工作流中使用截然不同的库。

为了解决这个问题,我们利用 Kubernetes 允许用户启动任意的 Kubernetes Pod 和配置。Airflow 用户现在可以对其运行时环境、资源和 Secret 拥有完全的控制权,基本上将 Airflow 变成了一个“想运行什么任务都可以”的工作流编排器。
<!--
## The Kubernetes Operator

Before we move any further, we should clarify that an Operator in Airflow is a task definition. When a user creates a DAG, they would use an operator like the "SparkSubmitOperator" or the "PythonOperator" to submit/monitor a Spark job or a Python function respectively. Airflow comes with built-in operators for frameworks like Apache Spark, BigQuery, Hive, and EMR. It also offers a Plugins entrypoint that allows DevOps engineers to develop their own connectors.

Airflow users are always looking for ways to make deployments and ETL pipelines simpler to manage. Any opportunity to decouple pipeline steps, while increasing monitoring, can reduce future outages and fire-fights. The following is a list of benefits provided by the Airflow Kubernetes Operator:
-->

## Kubernetes Operator

在进一步讨论之前,我们应该澄清:Airflow 中的 [Operator](https://airflow.apache.org/concepts.html#operators) 是一个任务定义。当用户创建 DAG 时,他们会使用像 “SparkSubmitOperator” 或 “PythonOperator” 这样的 operator 来分别提交/监视 Spark 作业或 Python 函数。Airflow 为 Apache Spark、BigQuery、Hive 和 EMR 等框架提供了内置的 operator。它还提供了一个插件入口点,允许 DevOps 工程师开发自己的连接器。

Airflow 用户一直在寻找让部署和 ETL 流水线更易于管理的方法。任何在增强监控的同时解耦流水线步骤的机会,都可以减少未来的故障和救火工作。以下是 Airflow Kubernetes Operator 提供的好处:
<!--
* Increased flexibility for deployments:

Airflow's plugin API has always offered a significant boon to engineers wishing to test new functionalities within their DAGs. On the downside, whenever a developer wanted to create a new operator, they had to develop an entirely new plugin. Now, any task that can be run within a Docker container is accessible through the exact same operator, with no extra Airflow code to maintain.
-->

* 提高部署灵活性:

Airflow 的插件 API 一直为希望在其 DAG 中测试新功能的工程师提供了重要的便利。不利的一面是,每当开发人员想要创建一个新的 operator 时,他们就必须开发一个全新的插件。现在,任何可以在 Docker 容器中运行的任务都可以通过完全相同的 operator 访问,而无需维护额外的 Airflow 代码。
<!--
* Flexibility of configurations and dependencies:

For operators that are run within static Airflow workers, dependency management can become quite difficult. If a developer wants to run one task that requires SciPy and another that requires NumPy, the developer would have to either maintain both dependencies within all Airflow workers or offload the task to an external machine (which can cause bugs if that external machine changes in an untracked manner). Custom Docker images allow users to ensure that the tasks environment, configuration, and dependencies are completely idempotent.
-->

* 配置和依赖的灵活性:

对于在静态 Airflow 工作节点中运行的 operator,依赖管理可能变得非常困难。如果开发人员想要运行一个需要 [SciPy](https://www.scipy.org) 的任务和另一个需要 [NumPy](http://www.numpy.org) 的任务,开发人员要么必须在所有 Airflow 工作节点中同时维护这两个依赖,要么将任务卸载到外部计算机上(如果该外部计算机以未被跟踪的方式发生变更,则可能导致错误)。自定义 Docker 镜像允许用户确保任务的环境、配置和依赖关系是完全幂等的。
<!--
* Usage of kubernetes secrets for added security:

Handling sensitive data is a core responsibility of any DevOps engineer. At every opportunity, Airflow users want to isolate any API keys, database passwords, and login credentials on a strict need-to-know basis. With the Kubernetes operator, users can utilize the Kubernetes Vault technology to store all sensitive data. This means that the Airflow workers will never have access to this information, and can simply request that pods be built with only the secrets they need.
-->

* 使用 Kubernetes Secret 以增加安全性:

处理敏感数据是任何 DevOps 工程师的核心职责。Airflow 用户希望随时按照严格的“按需知密”原则隔离所有 API 密钥、数据库密码和登录凭据。使用 Kubernetes operator,用户可以利用 Kubernetes 的保险库(Vault)技术存储所有敏感数据。这意味着 Airflow 工作节点将永远无法访问这些信息,而只需请求按照各自所需的 Secret 来构建 Pod,如下面的示例所示。
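下面是一个极简的示意(Secret 名称与键值均为假设值),展示如何用 kubectl 创建这样一个 Secret,供 Pod 按需挂载:

```shell
# 仅为示意:创建一个保存数据库连接串的 Secret(名称与取值均为假设)
kubectl create secret generic airflow-secrets \
  --from-literal=sql_alchemy_conn='postgresql+psycopg2://user:pass@host/db'
```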
<!--
# Architecture

The Kubernetes Operator uses the Kubernetes Python Client to generate a request that is processed by the APIServer (1). Kubernetes will then launch your pod with whatever specs you've defined (2). Images will be loaded with all the necessary environment variables, secrets and dependencies, enacting a single command. Once the job is launched, the operator only needs to monitor the health of track logs (3). Users will have the choice of gathering logs locally to the scheduler or to any distributed logging service currently in their Kubernetes cluster.
-->

# 架构

<img src="/images/blog/2018-05-25-Airflow-Kubernetes-Operator/2018-05-25-airflow-architecture.png" width="85%" alt="Airflow Architecture" />

Kubernetes Operator 使用 [Kubernetes Python 客户端](https://github.com/kubernetes-client/python)生成由 APIServer 处理的请求(1)。然后,Kubernetes 将使用您定义的规格启动您的 Pod(2)。镜像中将加载所有必要的环境变量、Secret 和依赖项,并执行单个命令。作业启动后,operator 只需要监视跟踪日志的健康状况(3)。用户可以选择将日志收集到本地调度程序,或者收集到当前 Kubernetes 集群中的任何分布式日志服务。
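下面是一个极简的 Python 示意(仅作说明,Pod 名称、镜像与命名空间均为假设值),展示这类“通过 Kubernetes Python 客户端向 APIServer 发起创建 Pod 请求”的调用方式:

```python
# 安装依赖:pip install kubernetes
from kubernetes import client, config

# 从本地 kubeconfig 加载集群访问配置
config.load_kube_config()

# 定义一个一次性 Pod,运行单条命令后退出
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="airflow-task-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="task",
                image="python:3.6",
                command=["python", "-c", "print('hello world')"],
            )
        ],
    ),
)

# 向 APIServer 发送创建 Pod 的请求(对应上文的步骤 1)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```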
<!--
# Using the Kubernetes Operator

## A Basic Example

The following DAG is probably the simplest example we could write to show how the Kubernetes Operator works. This DAG creates two pods on Kubernetes: a Linux distro with Python and a base Ubuntu distro without it. The Python pod will run the Python request correctly, while the one without Python will report a failure to the user. If the Operator is working correctly, the passing-task pod should complete, while the failing-task pod returns a failure to the Airflow webserver.
-->

# 使用 Kubernetes Operator

## 一个基本的例子

以下 DAG 可能是我们能编写的、用来展示 Kubernetes Operator 工作原理的最简单的示例。这个 DAG 在 Kubernetes 上创建了两个 Pod:一个带有 Python 的 Linux 发行版和一个不带 Python 的基础 Ubuntu 发行版。带有 Python 的 Pod 将正确运行 Python 请求,而不带 Python 的那个将向用户报告失败。如果 Operator 工作正常,“passing-task” Pod 应该顺利完成,而 “failing-task” Pod 则向 Airflow Web 服务器返回失败。
```python
from airflow import DAG
from datetime import datetime, timedelta
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime.utcnow(),
    'email': ['airflow@example.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5)
}

dag = DAG(
    'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10))

start = DummyOperator(task_id='run_this_first', dag=dag)

# 该任务的镜像自带 Python,预期会成功
passing = KubernetesPodOperator(namespace='default',
                                image="python:3.6",
                                cmds=["python", "-c"],
                                arguments=["print('hello world')"],
                                labels={"foo": "bar"},
                                name="passing-test",
                                task_id="passing-task",
                                get_logs=True,
                                dag=dag
                                )

# 该任务的镜像中没有 Python,预期会失败
failing = KubernetesPodOperator(namespace='default',
                                image="ubuntu:16.04",
                                cmds=["python", "-c"],
                                arguments=["print('hello world')"],
                                labels={"foo": "bar"},
                                name="fail",
                                task_id="failing-task",
                                get_logs=True,
                                dag=dag
                                )

passing.set_upstream(start)
failing.set_upstream(start)
```
<!--
## But how does this relate to my workflow?

While this example only uses basic images, the magic of Docker is that this same DAG will work for any image/command pairing you want. The following is a recommended CI/CD pipeline to run production-ready code on an Airflow DAG.

### 1: PR in github

Use Travis or Jenkins to run unit and integration tests, bribe your favorite team-mate into PR'ing your code, and merge to the master branch to trigger an automated CI build.

### 2: CI/CD via Jenkins -> Docker Image

Generate your Docker images and bump release version within your Jenkins build.

### 3: Airflow launches task

Finally, update your DAGs to reflect the new release version and you should be ready to go!
-->

## 但这与我的工作流程有什么关系?

虽然这个例子只使用了基础镜像,但 Docker 的神奇之处在于,同一个 DAG 可以用于您想要的任何镜像/命令组合。以下是推荐的 CI/CD 流水线,用于在 Airflow DAG 上运行生产就绪的代码。

### 1:github 中的 PR

使用 Travis 或 Jenkins 运行单元测试和集成测试,请您的同事帮忙评审并提交(PR)您的代码,然后合并到主分支以触发自动化的 CI 构建。

### 2:通过 Jenkins 进行 CI/CD -> Docker 镜像

[在 Jenkins 构建中生成 Docker 镜像并递增发布版本号](https://getintodevops.com/blog/building-your-first-docker-image-with-jenkins-2-guide-for-developers)。

### 3:Airflow 启动任务

最后,更新您的 DAG 以反映新的发布版本,然后就可以开始了!
```python
production_task = KubernetesPodOperator(namespace='default',
                                        # image="my-production-job:release-1.0.1", <-- 旧版本
                                        image="my-production-job:release-1.0.2",
                                        cmds=["python", "-c"],
                                        arguments=["print('hello world')"],
                                        name="fail",
                                        task_id="failing-task",
                                        get_logs=True,
                                        dag=dag
                                        )
```
<!--
# Launching a test deployment

Since the Kubernetes Operator is not yet released, we haven't released an official helm chart or operator (however both are currently in progress). However, we are including instructions for a basic deployment below and are actively looking for foolhardy beta testers to try this new feature. To try this system out please follow these steps:

## Step 1: Set your kubeconfig to point to a kubernetes cluster

## Step 2: Clone the Airflow Repo:

Run git clone https://github.com/apache/incubator-airflow.git to clone the official Airflow repo.

## Step 3: Run

To run this basic deployment, we are co-opting the integration testing script that we currently use for the Kubernetes Executor (which will be explained in the next article of this series). To launch this deployment, run these three commands:
-->

# 启动测试部署

由于 Kubernetes Operator 尚未正式发布,我们也还没有发布官方的 [helm](https://helm.sh/) chart 或 operator(但两者目前都在进行中)。不过,我们在下面列出了基本部署的说明,并且正在积极寻找大胆的 beta 测试人员来尝试这一新功能。要试用此系统,请按以下步骤操作:

## 步骤 1:将 kubeconfig 设置为指向一个 kubernetes 集群
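一个极简的示意(上下文名称 my-cluster 为假设值),用于确认 kubectl 当前指向的集群:

```shell
# 切换到目标集群的上下文并确认可以连通
kubectl config use-context my-cluster
kubectl cluster-info
```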
## 步骤 2:Clone Airflow 仓库

运行 `git clone https://github.com/apache/incubator-airflow.git` 来 clone 官方 Airflow 仓库。

## 步骤 3:运行

为了运行这个基本部署,我们借用了目前用于 Kubernetes Executor 的集成测试脚本(将在本系列的下一篇文章中对其进行解释)。要启动此部署,请运行以下三个命令:
```shell
sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml
./scripts/ci/kubernetes/docker/build.sh
./scripts/ci/kubernetes/kube/deploy.sh
```
<!--
Before we move on, let's discuss what these commands are doing:

### sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml

The Kubernetes Executor is another Airflow feature that allows for dynamic allocation of tasks as idempotent pods. The reason we are switching this to the LocalExecutor is simply to introduce one feature at a time. You are more than welcome to skip this step if you would like to try the Kubernetes Executor, however we will go into more detail in a future article.

### ./scripts/ci/kubernetes/docker/build.sh

This script will tar the Airflow master source code and build a Docker container based on the Airflow distribution.

### ./scripts/ci/kubernetes/kube/deploy.sh

Finally, we create a full Airflow deployment on your cluster. This includes Airflow configs, a postgres backend, the webserver + scheduler, and all necessary services between. One thing to note is that the role binding supplied is a cluster-admin, so if you do not have that level of permission on the cluster, you can modify this at scripts/ci/kubernetes/kube/airflow.yaml

## Step 4: Log into your webserver

Now that your Airflow instance is running let's take a look at the UI! The UI lives in port 8080 of the Airflow pod, so simply run
-->

在我们继续之前,让我们讨论一下这些命令都在做什么:

### sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml

Kubernetes Executor 是另一项 Airflow 功能,允许以幂等 Pod 的形式动态分配任务。我们将其切换到 LocalExecutor 的原因只是为了一次只引入一个功能。如果您想尝试 Kubernetes Executor,欢迎您跳过此步骤,我们将在以后的文章中对它进行更详细的介绍。

### ./scripts/ci/kubernetes/docker/build.sh

此脚本将对 Airflow 主分支源代码进行打包,并基于 Airflow 的发行版构建一个 Docker 容器。

### ./scripts/ci/kubernetes/kube/deploy.sh

最后,我们在您的集群上创建完整的 Airflow 部署。这包括 Airflow 配置、postgres 后端、webserver + 调度程序,以及它们之间所有必要的服务。需要注意的一点是,这里提供的角色绑定是 cluster-admin,因此如果您在集群上没有该级别的权限,可以在 scripts/ci/kubernetes/kube/airflow.yaml 中进行修改。

## 步骤 4:登录您的 Web 服务器

现在您的 Airflow 实例已在运行,让我们来看看 UI!UI 位于 Airflow Pod 的 8080 端口,因此只需运行:
```shell
WEB=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep "airflow" | head -1)
kubectl port-forward $WEB 8080:8080
```
<!--
Now the Airflow UI will exist on http://localhost:8080. To log in simply enter airflow/airflow and you should have full access to the Airflow web UI.

## Step 5: Upload a test document

To modify/add your own DAGs, you can use kubectl cp to upload local files into the DAG folder of the Airflow scheduler. Airflow will then read the new DAG and automatically upload it to its system. The following command will upload any local file into the correct directory:
-->

现在,Airflow UI 将出现在 http://localhost:8080 上。要登录,只需输入 `airflow`/`airflow`,您就可以完全访问 Airflow Web UI。

## 步骤 5:上传测试文档

要修改/添加自己的 DAG,可以使用 `kubectl cp` 将本地文件上传到 Airflow 调度程序的 DAG 文件夹中。然后,Airflow 将读取新的 DAG 并自动将其加载到自己的系统中。以下命令可以将任何本地文件上传到正确的目录中:

`kubectl cp <local file> <namespace>/<pod>:/root/airflow/dags -c scheduler`
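例如(文件名、命名空间和 Pod 名称均为假设值):

```shell
# 把本地的 my_dag.py 复制到 scheduler 容器的 DAG 目录中
kubectl cp my_dag.py default/airflow-pod-1234:/root/airflow/dags -c scheduler
```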
<!--
## Step 6: Enjoy!

# So when will I be able to use this?

While this feature is still in the early stages, we hope to see it released for wide release in the next few months.

# Get Involved

This feature is just the beginning of multiple major efforts to improve Apache Airflow integration into Kubernetes. The Kubernetes Operator has been merged into the 1.10 release branch of Airflow (the executor in experimental mode), along with a fully k8s native scheduler called the Kubernetes Executor (article to come). These features are still in a stage where early adopters/contributors can have a huge influence on the future of these features.

For those interested in joining these efforts, I'd recommend checking out these steps:

* Join the airflow-dev mailing list at dev@airflow.apache.org.
* File an issue in Apache Airflow JIRA
* Join our SIG-BigData meetings on Wednesdays at 10am PST.
* Reach us on slack at #sig-big-data on kubernetes.slack.com

Special thanks to the Apache Airflow and Kubernetes communities, particularly Grant Nicholas, Ben Goldberg, Anirudh Ramanathan, Fokko Dreisprong, and Bolke de Bruin, for your awesome help on these features as well as our future efforts.
-->

## 步骤 6:尽情使用!

# 那么,我什么时候可以使用它?

虽然此功能仍处于早期阶段,但我们希望在未来几个月内将其正式发布,以供广泛使用。

# 参与其中

此功能只是将 Apache Airflow 与 Kubernetes 集成的多项重大工作的开始。Kubernetes Operator 已合并到 [Airflow 的 1.10 发布分支](https://github.com/apache/incubator-airflow/tree/v1-10-test)(executor 处于实验模式),随之合并的还有一个完全 k8s 原生的调度器,称为 Kubernetes Executor(相关文章即将发布)。这些功能仍处于早期采用者/贡献者可以对其未来产生巨大影响的阶段。

对于有兴趣参与这些工作的人,我建议按照以下步骤:

* 加入 airflow-dev 邮件列表:dev@airflow.apache.org。
* 在 [Apache Airflow JIRA](https://issues.apache.org/jira/projects/AIRFLOW/issues/) 中提交问题。
* 在太平洋标准时间周三上午 10 点加入我们的 SIG-BigData 会议。
* 在 kubernetes.slack.com 上的 #sig-big-data 频道找到我们。

特别感谢 Apache Airflow 和 Kubernetes 社区,特别是 Grant Nicholas、Ben Goldberg、Anirudh Ramanathan、Fokko Dreisprong 和 Bolke de Bruin,感谢你们对这些功能以及我们未来工作的大力帮助。
@ -0,0 +1,401 @@
|
|||
---
|
||||
title: 基于IPVS的集群内部负载均衡
|
||||
cn-approvers:
|
||||
- congfairy
|
||||
layout: blog
|
||||
title: 'IPVS-Based In-Cluster Load Balancing Deep Dive'
|
||||
date: 2018-07-09
|
||||
---
|
||||
|
||||
<!--
|
||||
|
||||
Author: Jun Du(Huawei), Haibin Xie(Huawei), Wei Liang(Huawei)
|
||||
|
||||
Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.11
|
||||
|
||||
-->
|
||||
|
||||
作者: Jun Du(华为), Haibin Xie(华为), Wei Liang(华为)
|
||||
|
||||
注意: 这篇文章出自 系列深度文章 介绍 Kubernetes 1.11 的新特性
|
||||
|
||||
<!--
## Introduction

Per the Kubernetes 1.11 release blog post , we announced that IPVS-Based In-Cluster Service Load Balancing graduates to General Availability. In this blog, we will take you through a deep dive of the feature.
-->

## 介绍

根据 Kubernetes 1.11 发布的博客文章,我们宣布基于 IPVS 的集群内部服务负载均衡已进入 GA(正式可用)阶段。在这篇博客中,我们将带您深入了解该功能。

<!--
## What Is IPVS?

IPVS (IP Virtual Server) is built on top of the Netfilter and implements transport-layer load balancing as part of the Linux kernel.

IPVS is incorporated into the LVS (Linux Virtual Server), where it runs on a host and acts as a load balancer in front of a cluster of real servers. IPVS can direct requests for TCP- and UDP-based services to the real servers, and make services of the real servers appear as virtual services on a single IP address. Therefore, IPVS naturally supports Kubernetes Service.
-->

## 什么是 IPVS?

IPVS(IP Virtual Server)构建在 Netfilter 之上,作为 Linux 内核的一部分实现传输层负载均衡。

IPVS 集成在 LVS(Linux Virtual Server,Linux 虚拟服务器)中,它在主机上运行,并在真实服务器集群前充当负载均衡器。IPVS 可以将基于 TCP 和 UDP 服务的请求定向到真实服务器,并使真实服务器的服务在单个 IP 地址上显示为虚拟服务。因此,IPVS 天然支持 Kubernetes Service。
<!--
## Why IPVS for Kubernetes?

As Kubernetes grows in usage, the scalability of its resources becomes more and more important. In particular, the scalability of services is paramount to the adoption of Kubernetes by developers/companies running large workloads.

Kube-proxy, the building block of service routing has relied on the battle-hardened iptables to implement the core supported Service types such as ClusterIP and NodePort. However, iptables struggles to scale to tens of thousands of Services because it is designed purely for firewalling purposes and is based on in-kernel rule lists.

Even though Kubernetes already support 5000 nodes in release v1.6, the kube-proxy with iptables is actually a bottleneck to scale the cluster to 5000 nodes. One example is that with NodePort Service in a 5000-node cluster, if we have 2000 services and each services have 10 pods, this will cause at least 20000 iptable records on each worker node, and this can make the kernel pretty busy.

On the other hand, using IPVS-based in-cluster service load balancing can help a lot for such cases. IPVS is specifically designed for load balancing and uses more efficient data structures (hash tables) allowing for almost unlimited scale under the hood.
-->

## 为什么为 Kubernetes 选择 IPVS?

随着 Kubernetes 使用量的增长,其资源的可扩展性变得越来越重要。特别是,对于运行大型工作负载的开发人员/公司而言,服务的可扩展性对采用 Kubernetes 至关重要。

Kube-proxy 是服务路由的构建模块,它一直依赖久经考验的 iptables 来实现 ClusterIP 和 NodePort 等核心服务类型的支持。但是,iptables 难以扩展到成千上万的服务,因为它纯粹是为防火墙而设计的,并且基于内核内的规则列表。

尽管 Kubernetes 在 v1.6 版本中已经支持 5000 个节点,但使用 iptables 的 kube-proxy 实际上是将集群扩展到 5000 个节点的瓶颈。一个例子是,在 5000 节点的集群中使用 NodePort 服务,如果我们有 2000 个服务并且每个服务有 10 个 pod,这将在每个工作节点上产生至少 20000 条 iptables 记录,这可能使内核非常繁忙。

另一方面,使用基于 IPVS 的集群内服务负载均衡可以在这种情况下提供很大帮助。IPVS 专门为负载均衡而设计,并使用更高效的数据结构(哈希表),在底层允许几乎无限的规模扩张。
<!--
## IPVS-based Kube-proxy

### Parameter Changes

Parameter: --proxy-mode In addition to existing userspace and iptables modes, IPVS mode is configured via --proxy-mode=ipvs. It implicitly uses IPVS NAT mode for service port mapping.
-->

## 基于 IPVS 的 Kube-proxy

### 参数更改

参数:`--proxy-mode` 除了现有的 userspace 和 iptables 模式,IPVS 模式通过 `--proxy-mode=ipvs` 进行配置。它隐式使用 IPVS NAT 模式进行服务端口映射。
<!--
Parameter: --ipvs-scheduler

A new kube-proxy parameter has been added to specify the IPVS load balancing algorithm, with the parameter being --ipvs-scheduler. If it’s not configured, then round-robin (rr) is the default value.

- rr: round-robin
- lc: least connection
- dh: destination hashing
- sh: source hashing
- sed: shortest expected delay
- nq: never queue

In the future, we can implement Service specific scheduler (potentially via annotation), which has higher priority and overwrites the value.
-->

参数:`--ipvs-scheduler`

新增了一个 kube-proxy 参数来指定 IPVS 负载均衡算法,该参数为 `--ipvs-scheduler`。如果未配置,则默认为 round-robin 算法(rr)。

- rr: round-robin(轮询)
- lc: least connection(最少连接)
- dh: destination hashing(目标地址哈希)
- sh: source hashing(源地址哈希)
- sed: shortest expected delay(最短预期延迟)
- nq: never queue(从不排队)

将来,我们可以实现特定于服务的调度器(可能通过注解),它具有更高的优先级并会覆盖该值。
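下面是一个极简的启动示意(参数组合为假设,仅展示标志的用法):

```shell
# 以 IPVS 模式启动 kube-proxy,并使用最少连接调度算法
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc
```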
<!--
Parameter: --cleanup-ipvs Similar to the --cleanup-iptables parameter, if true, cleanup IPVS configuration and IPTables rules that are created in IPVS mode.

Parameter: --ipvs-sync-period Maximum interval of how often IPVS rules are refreshed (e.g. '5s', '1m'). Must be greater than 0.

Parameter: --ipvs-min-sync-period Minimum interval of how often the IPVS rules are refreshed (e.g. '5s', '1m'). Must be greater than 0.
-->

参数:`--cleanup-ipvs` 类似于 `--cleanup-iptables` 参数,如果为 true,则清除在 IPVS 模式下创建的 IPVS 配置和 iptables 规则。

参数:`--ipvs-sync-period` 刷新 IPVS 规则的最大间隔时间(例如 '5s'、'1m')。必须大于 0。

参数:`--ipvs-min-sync-period` 刷新 IPVS 规则的最小间隔时间(例如 '5s'、'1m')。必须大于 0。
<!--
Parameter: --ipvs-exclude-cidrs A comma-separated list of CIDR's which the IPVS proxier should not touch when cleaning up IPVS rules because IPVS proxier can't distinguish kube-proxy created IPVS rules from user original IPVS rules. If you are using IPVS proxier with your own IPVS rules in the environment, this parameter should be specified, otherwise your original rule will be cleaned.
-->

参数:`--ipvs-exclude-cidrs` 一个逗号分隔的 CIDR 列表,IPVS 代理在清理 IPVS 规则时不应触及这些网段内的规则,因为 IPVS 代理无法区分 kube-proxy 创建的 IPVS 规则和用户原有的 IPVS 规则。如果您在环境中同时使用 IPVS 代理和您自己的 IPVS 规则,则应指定此参数,否则您原有的规则会被清除。
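例如(CIDR 取值为假设):

```shell
# 告知 kube-proxy 不要清理这些网段内用户自建的 IPVS 规则
kube-proxy --proxy-mode=ipvs --ipvs-exclude-cidrs=172.16.0.0/16,192.168.0.0/24
```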
<!--
## Design Considerations

### IPVS Service Network Topology

When creating a ClusterIP type Service, IPVS proxier will do the following three things:

- Make sure a dummy interface exists in the node, defaults to kube-ipvs0
- Bind Service IP addresses to the dummy interface
- Create IPVS virtual servers for each Service IP address respectively
-->

## 设计注意事项

### IPVS 服务网络拓扑

创建 ClusterIP 类型服务时,IPVS 代理将执行以下三项操作:

- 确保节点中存在一个虚拟接口(dummy interface),默认为 kube-ipvs0
- 将服务 IP 地址绑定到该虚拟接口
- 分别为每个服务 IP 地址创建 IPVS 虚拟服务器
<!--
Here comes an example:

# kubectl describe svc nginx-service
Name:              nginx-service
...
Type:              ClusterIP
IP:                10.102.128.4
Port:              http  3080/TCP
Endpoints:         10.244.0.235:8080,10.244.1.237:8080
Session Affinity:  None

# ip addr
...
73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
    inet 10.102.128.4/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.102.128.4:3080 rr
  -> 10.244.0.235:8080            Masq    1      0          0
  -> 10.244.1.237:8080            Masq    1      0          0
-->

这是一个例子:

# kubectl describe svc nginx-service
Name:              nginx-service
...
Type:              ClusterIP
IP:                10.102.128.4
Port:              http  3080/TCP
Endpoints:         10.244.0.235:8080,10.244.1.237:8080
Session Affinity:  None

# ip addr
...
73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
    inet 10.102.128.4/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.102.128.4:3080 rr
  -> 10.244.0.235:8080            Masq    1      0          0
  -> 10.244.1.237:8080            Masq    1      0          0
<!--
Please note that the relationship between a Kubernetes Service and IPVS virtual servers is 1:N. For example, consider a Kubernetes Service that has more than one IP address. An External IP type Service has two IP addresses - ClusterIP and External IP. Then the IPVS proxier will create 2 IPVS virtual servers - one for Cluster IP and another one for External IP. The relationship between a Kubernetes Endpoint (each IP+Port pair) and an IPVS virtual server is 1:1.

Deleting of a Kubernetes service will trigger deletion of the corresponding IPVS virtual server, IPVS real servers and its IP addresses bound to the dummy interface.

### Port Mapping

There are three proxy modes in IPVS: NAT (masq), IPIP and DR. Only NAT mode supports port mapping. Kube-proxy leverages NAT mode for port mapping. The following example shows IPVS mapping Service port 3080 to Pod port 8080.
-->

请注意,Kubernetes 服务和 IPVS 虚拟服务器之间的关系是“1:N”。例如,考虑具有多个 IP 地址的 Kubernetes 服务:External IP 类型的服务有两个 IP 地址(ClusterIP 和 External IP),此时 IPVS 代理将创建 2 个 IPVS 虚拟服务器,一个用于 Cluster IP,另一个用于 External IP。Kubernetes 的 Endpoint(每个 IP+端口 对)与 IPVS 虚拟服务器之间的关系是“1:1”。

删除 Kubernetes 服务将触发删除相应的 IPVS 虚拟服务器、IPVS 真实服务器及其绑定到虚拟接口的 IP 地址。

### 端口映射

IPVS 中有三种代理模式:NAT(masq)、IPIP 和 DR,其中只有 NAT 模式支持端口映射。Kube-proxy 利用 NAT 模式进行端口映射。以下示例展示了 IPVS 将服务端口 3080 映射到 Pod 端口 8080:

TCP  10.102.128.4:3080 rr
  -> 10.244.0.235:8080            Masq    1      0          0
  -> 10.244.1.237:8080            Masq    1      0          0
<!--
### Session Affinity

IPVS supports client IP session affinity (persistent connection). When a Service specifies session affinity, the IPVS proxier will set a timeout value (180min=10800s by default) in the IPVS virtual server. For example:
-->

### 会话保持

IPVS 支持客户端 IP 会话保持(持久连接)。当服务指定了会话保持时,IPVS 代理将在 IPVS 虚拟服务器中设置一个超时值(默认为 180 分钟,即 10800 秒)。例如:

# kubectl describe svc nginx-service
Name:              nginx-service
...
IP:                10.102.128.4
Port:              http  3080/TCP
Session Affinity:  ClientIP

# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.102.128.4:3080 rr persistent 10800
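一个极简的操作示意(服务名沿用上文的 nginx-service 示例):

```shell
# 为服务开启基于客户端 IP 的会话保持
kubectl patch svc nginx-service -p '{"spec":{"sessionAffinity":"ClientIP"}}'
```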
<!--
### Iptables & Ipset in IPVS Proxier

IPVS is for load balancing and it can't handle other workarounds in kube-proxy, e.g. packet filtering, hairpin-masquerade tricks, SNAT, etc.

IPVS proxier leverages iptables in the above scenarios. Specifically, ipvs proxier will fall back on iptables in the following 4 scenarios:

- kube-proxy start with --masquerade-all=true
- Specify cluster CIDR in kube-proxy startup
- Support Loadbalancer type service
- Support NodePort type service

However, we don't want to create too many iptables rules. So we adopt ipset for the sake of decreasing iptables rules. The following is the table of ipset sets that IPVS proxier maintains:
-->

### IPVS 代理中的 iptables 和 ipset

IPVS 用于负载均衡,它无法处理 kube-proxy 中的其他场景,例如包过滤、hairpin 伪装、SNAT 等。

IPVS 代理在上述场景中会利用 iptables。具体来说,IPVS 代理将在以下 4 种情况下回退到 iptables:

- kube-proxy 以 --masquerade-all=true 启动
- 在 kube-proxy 启动时指定了集群 CIDR
- 支持 LoadBalancer 类型的服务
- 支持 NodePort 类型的服务

但是,我们不想创建太多的 iptables 规则,所以我们采用 ipset 来减少 iptables 规则的数量。以下是 IPVS 代理维护的 ipset 集合表:
<!--
| set name | members | usage |
| --- | --- | --- |
| KUBE-CLUSTER-IP | All Service IP + port | masquerade for cases that masquerade-all=true or clusterCIDR specified |
| KUBE-LOOP-BACK | All Service IP + port + IP | masquerade for resolving hairpin issue |
| KUBE-EXTERNAL-IP | Service External IP + port | masquerade for packets to external IPs |
| KUBE-LOAD-BALANCER | Load Balancer ingress IP + port | masquerade for packets to Load Balancer type service |
| KUBE-LOAD-BALANCER-LOCAL | Load Balancer ingress IP + port with externalTrafficPolicy=local | accept packets to Load Balancer with externalTrafficPolicy=local |
| KUBE-LOAD-BALANCER-FW | Load Balancer ingress IP + port with loadBalancerSourceRanges | Drop packets for Load Balancer type Service with loadBalancerSourceRanges specified |
| KUBE-LOAD-BALANCER-SOURCE-CIDR | Load Balancer ingress IP + port + source CIDR | accept packets for Load Balancer type Service with loadBalancerSourceRanges specified |
| KUBE-NODE-PORT-TCP | NodePort type Service TCP port | masquerade for packets to NodePort(TCP) |
| KUBE-NODE-PORT-LOCAL-TCP | NodePort type Service TCP port with externalTrafficPolicy=local | accept packets to NodePort Service with externalTrafficPolicy=local |
| KUBE-NODE-PORT-UDP | NodePort type Service UDP port | masquerade for packets to NodePort(UDP) |
| KUBE-NODE-PORT-LOCAL-UDP | NodePort type service UDP port with externalTrafficPolicy=local | accept packets to NodePort Service with externalTrafficPolicy=local |
-->

| 集合名称 | 成员 | 用法 |
| --- | --- | --- |
| KUBE-CLUSTER-IP | 所有服务 IP + 端口 | 在 masquerade-all=true 或指定了 clusterCIDR 的情况下进行伪装 |
| KUBE-LOOP-BACK | 所有服务 IP + 端口 + IP | 为解决 hairpin 问题进行伪装 |
| KUBE-EXTERNAL-IP | 服务的外部 IP + 端口 | 对发往外部 IP 的数据包进行伪装 |
| KUBE-LOAD-BALANCER | 负载均衡器入口 IP + 端口 | 对发往 LoadBalancer 类型服务的数据包进行伪装 |
| KUBE-LOAD-BALANCER-LOCAL | 负载均衡器入口 IP + 端口(externalTrafficPolicy=local) | 接受发往设置了 externalTrafficPolicy=local 的负载均衡器的数据包 |
| KUBE-LOAD-BALANCER-FW | 负载均衡器入口 IP + 端口(指定了 loadBalancerSourceRanges) | 丢弃发往指定了 loadBalancerSourceRanges 的 LoadBalancer 类型服务的数据包 |
| KUBE-LOAD-BALANCER-SOURCE-CIDR | 负载均衡器入口 IP + 端口 + 源 CIDR | 接受发往指定了 loadBalancerSourceRanges 的 LoadBalancer 类型服务的数据包 |
| KUBE-NODE-PORT-TCP | NodePort 类型服务的 TCP 端口 | 对发往 NodePort(TCP) 的数据包进行伪装 |
| KUBE-NODE-PORT-LOCAL-TCP | NodePort 类型服务的 TCP 端口(externalTrafficPolicy=local) | 接受发往设置了 externalTrafficPolicy=local 的 NodePort 服务的数据包 |
| KUBE-NODE-PORT-UDP | NodePort 类型服务的 UDP 端口 | 对发往 NodePort(UDP) 的数据包进行伪装 |
| KUBE-NODE-PORT-LOCAL-UDP | NodePort 类型服务的 UDP 端口(externalTrafficPolicy=local) | 接受发往设置了 externalTrafficPolicy=local 的 NodePort 服务的数据包 |
<!--
In general, for IPVS proxier, the number of iptables rules is static, no matter how many Services/Pods we have.
-->

通常,对于 IPVS 代理而言,无论我们有多少 Service/Pod,iptables 规则的数量都是静态的。
<!--
## Run kube-proxy in IPVS Mode

Currently, local-up scripts, GCE scripts, and kubeadm support switching IPVS proxy mode via exporting environment variables (KUBE_PROXY_MODE=ipvs) or specifying flag (--proxy-mode=ipvs). Before running IPVS proxier, please ensure IPVS required kernel modules are already installed.

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Finally, for Kubernetes v1.10, feature gate SupportIPVSProxyMode is set to true by default. For Kubernetes v1.11, the feature gate is entirely removed. However, you need to enable --feature-gates=SupportIPVSProxyMode=true explicitly for Kubernetes before v1.10.
-->

## 在 IPVS 模式下运行 kube-proxy

目前,local-up 脚本、GCE 脚本和 kubeadm 支持通过导出环境变量(KUBE_PROXY_MODE=ipvs)或指定标志(--proxy-mode=ipvs)来切换到 IPVS 代理模式。在运行 IPVS 代理之前,请确保已安装 IPVS 所需的内核模块:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
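一个极简的检查示意(假设在节点上有 root 权限的 bash 环境):

```shell
# 加载并确认 IPVS 所需的内核模块
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe $mod
done
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```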
最后,对于 Kubernetes v1.10,特性门控 `SupportIPVSProxyMode` 默认设置为 `true`;对于 Kubernetes v1.11,该特性门控已被完全移除。但是,在 v1.10 之前的 Kubernetes 版本中,您需要显式启用 `--feature-gates=SupportIPVSProxyMode=true`。
<!--
## Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

Post questions (or answer questions) on Stack Overflow

Join the community portal for advocates on K8sPort

Follow us on Twitter @Kubernetesio for latest updates

Chat with the community on Slack

Share your Kubernetes story
-->

## 参与其中

参与 Kubernetes 最简单的方法是加入众多[特别兴趣小组](https://github.com/kubernetes/community/blob/master/sig-list.md)(SIG)中与您的兴趣一致的一个。您有什么想要向 Kubernetes 社区传达的吗?可以在我们的每周[社区会议](https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting)上,或通过以下渠道分享您的声音。

感谢您的持续反馈和支持。

在 [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) 上发布问题(或回答问题)

加入 [K8sPort](http://k8sport.org/) 的倡导者社区门户网站

在 Twitter 上关注我们 [@Kubernetesio](https://twitter.com/kubernetesio),获取最新更新

在 [Slack](http://slack.k8s.io/) 上与社区聊天

分享您的 Kubernetes [故事](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
@ -104,7 +104,7 @@ $ kubectl create -f ./secret.yaml
secret "mysecret" created
```

**编码注意:** secret 数据的序列化 JSON 和 YAML 值使用 base64 编码成字符串。换行符在这些字符串中无效,必须省略。当在 Darwin/macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来拆分长行。另外,对于 Linux 用户如果 `-w` 选项不可用的话,应该添加选项 `-w 0` 到 `base64` 命令或管道 `base64 | tr -d '\n' ` 。
**编码注意:** secret 数据的序列化 JSON 和 YAML 值使用 base64 编码成字符串。换行符在这些字符串中无效,必须省略。当在 Darwin/OS X 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来拆分长行。另外,对于 Linux 用户如果 `-w` 选项不可用的话,应该添加选项 `-w 0` 到 `base64` 命令或管道 `base64 | tr -d '\n' ` 。

#### 解码 Secret
@ -0,0 +1,464 @@
---
approvers:
- davidopp
- kevin-wangzefeng
- bsalamat
cn-approvers:
- linyouchong
title: Taint 和 Toleration
content_template: templates/concept
weight: 40
---

<!--
---
approvers:
- davidopp
- kevin-wangzefeng
- bsalamat
title: Taints and Tolerations
content_template: templates/concept
weight: 40
---
-->

{{< toc >}}
{{% capture overview %}}
<!--
Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature),
is a property of *pods* that *attracts* them to a set of nodes (either as a
preference or a hard requirement). Taints are the opposite -- they allow a
*node* to *repel* a set of pods.
-->
节点亲和性(详见[这里](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature)),是 *pod* 的一种属性(偏好或硬性要求),它使 *pod* 被吸引到一类特定的节点。Taint 则相反,它使 *节点* 能够 *排斥* 一类特定的 pod。

<!--
Taints and tolerations work together to ensure that pods are not scheduled
onto inappropriate nodes. One or more taints are applied to a node; this
marks that the node should not accept any pods that do not tolerate the taints.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule
onto nodes with matching taints.
-->
Taint 和 toleration 相互配合,可以用来避免 pod 被分配到不合适的节点上。每个节点上都可以应用一个或多个 taint,这表示对于那些不能容忍这些 taint 的 pod,是不会被该节点接受的。如果将 toleration 应用于 pod 上,则表示这些 pod 可以(但不要求)被调度到具有匹配 taint 的节点上。

{{% /capture %}}

{{% capture body %}}
<!--
## Concepts
-->

## 概念

<!--
You add a taint to a node using [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint).
For example,
-->
您可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个 taint。比如,

```shell
kubectl taint nodes node1 key=value:NoSchedule
```
<!--
places a taint on node `node1`. The taint has key `key`, value `value`, and taint effect `NoSchedule`.
This means that no pod will be able to schedule onto `node1` unless it has a matching toleration.
-->
给节点 `node1` 增加一个 taint,它的 key 是 `key`,value 是 `value`,effect 是 `NoSchedule`。这表示除非 pod 拥有和这个 taint 相匹配的 toleration,否则不会被调度到 `node1` 这个节点。

<!--
To remove the taint added by the command above, you can run:
-->
想删除上述命令添加的 taint,您可以运行:

```shell
kubectl taint nodes node1 key:NoSchedule-
```

<!--
You specify a toleration for a pod in the PodSpec. Both of the following tolerations "match" the
taint created by the `kubectl taint` line above, and thus a pod with either toleration would be able
to schedule onto `node1`:
-->
您可以在 PodSpec 中为 pod 定义 toleration。下面两个 toleration 均与上面例子中使用 `kubectl taint` 命令创建的 taint 相匹配,因此如果一个 pod 拥有其中的任何一个 toleration,都能够被分配到 `node1`:

```yaml
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
```

```yaml
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
```
<!--
A toleration "matches" a taint if the keys are the same and the effects are the same, and:

* the `operator` is `Exists` (in which case no `value` should be specified), or
* the `operator` is `Equal` and the `value`s are equal

`Operator` defaults to `Equal` if not specified.
-->
一个 toleration 和一个 taint 相“匹配”是指它们有一样的 key 和 effect,并且:

* `operator` 是 `Exists`(此时 toleration 不能指定 `value`),或者
* `operator` 是 `Equal`,且它们的 `value` 相等

如果不指定 `operator`,则默认为 `Equal`。
{{< note >}}
<!--
**Note:** There are two special cases:

* An empty `key` with operator `Exists` matches all keys, values and effects which means this
will tolerate everything.
-->
**注意:** 存在两种特殊情况:

* 如果一个 toleration 的 `key` 为空且 operator 为 `Exists`,表示这个 toleration 与任意的 key、value 和 effect 都匹配,即这个 toleration 能容忍任意 taint。

```yaml
tolerations:
- operator: "Exists"
```

<!--
* An empty `effect` matches all effects with key `key`.
-->
* 如果一个 toleration 的 `effect` 为空,则它能匹配所有 `key` 值与之相同的 taint,此时 taint 的 `effect` 可以是任意值。

```yaml
tolerations:
- key: "key"
  operator: "Exists"
```
{{< /note >}}
<!--
The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
This is a "preference" or "soft" version of `NoSchedule` -- the system will *try* to avoid placing a
pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is
`NoExecute`, described later.
-->
上述例子中 `effect` 使用的值为 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。这是“偏好”或“软”版本的 `NoSchedule`:系统会*尽量*避免将 pod 调度到存在其不能容忍的 taint 的节点上,但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。
<!--
You can put multiple taints on the same node and multiple tolerations on the same pod.
The way Kubernetes processes multiple taints and tolerations is like a filter: start
with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the
remaining un-ignored taints have the indicated effects on the pod. In particular,
-->
您可以给一个节点添加多个 taint,也可以给一个 pod 添加多个 toleration。Kubernetes 处理多个 taint 和 toleration 的过程就像一个过滤器:从一个节点的所有 taint 开始遍历,过滤掉那些 pod 中存在与之相匹配的 toleration 的 taint。余下未被过滤的 taint 的 effect 值决定了 pod 是否会被分配到该节点,特别是以下情况:

<!--
* if there is at least one un-ignored taint with effect `NoSchedule` then Kubernetes will not schedule
the pod onto that node
* if there is no un-ignored taint with effect `NoSchedule` but there is at least one un-ignored taint with
effect `PreferNoSchedule` then Kubernetes will *try* to not schedule the pod onto the node
* if there is at least one un-ignored taint with effect `NoExecute` then the pod will be evicted from
the node (if it is already running on the node), and will not be
scheduled onto the node (if it is not yet running on the node).
-->
* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoSchedule` 的 taint,则 Kubernetes 不会将 pod 分配到该节点。
* 如果未被过滤的 taint 中不存在 effect 值为 `NoSchedule` 的 taint,但是存在 effect 值为 `PreferNoSchedule` 的 taint,则 Kubernetes 会*尝试*不将 pod 分配到该节点。
* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoExecute` 的 taint,则 Kubernetes 不会将 pod 分配到该节点(如果 pod 还未在节点上运行),或者将 pod 从该节点驱逐(如果 pod 已经在节点上运行)。
<!--
For example, imagine you taint a node like this
-->
例如,假设您给一个节点添加了如下的 taint:

```shell
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
```

<!--
And a pod has two tolerations:
-->
然后存在一个 pod,它有两个 toleration:
```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
```
<!--
In this case, the pod will not be able to schedule onto the node, because there is no
toleration matching the third taint. But it will be able to continue running if it is
already running on the node when the taint is added, because the third taint is the only
one of the three that is not tolerated by the pod.
-->
在这个例子中,上述 pod 不会被分配到上述节点,因为它没有与第三个 taint 相匹配的 toleration。但是如果在给节点添加上述 taint 之前,该 pod 已经在该节点上运行,那么它还可以继续运行,因为第三个 taint 是三个 taint 中唯一不能被这个 pod 容忍的。
<!--
Normally, if a taint with effect `NoExecute` is added to a node, then any pods that do
not tolerate the taint will be evicted immediately, and any pods that do tolerate the
taint will never be evicted. However, a toleration with `NoExecute` effect can specify
an optional `tolerationSeconds` field that dictates how long the pod will stay bound
to the node after the taint is added. For example,
-->
通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的 taint,则任何不能容忍这个 taint 的 pod 都会马上被驱逐,而任何可以容忍这个 taint 的 pod 都不会被驱逐。但是,如果 pod 的某个 effect 值为 `NoExecute` 的 toleration 指定了可选属性 `tolerationSeconds` 的值,则表示在给节点添加了上述 taint 之后,pod 还能继续在节点上运行的时间。例如,
```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
```
<!--
means that if this pod is running and a matching taint is added to the node, then
the pod will stay bound to the node for 3600 seconds, and then be evicted. If the
taint is removed before that time, the pod will not be evicted.
-->
这表示如果这个 pod 正在运行,然后一个匹配的 taint 被添加到其所在的节点,那么 pod 还将继续在节点上运行 3600 秒,然后被驱逐。如果在此之前上述 taint 被删除了,则 pod 不会被驱逐。
<!--
## Example Use Cases
-->

## 使用例子

<!--
Taints and tolerations are a flexible way to steer pods *away* from nodes or evict
pods that shouldn't be running. A few of the use cases are
-->
通过 taint 和 toleration,可以灵活地让 pod *避开*某些节点或者将 pod 从某些节点驱逐。下面是几个使用例子:
<!--
* **Dedicated Nodes**: If you want to dedicate a set of nodes for exclusive use by
a particular set of users, you can add a taint to those nodes (say,
`kubectl taint nodes nodename dedicated=groupName:NoSchedule`) and then add a corresponding
toleration to their pods (this would be done most easily by writing a custom
[admission controller](/docs/reference/access-authn-authz/admission-controllers/)).
The pods with the tolerations will then be allowed to use the tainted (dedicated) nodes as
well as any other nodes in the cluster. If you want to dedicate the nodes to them *and*
ensure they *only* use the dedicated nodes, then you should additionally add a label similar
to the taint to the same set of nodes (e.g. `dedicated=groupName`), and the admission
controller should additionally add a node affinity to require that the pods can only schedule
onto nodes labeled with `dedicated=groupName`.
-->
* **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个 taint(即,`kubectl taint nodes nodename dedicated=groupName:NoSchedule`),然后给这组用户的 pod 添加一个相对应的 toleration(通过编写一个自定义的 [admission controller](/docs/reference/access-authn-authz/admission-controllers/),很容易就能做到)。拥有上述 toleration 的 pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。如果您希望这些 pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 taint 类似的 label(例如:`dedicated=groupName`),同时还要在上述 admission controller 中给 pod 增加节点亲和性,要求这些 pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上(相关命令的用法见下面的示例)。
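下面是一个极简的命令示意(其中 nodename 与 groupName 均为占位符):

```shell
# 给专用节点同时添加 taint 和对应的 label
kubectl taint nodes nodename dedicated=groupName:NoSchedule
kubectl label nodes nodename dedicated=groupName
```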
<!--
* **Nodes with Special Hardware**: In a cluster where a small subset of nodes have specialized
hardware (for example GPUs), it is desirable to keep pods that don't need the specialized
hardware off of those nodes, thus leaving room for later-arriving pods that do need the
specialized hardware. This can be done by tainting the nodes that have the specialized
hardware (e.g. `kubectl taint nodes nodename special=true:NoSchedule` or
`kubectl taint nodes nodename special=true:PreferNoSchedule`) and adding a corresponding
toleration to pods that use the special hardware. As in the dedicated nodes use case,
it is probably easiest to apply the tolerations using a custom
[admission controller](/docs/reference/access-authn-authz/admission-controllers/).
For example, it is recommended to use [Extended
Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)
to represent the special hardware, taint your special hardware nodes with the
extended resource name and run the
[ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration)
admission controller. Now, because the nodes are tainted, no pods without the
toleration will schedule on them. But when you submit a pod that requests the
extended resource, the `ExtendedResourceToleration` admission controller will
automatically add the correct toleration to the pod and that pod will schedule
on the special hardware nodes. This will make sure that these special hardware
nodes are dedicated for pods requesting such hardware and you don't have to
manually add tolerations to your pods.
-->
* **配备了特殊硬件的节点**:在部分节点配备了特殊硬件(比如 GPU)的集群中,我们希望不需要这类硬件的 pod 不要被分配到这些特殊节点,以便为后继需要这类硬件的 pod 保留资源。要达到这个目的,可以先给配备了特殊硬件的节点添加 taint(例如 `kubectl taint nodes nodename special=true:NoSchedule` 或 `kubectl taint nodes nodename special=true:PreferNoSchedule`),然后给使用了这类特殊硬件的 pod 添加一个相匹配的 toleration。和专用节点的例子类似,添加这个 toleration 的最简单的方法是使用自定义 [admission controller](/docs/reference/access-authn-authz/admission-controllers/)。比如,我们推荐使用 [Extended Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) 来表示特殊硬件,给配置了特殊硬件的节点添加 taint 时包含 extended resource 名称,然后运行一个 [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) admission controller。此时,因为节点已经被 taint 了,没有对应 toleration 的 pod 不会被调度到这些节点。但当你创建一个使用了 extended resource 的 pod 时,`ExtendedResourceToleration` admission controller 会自动给 pod 加上正确的 toleration,这样 pod 就会被自动调度到这些配置了特殊硬件的节点上。这样就能够确保这些配置了特殊硬件的节点专门用于运行需要这些硬件的 pod,并且您无需手动给这些 pod 添加 toleration。
<!--
* **Taint based Evictions (alpha feature)**: A per-pod-configurable eviction behavior
when there are node problems, which is described in the next section.
-->
* **基于 taint 的驱逐(alpha 特性)**:这是在节点出现问题时,按照每个 pod 的配置来执行的驱逐行为,接下来的章节会描述这个特性。

<!--
## Taint based Evictions
-->

## 基于 taint 的驱逐
<!--
Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already
running on the node as follows

* pods that do not tolerate the taint are evicted immediately
* pods that tolerate the taint without specifying `tolerationSeconds` in
their toleration specification remain bound forever
* pods that tolerate the taint with a specified `tolerationSeconds` remain
bound for the specified amount of time
-->
前文我们提到过 taint 的 effect 值 `NoExecute`,它会对已经在节点上运行的 pod 产生如下影响:

* 如果 pod 不能容忍 effect 值为 `NoExecute` 的 taint,那么 pod 将马上被驱逐
* 如果 pod 能够容忍 effect 值为 `NoExecute` 的 taint,但是在 toleration 定义中没有指定 `tolerationSeconds`,则 pod 会一直在这个节点上运行
* 如果 pod 能够容忍 effect 值为 `NoExecute` 的 taint,而且指定了 `tolerationSeconds`,则 pod 还能在这个节点上继续运行指定的时间长度
<!--
In addition, Kubernetes 1.6 introduced alpha support for representing node
problems. In other words, the node controller automatically taints a node when
certain condition is true. The following taints are built in:

* `node.kubernetes.io/not-ready`: Node is not ready. This corresponds to
the NodeCondition `Ready` being "`False`".
* `node.kubernetes.io/unreachable`: Node is unreachable from the node
controller. This corresponds to the NodeCondition `Ready` being "`Unknown`".
* `node.kubernetes.io/out-of-disk`: Node becomes out of disk.
* `node.kubernetes.io/memory-pressure`: Node has memory pressure.
* `node.kubernetes.io/disk-pressure`: Node has disk pressure.
* `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
* `node.kubernetes.io/unschedulable`: Node is unschedulable.
* `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
with "external" cloud provider, this taint is set on a node to mark it
as unusable. After a controller from the cloud-controller-manager initializes
this node, the kubelet removes this taint.
-->
此外,Kubernetes 1.6 已经支持(alpha 阶段)节点问题的表示。换句话说,当某种条件为真时,node controller 会自动给节点添加一个 taint。当前内置的 taint 包括:

* `node.kubernetes.io/not-ready`:节点未准备好。这相当于节点状态 `Ready` 的值为 "`False`"。
* `node.kubernetes.io/unreachable`:node controller 访问不到节点。这相当于节点状态 `Ready` 的值为 "`Unknown`"。
* `node.kubernetes.io/out-of-disk`:节点磁盘耗尽。
* `node.kubernetes.io/memory-pressure`:节点存在内存压力。
* `node.kubernetes.io/disk-pressure`:节点存在磁盘压力。
* `node.kubernetes.io/network-unavailable`:节点网络不可用。
* `node.kubernetes.io/unschedulable`:节点不可调度。
* `node.cloudprovider.kubernetes.io/uninitialized`:如果 kubelet 启动时指定了一个“外部”cloud provider,它将给当前节点添加一个 taint 将其标志为不可用。在 cloud-controller-manager 的一个 controller 初始化这个节点后,kubelet 将删除这个 taint。
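可以用下面的命令查看某个节点上当前生效的 taint(节点名 node1 为假设值):

```shell
kubectl describe node node1 | grep Taints
```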
|
||||
|
||||
<!--
|
||||
When the `TaintBasedEvictions` alpha feature is enabled (you can do this by
|
||||
including `TaintBasedEvictions=true` in `--feature-gates` for Kubernetes controller manager,
|
||||
such as `--feature-gates=FooBar=true,TaintBasedEvictions=true`), the taints are automatically
|
||||
added by the NodeController (or kubelet) and the normal logic for evicting pods from nodes
|
||||
based on the Ready NodeCondition is disabled.
|
||||
-->
|
||||
在启用了 `TaintBasedEvictions` 这个 alpha 功能特性后(在 Kubernetes controller manager 的 `--feature-gates` 参数中加入 `TaintBasedEvictions=true` 即可开启,例如:`--feature-gates=FooBar=true,TaintBasedEvictions=true`),NodeController(或 kubelet)会自动给节点添加这类 taint,同时,上述基于节点状态 `Ready` 对 pod 进行驱逐的常规逻辑会被禁用。下面给出一个最小的启动参数示意。
|
||||
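<!--
A minimal sketch of enabling this feature gate on the controller manager
(all other flags are omitted):
-->
在 controller manager 上开启该功能特性的最小示意(其余启动参数均已省略):

```shell
# 仅为示意:在 Kubernetes controller manager 的启动参数中加入该 feature gate
kube-controller-manager --feature-gates=TaintBasedEvictions=true
```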
|
||||
{{< note >}}
|
||||
<!--
|
||||
**Note:** To maintain the existing [rate limiting](/docs/concepts/architecture/nodes/)
|
||||
behavior of pod evictions due to node problems, the system actually adds the taints
|
||||
in a rate-limited way. This prevents massive pod evictions in scenarios such
|
||||
as the master becoming partitioned from the nodes.
|
||||
-->
|
||||
注意:为了维持现有的因节点问题而驱逐 pod 的[限速](/docs/concepts/architecture/nodes/)行为,系统实际上会以限速方式添加 taint。这样可以避免在 master 与节点通讯中断等场景下 pod 被大量驱逐。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
This alpha feature, in combination with `tolerationSeconds`, allows a pod
|
||||
to specify how long it should stay bound to a node that has one or both of these problems.
|
||||
-->
|
||||
这个 alpha 功能特性结合 `tolerationSeconds`,使 pod 能够指定:当节点出现其中一个或两个问题时,它还应与这个节点保持绑定多长时间。
|
||||
|
||||
<!--
|
||||
For example, an application with a lot of local state might want to stay
|
||||
bound to node for a long time in the event of network partition, in the hope
|
||||
that the partition will recover and thus the pod eviction can be avoided.
|
||||
The toleration the pod would use in that case would look like
|
||||
-->
|
||||
比如,一个使用了很多本地状态的应用程序在网络断开时,仍然希望停留在当前节点上运行一段较长的时间,愿意等待网络恢复以避免被驱逐。在这种情况下,pod 的 toleration 可能是下面这样的:
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
- key: "node.alpha.kubernetes.io/unreachable"
|
||||
operator: "Exists"
|
||||
effect: "NoExecute"
|
||||
tolerationSeconds: 6000
|
||||
```
|
||||
|
||||
<!--
|
||||
Note that Kubernetes automatically adds a toleration for
|
||||
`node.kubernetes.io/not-ready` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.kubernetes.io/not-ready`.
|
||||
Likewise it adds a toleration for
|
||||
`node.alpha.kubernetes.io/unreachable` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.alpha.kubernetes.io/unreachable`.
|
||||
-->
|
||||
注意,Kubernetes 会自动给 pod 添加一个 key 为 `node.kubernetes.io/not-ready` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经存在 key 为 `node.kubernetes.io/not-ready` 的 toleration。同样,Kubernetes 会给 pod 添加一个 key 为 `node.alpha.kubernetes.io/unreachable` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经存在 key 为 `node.alpha.kubernetes.io/unreachable` 的 toleration。
|
||||
|
||||
<!--
|
||||
These automatically-added tolerations ensure that
|
||||
the default pod behavior of remaining bound for 5 minutes after one of these
|
||||
problems is detected is maintained.
|
||||
The two default tolerations are added by the [DefaultTolerationSeconds
|
||||
admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds).
|
||||
-->
|
||||
这种自动添加 toleration 机制保证了在其中一种问题被检测到时 pod 默认能够继续停留在当前节点运行 5 分钟。这两个默认 toleration 是由 [DefaultTolerationSeconds
|
||||
admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds)添加的。
|
||||
|
||||
<!--
|
||||
[DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pods are created with
|
||||
`NoExecute` tolerations for the following taints with no `tolerationSeconds`:
|
||||
|
||||
* `node.alpha.kubernetes.io/unreachable`
|
||||
* `node.kubernetes.io/not-ready`
|
||||
|
||||
This ensures that DaemonSet pods are never evicted due to these problems,
|
||||
which matches the behavior when this feature is disabled.
|
||||
-->
|
||||
创建 [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pod 时,系统会针对以下 taint 自动添加不带 `tolerationSeconds` 的 `NoExecute` toleration:
|
||||
|
||||
* `node.alpha.kubernetes.io/unreachable`
|
||||
* `node.kubernetes.io/not-ready`
|
||||
|
||||
这保证了出现上述问题时 DaemonSet 中的 pod 永远不会被驱逐,这和 `TaintBasedEvictions` 这个特性被禁用后的行为是一样的。
|
||||
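<!--
To verify this behavior, you can print the tolerations of a DaemonSet pod
(the pod name below is a placeholder):
-->
要验证这个行为,可以打印某个 DaemonSet pod 的 toleration(下面的 pod 名称仅为占位示例):

```shell
# 输出的 toleration 列表中应包含上述两个不带 tolerationSeconds 的条目
kubectl get pod my-daemonset-pod -o jsonpath='{.spec.tolerations}'
```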
|
||||
<!--
|
||||
|
||||
## Taint Nodes by Condition
|
||||
|
||||
-->
|
||||
|
||||
## 基于节点状态添加 taint
|
||||
|
||||
<!--
|
||||
Version 1.8 introduces an alpha feature that causes the node controller to create taints corresponding to
|
||||
Node conditions. When this feature is enabled (you can do this by including `TaintNodesByCondition=true` in the `--feature-gates` command line flag to the scheduler, such as
|
||||
`--feature-gates=FooBar=true,TaintNodesByCondition=true`), the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
|
||||
|
||||
Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the
|
||||
following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from
|
||||
breaking.
|
||||
|
||||
* `node.kubernetes.io/memory-pressure`
|
||||
* `node.kubernetes.io/disk-pressure`
|
||||
* `node.kubernetes.io/out-of-disk` (*only for critical pods*)
|
||||
* `node.kubernetes.io/unschedulable` (1.10 or later)
|
||||
* `node.kubernetes.io/network-unavailable` (*host network only*)
|
||||
-->
|
||||
1.8 版本引入了一个 alpha 特性,让 node controller 根据节点的状态创建 taint。当开启了这个特性时(通过给 scheduler 的 `--feature-gates` 参数添加 `TaintNodesByCondition=true`,例如:`--feature-gates=FooBar=true,TaintNodesByCondition=true`),scheduler 不会检查节点的状态,而是检查节点的 taint。这确保了节点的状态不影响应该调度哪些 Pod 到节点上。用户可以通过给 Pod 添加 toleration 来选择忽略节点的某些问题(以节点状态的形式表示)。
|
||||
从 Kubernetes 1.8 开始,DaemonSet controller 会自动为所有 daemon pod 添加如下 `NoSchedule` toleration,以防止 DaemonSet 无法正常工作。
|
||||
* `node.kubernetes.io/memory-pressure`
|
||||
* `node.kubernetes.io/disk-pressure`
|
||||
* `node.kubernetes.io/out-of-disk`(*仅针对 critical pod*)
|
||||
* `node.kubernetes.io/unschedulable` (1.10 或更高版本)
|
||||
* `node.kubernetes.io/network-unavailable`(*仅针对使用 host network 的 pod*)
|
||||
|
||||
<!--
|
||||
Adding these tolerations ensures backward compatibility. You can also add
|
||||
arbitrary tolerations to DaemonSets.
|
||||
-->
|
||||
添加上述 toleration 确保了向后兼容。您也可以自由地向 DaemonSet 添加任意 toleration。
|
||||
|
|
@ -87,7 +87,7 @@ Kubelet会获取并且定期刷新ECR的凭证。它需要以下权限
|
|||
|
||||
- 验证是否满足以上要求
|
||||
- 获取工作站上 $REGION(例如 `us-west-2`)的凭证,用这些凭证 SSH 到主机并手动运行 docker,检查能否正常工作
|
||||
- 验证kublet是否使用参数`--cloud-provider=aws`运行
|
||||
- 验证kubelet是否使用参数`--cloud-provider=aws`运行
|
||||
- 检查kubelet日志(例如 `journalctl -u kubelet`),是否有类似的行
|
||||
- `plugins.go:56] Registering credential provider: aws-ecr-key`
|
||||
- `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
|
||||
|
|
@ -0,0 +1,68 @@
|
|||
---
|
||||
title: 安全考虑
|
||||
content_template: templates/task
|
||||
---
|
||||
<!--
|
||||
---
|
||||
title: Security Considerations
|
||||
content_template: templates/task
|
||||
---
|
||||
-->
|
||||
|
||||
{{% capture overview %}}
|
||||
<!--
|
||||
By default all connections between every provided node are secured via TLS by easyrsa, including the etcd cluster.
|
||||
|
||||
This page explains the security considerations of a deployed cluster and production recommendations.
|
||||
-->
|
||||
默认情况下,所有提供的节点之间的所有连接(包括 etcd 集群)都通过 easyrsa 的 TLS 进行保护。
|
||||
|
||||
本文介绍已部署集群的安全注意事项和生产环境建议。
|
||||
{{% /capture %}}
|
||||
{{% capture prerequisites %}}
|
||||
<!--
|
||||
This page assumes you have a working Juju deployed cluster.
|
||||
-->
|
||||
本文假定您拥有一个使用 Juju 部署的正在运行的集群。
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture steps %}}
|
||||
<!--
|
||||
## Implementation
|
||||
|
||||
The TLS and easyrsa implementations use the following [layers](https://jujucharms.com/docs/2.2/developer-layers).
|
||||
|
||||
[layer-tls-client](https://github.com/juju-solutions/layer-tls-client)
|
||||
[layer-easyrsa](https://github.com/juju-solutions/layer-easyrsa)
|
||||
-->
|
||||
## 实现
|
||||
|
||||
TLS 和 easyrsa 的实现使用以下 [layers](https://jujucharms.com/docs/2.2/developer-layers)。
|
||||
|
||||
[layer-tls-client](https://github.com/juju-solutions/layer-tls-client)
|
||||
[layer-easyrsa](https://github.com/juju-solutions/layer-easyrsa)
|
||||
|
||||
<!--
|
||||
## Limiting ssh access
|
||||
|
||||
By default the administrator can ssh to any deployed node in a cluster. You can mass disable ssh access to the cluster nodes by issuing the following command.
|
||||
|
||||
juju model-config proxy-ssh=true
|
||||
|
||||
Note: The Juju controller node will still have open ssh access in your cloud, and will be used as a jump host in this case.
|
||||
|
||||
Refer to the [model management](https://jujucharms.com/docs/2.2/models) page in the Juju documentation for instructions on how to manage ssh keys.
|
||||
-->
|
||||
## 限制 ssh 访问
|
||||
|
||||
默认情况下,管理员可以 ssh 到集群中的任意已部署节点。您可以通过以下命令来批量禁用集群节点的 ssh 访问权限。
|
||||
|
||||
juju model-config proxy-ssh=true
|
||||
|
||||
注意:Juju 控制器节点在您的云中仍然有开放的 ssh 访问权限,并且在这种情况下将被用作跳板机。
|
||||
|
||||
有关如何管理 ssh 密钥的说明,请参阅 Juju 文档中的 [模型管理](https://jujucharms.com/docs/2.2/models) 页面。
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,563 @@
|
|||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- davidopp
|
||||
- derekwaynecarr
|
||||
- erictune
|
||||
- janetkuo
|
||||
- thockin
|
||||
cn-approvers:
|
||||
- linyouchong
|
||||
title: 使用准入控制插件
|
||||
---
|
||||
<!--
|
||||
---
|
||||
assignees:
|
||||
- bprashanth
|
||||
- davidopp
|
||||
- derekwaynecarr
|
||||
- erictune
|
||||
- janetkuo
|
||||
- thockin
|
||||
title: Using Admission Controllers
|
||||
---
|
||||
-->
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
<!--
|
||||
## What are they?
|
||||
-->
|
||||
## 什么是准入控制插件?
|
||||
|
||||
<!--
|
||||
An admission control plug-in is a piece of code that intercepts requests to the Kubernetes
|
||||
API server prior to persistence of the object, but after the request is authenticated
|
||||
and authorized. The plug-in code is in the API server process
|
||||
and must be compiled into the binary in order to be used at this time.
|
||||
-->
|
||||
准入控制插件是一段代码,它会在请求通过认证和授权之后、对象被持久化之前,拦截到达 Kubernetes API server 的请求。插件代码位于 API server 进程中,目前必须将其编译进该二进制文件才能使用。
|
||||
|
||||
<!--
|
||||
Each admission control plug-in is run in sequence before a request is accepted into the cluster. If
|
||||
any of the plug-ins in the sequence reject the request, the entire request is rejected immediately
|
||||
and an error is returned to the end-user.
|
||||
-->
|
||||
在每个请求被集群接受之前,准入控制插件依次执行。如果插件序列中任何一个拒绝了该请求,则整个请求将立即被拒绝并且返回一个错误给终端用户。
|
||||
|
||||
<!--
|
||||
Admission control plug-ins may mutate the incoming object in some cases to apply system configured
|
||||
defaults. In addition, admission control plug-ins may mutate related resources as part of request
|
||||
processing to do things like increment quota usage.
|
||||
-->
|
||||
准入控制插件可能会在某些情况下改变传入的对象,从而应用系统配置的默认值。另外,作为请求处理的一部分,准入控制插件可能会对相关的资源进行变更,以实现类似增加配额使用量这样的功能。
|
||||
|
||||
<!--
|
||||
## Why do I need them?
|
||||
-->
|
||||
## 为什么需要准入控制插件?
|
||||
|
||||
<!--
|
||||
Many advanced features in Kubernetes require an admission control plug-in to be enabled in order
|
||||
to properly support the feature. As a result, a Kubernetes API server that is not properly
|
||||
configured with the right set of admission control plug-ins is an incomplete server and will not
|
||||
support all the features you expect.
|
||||
-->
|
||||
Kubernetes 的许多高级功能都要求启用一个准入控制插件,以便正确地支持该特性。因此,一个没有正确配置准入控制插件的 Kubernetes API server 是不完整的,它不会支持您所期望的所有特性。
|
||||
|
||||
<!--
|
||||
## How do I turn on an admission control plug-in?
|
||||
-->
|
||||
## 如何启用一个准入控制插件?
|
||||
|
||||
<!--
|
||||
The Kubernetes API server supports a flag, `admission-control` that takes a comma-delimited,
|
||||
ordered list of admission control choices to invoke prior to modifying objects in the cluster.
|
||||
-->
|
||||
Kubernetes API server 支持一个标志参数 `admission-control`,它接受一个以逗号分隔、有序的准入控制插件列表;在修改集群中的对象之前,这些插件会被依次调用。下面给出一个示例。
|
||||
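<!--
A minimal sketch (the plug-in list here is for illustration only; recommended
sets are listed at the end of this page):
-->
一个最小示意(这里的插件列表仅作演示,推荐的插件组合见本页末尾):

```shell
kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger
```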
|
||||
<!--
|
||||
## What does each plug-in do?
|
||||
-->
|
||||
## 每个插件的功能是什么?
|
||||
|
||||
### AlwaysAdmit
|
||||
|
||||
<!--
|
||||
Use this plugin by itself to pass-through all requests.
|
||||
-->
|
||||
单独使用这个插件时,它会放行所有请求。
|
||||
|
||||
### AlwaysPullImages
|
||||
|
||||
<!--
|
||||
This plug-in modifies every new Pod to force the image pull policy to Always. This is useful in a
|
||||
multitenant cluster so that users can be assured that their private images can only be used by those
|
||||
who have the credentials to pull them. Without this plug-in, once an image has been pulled to a
|
||||
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
|
||||
scheduled onto the right node), without any authorization check against the image. When this plug-in
|
||||
is enabled, images are always pulled prior to starting containers, which means valid credentials are
|
||||
required.
|
||||
-->
|
||||
这个插件将每一个新创建的 Pod 的镜像拉取策略改为 Always。这在多租户集群中很有用,用户可以确信他们的私有镜像只能被拥有拉取凭证的人使用。如果没有这个插件,一旦镜像被拉取到节点上,任何用户的 pod 只要知道镜像的名称(假设 pod 被调度到正确的节点上)就可以使用它,而无需对镜像做任何授权检查。启用这个插件后,启动容器之前总是要拉取镜像,这意味着需要有效的凭证。
|
||||
|
||||
### AlwaysDeny
|
||||
|
||||
<!--
|
||||
Rejects all requests. Used for testing.
|
||||
-->
|
||||
拒绝所有的请求。用于测试。
|
||||
|
||||
<!--
|
||||
### DenyExecOnPrivileged (deprecated)
|
||||
-->
|
||||
### DenyExecOnPrivileged (已废弃)
|
||||
|
||||
<!--
|
||||
This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container.
|
||||
-->
|
||||
如果一个 pod 拥有一个特权容器,这个插件将拦截所有在该 pod 中执行 exec 命令的请求。
|
||||
|
||||
<!--
|
||||
If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec
|
||||
commands in those containers, we strongly encourage enabling this plug-in.
|
||||
-->
|
||||
如果集群支持特权容器,并且希望限制最终用户在这些容器中执行 exec 命令的能力,我们强烈建议启用这个插件。
|
||||
|
||||
<!--
|
||||
This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
|
||||
-->
|
||||
此功能已合并到 [DenyEscalatingExec](#denyescalatingexec)。
|
||||
|
||||
### DenyEscalatingExec
|
||||
|
||||
<!--
|
||||
This plug-in will deny exec and attach commands to pods that run with escalated privileges that
|
||||
allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
|
||||
have access to the host PID namespace.
|
||||
-->
|
||||
这个插件将拒绝在以提升的特权运行、因而能够访问宿主机的 pod 中执行 exec 和 attach 命令。这包括以特权模式运行的 pod、可以访问宿主机 IPC 命名空间的 pod,以及可以访问宿主机 PID 命名空间的 pod。
|
||||
|
||||
<!--
|
||||
If your cluster supports containers that run with escalated privileges, and you want to
|
||||
restrict the ability of end-users to exec commands in those containers, we strongly encourage
|
||||
enabling this plug-in.
|
||||
-->
|
||||
如果集群支持以提升的特权运行的容器,并且希望限制最终用户在这些容器中执行 exec 命令的能力,我们强烈建议启用这个插件。
|
||||
|
||||
### ImagePolicyWebhook
|
||||
|
||||
<!--
|
||||
The ImagePolicyWebhook plug-in allows a backend webhook to make admission decisions. You enable this plug-in by setting the admission-control option as follows:
|
||||
-->
|
||||
ImagePolicyWebhook 插件允许使用一个后端的 webhook 做出准入决策。您可以按照如下配置 admission-control 选项来启用这个插件:
|
||||
|
||||
```shell
|
||||
--admission-control=ImagePolicyWebhook
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Configuration File Format
|
||||
-->
|
||||
#### 配置文件格式
|
||||
<!--
|
||||
ImagePolicyWebhook uses the admission config file `--admission-control-config-file` to set configuration options for the behavior of the backend. This file may be json or yaml and has the following format:
|
||||
-->
|
||||
ImagePolicyWebhook 插件使用 admission config 文件 `--admission-control-config-file` 来为后端行为设置配置选项。该文件可以是 json 或 yaml,格式如下:
|
||||
|
||||
```javascript
|
||||
{
|
||||
"imagePolicy": {
|
||||
"kubeConfigFile": "path/to/kubeconfig/for/backend",
|
||||
"allowTTL": 50, // time in s to cache approval
|
||||
"denyTTL": 50, // time in s to cache denial
|
||||
"retryBackoff": 500, // time in ms to wait between retries
|
||||
"defaultAllow": true // determines behavior if the webhook backend fails
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
The config file must reference a [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.
|
||||
-->
|
||||
这个配置文件必须引用一个 [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 格式的文件,并在其中配置指向后端的连接。与后端的通信必须通过 TLS 进行。
|
||||
|
||||
<!--
|
||||
The kubeconfig file's cluster field must point to the remote service, and the user field must contain the returned authorizer.
|
||||
-->
|
||||
kubeconfig 文件的 cluster 字段需要指向远端服务,user 字段需要包含返回的授权者信息。
|
||||
|
||||
```yaml
|
||||
# clusters refers to the remote service.
|
||||
clusters:
|
||||
- name: name-of-remote-imagepolicy-service
|
||||
cluster:
|
||||
certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
|
||||
server: https://images.example.com/policy # URL of remote service to query. Must use 'https'.
|
||||
|
||||
# users refers to the API server's webhook configuration.
|
||||
users:
|
||||
- name: name-of-api-server
|
||||
user:
|
||||
client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
|
||||
client-key: /path/to/key.pem # key matching the cert
|
||||
```
|
||||
<!--
|
||||
For additional HTTP configuration, refer to the [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) documentation.
|
||||
-->
|
||||
对于更多的 HTTP 配置,请参阅 [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 文档。
|
||||
|
||||
<!--
|
||||
#### Request Payloads
|
||||
-->
|
||||
#### 请求载荷
|
||||
|
||||
<!--
|
||||
When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`.
|
||||
-->
|
||||
当需要做出准入决策时,API server 会 POST 一个 JSON 序列化的 api.imagepolicy.v1alpha1.ImageReview 对象来描述该操作。该对象包含描述被审核容器的字段,以及所有匹配 `*.image-policy.k8s.io/*` 的 pod 注解。
|
||||
|
||||
<!--
|
||||
Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
|
||||
-->
|
||||
注意,webhook API 对象与其他 Kubernetes API 对象一样,受相同的版本控制兼容性规则约束。实现者应该了解 alpha 对象的兼容性承诺较为宽松,并检查请求的 "apiVersion" 字段,以确保正确地反序列化。此外,API server 必须启用 imagepolicy.k8s.io/v1alpha1 API 扩展组(`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`)。
|
||||
|
||||
<!--
|
||||
An example request body:
|
||||
-->
|
||||
请求载荷例子:
|
||||
|
||||
```
|
||||
{
|
||||
"apiVersion":"imagepolicy.k8s.io/v1alpha1",
|
||||
"kind":"ImageReview",
|
||||
"spec":{
|
||||
"containers":[
|
||||
{
|
||||
"image":"myrepo/myimage:v1"
|
||||
},
|
||||
{
|
||||
"image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
|
||||
}
|
||||
],
|
||||
"annotations":[
|
||||
"mycluster.image-policy.k8s.io/ticket-1234": "break-glass"
|
||||
],
|
||||
"namespace":"mynamespace"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return:
|
||||
-->
|
||||
远程服务需要填充请求的 ImageReviewStatus 字段,并作出允许或不允许访问的应答。应答主体的 "spec" 字段会被忽略,可以省略。一个允许访问的应答如下:
|
||||
|
||||
```
|
||||
{
|
||||
"apiVersion": "imagepolicy.k8s.io/v1alpha1",
|
||||
"kind": "ImageReview",
|
||||
"status": {
|
||||
"allowed": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
To disallow access, the service would return:
|
||||
-->
|
||||
若不允许访问,服务将返回:
|
||||
|
||||
```
|
||||
{
|
||||
"apiVersion": "imagepolicy.k8s.io/v1alpha1",
|
||||
"kind": "ImageReview",
|
||||
"status": {
|
||||
"allowed": false,
|
||||
"reason": "image currently blacklisted"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
For further documentation refer to the `imagepolicy.v1alpha1` API objects and `plugin/pkg/admission/imagepolicy/admission.go`.
|
||||
-->
|
||||
更多的文档,请参阅 `imagepolicy.v1alpha1` API 对象 和 `plugin/pkg/admission/imagepolicy/admission.go`。
|
||||
|
||||
<!--
|
||||
#### Extending with Annotations
|
||||
-->
|
||||
#### 使用注解进行扩展
|
||||
|
||||
<!--
|
||||
All annotations on a Pod that match `*.image-policy.k8s.io/*` are sent to the webhook. Sending annotations allows users who are aware of the image policy backend to send extra information to it, and for different backends implementations to accept different information.
|
||||
-->
|
||||
pod 中所有匹配 `*.image-policy.k8s.io/*` 的注解都会被发送给 webhook。这使得了解镜像策略后端的用户可以向它发送额外的信息,而不同的后端实现也可以接受不同的信息。
|
||||
|
||||
<!--
|
||||
Examples of information you might put here are:
|
||||
-->
|
||||
可以放在这里的信息示例包括:
|
||||
|
||||
<!--
|
||||
* request to "break glass" to override a policy, in case of emergency.
|
||||
-->
|
||||
* 在紧急情况下,请求 "break glass" 覆盖一个策略。
|
||||
<!--
|
||||
* a ticket number from a ticket system that documents the break-glass request
|
||||
-->
|
||||
* 记录该 break-glass 请求的票证系统中的票证编号
|
||||
<!--
|
||||
* provide a hint to the policy server as to the imageID of the image being provided, to save it a lookup
|
||||
-->
|
||||
* 向策略服务器提供所给镜像的 imageID 作为提示,以便它省去一次查找
|
||||
|
||||
<!--
|
||||
In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of ImageReviewSpec.
|
||||
-->
|
||||
在任何情况下,注解都是由用户提供的,并不会被 Kubernetes 以任何方式进行验证。在将来,如果一个注解确定将被广泛使用,它可能会被提升为 ImageReviewSpec 的一个命名字段。
|
||||
|
||||
### ServiceAccount
|
||||
|
||||
<!--
|
||||
This plug-in implements automation for [serviceAccounts](/docs/user-guide/service-accounts).
|
||||
We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.
|
||||
-->
|
||||
这个插件实现了 [serviceAccounts](/docs/user-guide/service-accounts) 的自动化。
|
||||
如果您打算使用 Kubernetes 的 ServiceAccount 对象,我们强烈建议您使用这个插件。
|
||||
|
||||
### SecurityContextDeny
|
||||
|
||||
<!--
|
||||
This plug-in will deny any pod that attempts to set certain escalating [SecurityContext](/docs/user-guide/security-context) fields. This should be enabled if a cluster doesn't utilize [pod security policies](/docs/user-guide/pod-security-policy) to restrict the set of values a security context can take.
|
||||
-->
|
||||
该插件将拒绝任何试图设置某些会提升权限的 [SecurityContext](/docs/user-guide/security-context) 字段的 pod。如果集群没有使用 [pod 安全策略](/docs/user-guide/pod-security-policy) 来限制安全上下文可取的值集,就应该启用这个插件。
|
||||
|
||||
### ResourceQuota
|
||||
|
||||
<!--
|
||||
This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
|
||||
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
|
||||
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
|
||||
-->
|
||||
此插件将观察传入的请求,并确保它不违反 `Namespace` 的 `ResourceQuota` 对象中枚举的任何约束。如果您在 Kubernetes 部署中使用了 `ResourceQuota` 对象,就必须使用这个插件来强制执行配额限制。
|
||||
|
||||
<!--
|
||||
See the [resourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.
|
||||
-->
|
||||
请查看 [resourceQuota 设计文档](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) 和 [Resource Quota 例子](/docs/concepts/policy/resource-quotas/) 了解更多细节。
|
||||
|
||||
<!--
|
||||
It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
|
||||
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
|
||||
-->
|
||||
强烈建议将这个插件配置在准入控制插件序列的末尾,以免配额先被增加,而请求随后又在准入控制中被拒绝。
|
||||
|
||||
### LimitRanger
|
||||
|
||||
<!--
|
||||
This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
|
||||
enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in
|
||||
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
|
||||
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
|
||||
applies a 0.1 CPU requirement to all Pods in the `default` namespace.
|
||||
-->
|
||||
这个插件将观察传入的请求,并确保它不违反 `Namespace` 中 `LimitRange` 对象枚举的任何约束。如果您在 Kubernetes 部署中使用了 `LimitRange` 对象,则必须使用此插件来强制执行这些约束。LimitRanger 还可以用于为没有指定资源请求的 Pod 应用默认的资源请求;当前,默认的 LimitRanger 会对 `default` 命名空间中的所有 pod 应用 0.1 CPU 的资源需求。
|
||||
|
||||
<!--
|
||||
See the [limitRange design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) and the [example of Limit Range](/docs/tasks/configure-pod-container/limit-range/) for more details.
|
||||
-->
|
||||
请查看 [limitRange 设计文档](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) 和 [Limit Range 例子](/docs/tasks/configure-pod-container/limit-range/) 了解更多细节。
|
||||
|
||||
<!--
|
||||
### InitialResources (experimental)
|
||||
-->
|
||||
### InitialResources(实验性)
|
||||
|
||||
<!--
|
||||
This plug-in observes pod creation requests. If a container omits compute resource requests and limits,
|
||||
then the plug-in auto-populates a compute resource request based on historical usage of containers running the same image.
|
||||
If there is not enough data to make a decision the Request is left unchanged.
|
||||
When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.
|
||||
-->
|
||||
此插件观察 pod 创建请求。如果容器省略了计算资源的 requests 和 limits,插件就会根据运行相同镜像的容器的历史用量自动填充计算资源请求。如果没有足够的数据做出决策,则请求保持不变。当插件设置了计算资源请求时,它会在 pod 上添加注解,记录自动填充的计算资源。
|
||||
|
||||
<!--
|
||||
See the [InitialResouces proposal](https://git.k8s.io/community/contributors/design-proposals/initial-resources.md) for more details.
|
||||
-->
|
||||
请查看 [InitialResources 提案](https://git.k8s.io/community/contributors/design-proposals/initial-resources.md) 了解更多细节。
|
||||
|
||||
### NamespaceLifecycle
|
||||
|
||||
<!--
|
||||
This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it,
|
||||
and ensures that requests in a non-existent `Namespace` are rejected.
|
||||
-->
|
||||
这个插件强制要求:不能在正在终止的 `Namespace` 中创建新对象,并确保针对不存在的 `Namespace` 的请求被拒绝。
|
||||
|
||||
<!--
|
||||
A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
|
||||
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.
|
||||
-->
|
||||
删除 `Namespace` 会触发删除该命名空间中所有对象(pod、service 等)的一系列操作。为了保证这个过程的完整性,我们强烈建议启用这个插件。
|
||||
|
||||
### DefaultStorageClass
|
||||
|
||||
<!--
|
||||
This plug-in observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class
|
||||
and automatically adds a default storage class to them.
|
||||
This way, users that do not request any special storage class do no need to care about them at all and they
|
||||
will get the default one.
|
||||
-->
|
||||
这个插件观察未请求任何特定 storage class 的 `PersistentVolumeClaim` 对象的创建,并自动向其添加默认的 storage class。这样,未请求任何特殊 storage class 的用户完全无需关心它,他们将得到默认的 storage class。
|
||||
|
||||
<!--
|
||||
This plug-in does not do anything when no default storage class is configured. When more than one storage
|
||||
class is marked as default, it rejects any creation of `PersistentVolumeClaim` with an error and administrator
|
||||
must revisit `StorageClass` objects and mark only one as default.
|
||||
This plugin ignores any `PersistentVolumeClaim` updates, it acts only on creation.
|
||||
-->
|
||||
当没有配置默认 storage class 时,这个插件不执行任何操作。当有多个 storage class 被标记为默认时,它会拒绝创建 `PersistentVolumeClaim` 并返回错误,管理员必须重新检查 `StorageClass` 对象,只保留一个标记为默认。这个插件忽略所有 `PersistentVolumeClaim` 的更新操作,仅对创建操作起作用。
|
||||
|
||||
<!--
|
||||
See [persistent volume](/docs/user-guide/persistent-volumes) documentation about persistent volume claims and
|
||||
storage classes and how to mark a storage class as default.
|
||||
-->
|
||||
查看 [persistent volume](/docs/user-guide/persistent-volumes) 文档,了解 persistent volume claim 和 storage class,以及如何将一个 storage class 标记为默认。
|
||||
|
||||
### DefaultTolerationSeconds
|
||||
|
||||
<!--
|
||||
This plug-in sets the default forgiveness toleration for pods, which have no forgiveness tolerations, to tolerate
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` for 5 minutes.
|
||||
-->
|
||||
这个插件为没有设置宽恕容忍(forgiveness toleration)的 pod 设置默认值,使其可以容忍 `notready:NoExecute` 和 `unreachable:NoExecute` 这两个 taint 5 分钟。
|
||||
|
||||
### PodNodeSelector
|
||||
|
||||
<!--
|
||||
This plug-in defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.
|
||||
-->
|
||||
这个插件通过读取命名空间注解和一份全局配置,为命名空间内可以使用的节点选择器设置默认值并加以限制。
|
||||
|
||||
<!--
|
||||
#### Configuration File Format
|
||||
-->
|
||||
#### 配置文件格式
|
||||
<!--
|
||||
PodNodeSelector uses the admission config file `--admission-control-config-file` to set configuration options for the behavior of the backend.
|
||||
-->
|
||||
PodNodeSelector 插件使用准入配置文件 `--admission-control-config-file` 来设置后端行为的配置选项。
|
||||
|
||||
<!--
|
||||
Note that the configuration file format will move to a versioned file in a future release.
|
||||
-->
|
||||
请注意,配置文件格式将在未来版本中移至版本化文件。
|
||||
|
||||
<!--
|
||||
This file may be json or yaml and has the following format:
|
||||
-->
|
||||
这个文件可能是 json 或 yaml ,格式如下:
|
||||
|
||||
```yaml
|
||||
podNodeSelectorPluginConfig:
|
||||
clusterDefaultNodeSelector: <node-selectors-labels>
|
||||
namespace1: <node-selectors-labels>
|
||||
namespace2: <node-selectors-labels>
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Configuration Annotation Format
|
||||
-->
|
||||
#### 配置注解格式
|
||||
<!--
|
||||
PodNodeSelector uses the annotation key `scheduler.alpha.kubernetes.io/node-selector` to assign node selectors to namespaces.
|
||||
-->
|
||||
PodNodeSelector 插件使用键为 `scheduler.alpha.kubernetes.io/node-selector` 的注解将节点选择器分配给 namespace 。
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
annotations:
|
||||
scheduler.alpha.kubernetes.io/node-selector: <node-selectors-labels>
|
||||
name: namespace3
|
||||
```
|
||||
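<!--
For example, assuming a hypothetical label value of env=development, the
annotation could be added with kubectl:
-->
例如,假设使用一个虚构的标签值 `env=development`,可以用 kubectl 为上面的 namespace3 添加该注解:

```shell
# env=development 仅为假设的节点选择器标签
kubectl annotate namespace namespace3 "scheduler.alpha.kubernetes.io/node-selector=env=development"
```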
|
||||
### PodSecurityPolicy
|
||||
|
||||
<!--
|
||||
This plug-in acts on creation and modification of the pod and determines if it should be admitted
|
||||
based on the requested security context and the available Pod Security Policies.
|
||||
-->
|
||||
此插件作用于 pod 的创建和修改,根据请求的安全上下文和可用的 pod 安全策略,决定是否准许该 pod。
|
||||
|
||||
<!--
|
||||
For Kubernetes < 1.6.0, the API Server must enable the extensions/v1beta1/podsecuritypolicy API
|
||||
extensions group (`--runtime-config=extensions/v1beta1/podsecuritypolicy=true`).
|
||||
-->
|
||||
对于 Kubernetes < 1.6.0 的版本,API Server 必须启用 extensions/v1beta1/podsecuritypolicy API 扩展组 (`--runtime-config=extensions/v1beta1/podsecuritypolicy=true`)。
|
||||
|
||||
<!--
|
||||
See also [Pod Security Policy documentation](/docs/concepts/policy/pod-security-policy/)
|
||||
for more information.
|
||||
-->
|
||||
查看 [Pod 安全策略文档](/docs/concepts/policy/pod-security-policy/) 了解更多细节。
|
||||
|
||||
### NodeRestriction
|
||||
|
||||
<!--
|
||||
This plug-in limits the `Node` and `Pod` objects a kubelet can modify. In order to be limited by this admission plugin,
|
||||
kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:<nodeName>`.
|
||||
Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node.
|
||||
-->
|
||||
这个插件限制了 kubelet 可以修改的 `Node` 和 `Pod` 对象。要受到这个准入插件的限制,kubelet 必须使用 `system:nodes` 组中的凭证,用户名形如 `system:node:<nodeName>`。这样的 kubelet 只允许修改自己的 `Node` API 对象,并且只能修改绑定到其自身节点的 `Pod` API 对象。
|
||||
<!--
|
||||
Future versions may add additional restrictions to ensure kubelets have the minimal set of permissions required to operate correctly.
|
||||
-->
|
||||
未来的版本可能会添加额外的限制,以确保 kubelet 具有正确操作所需的最小权限集。
|
||||
|
||||
<!--
|
||||
## Is there a recommended set of plug-ins to use?
|
||||
-->
|
||||
## 是否有推荐的一组插件可供使用?
|
||||
|
||||
<!--
|
||||
Yes.
|
||||
For Kubernetes >= 1.6.0, we strongly recommend running the following set of admission control plug-ins (order matters):
|
||||
-->
|
||||
有。
|
||||
对于 Kubernetes >= 1.6.0 版本,我们强烈建议运行以下一组准入控制插件(顺序很重要):
|
||||
|
||||
```shell
|
||||
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
|
||||
```
|
||||
|
||||
<!--
|
||||
For Kubernetes >= 1.4.0, we strongly recommend running the following set of admission control plug-ins (order matters):
|
||||
-->
|
||||
对于 Kubernetes >= 1.4.0 版本,我们强烈建议运行以下一组准入控制插件(顺序很重要):
|
||||
|
||||
```shell
|
||||
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
|
||||
```
|
||||
|
||||
<!--
|
||||
For Kubernetes >= 1.2.0, we strongly recommend running the following set of admission control plug-ins (order matters):
|
||||
-->
|
||||
对于 Kubernetes >= 1.2.0 版本,我们强烈建议运行以下一组准入控制插件(顺序很重要):
|
||||
|
||||
```shell
|
||||
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
|
||||
```
|
||||
|
||||
<!--
|
||||
For Kubernetes >= 1.0.0, we strongly recommend running the following set of admission control plug-ins (order matters):
|
||||
-->
|
||||
对于 Kubernetes >= 1.0.0 版本,我们强烈建议运行以下一组准入控制插件(顺序很重要):
|
||||
|
||||
```shell
|
||||
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota
|
||||
```
|
||||
|
|
@ -0,0 +1,304 @@
|
|||
---
|
||||
reviewers:
|
||||
- erictune
|
||||
- lavalamp
|
||||
- deads2k
|
||||
- liggitt
|
||||
cn-approvers:
|
||||
- fatalc
|
||||
<!-- title: Authorization Overview -->
|
||||
title: 授权概述
|
||||
content_template: templates/concept
|
||||
weight: 60
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
<!-- Learn more about Kubernetes authorization, including details about creating
|
||||
policies using the supported authorization modules. -->
|
||||
了解有关 Kubernetes 授权的更多信息,包括使用支持的授权模块创建策略的详细信息。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
<!-- In Kubernetes, you must be authenticated (logged in) before your request can be
|
||||
authorized (granted permission to access). For information about authentication,
|
||||
see [Accessing Control Overview](/docs/reference/access-authn-authz/controlling-access/).
|
||||
|
||||
Kubernetes expects attributes that are common to REST API requests. This means
|
||||
that Kubernetes authorization works with existing organization-wide or
|
||||
cloud-provider-wide access control systems which may handle other APIs besides
|
||||
the Kubernetes API. -->
|
||||
|
||||
在 Kubernetes 中,您必须先通过身份验证(登录),然后您的请求才能被授权(获准访问)。有关身份验证的信息,
|
||||
请参阅[访问控制概述](/docs/reference/access-authn-authz/controlling-access/)。
|
||||
|
||||
Kubernetes 期望 REST API 请求所共有的属性。
|
||||
这意味着 Kubernetes 授权可以与现有的组织级或云提供商级访问控制系统协同工作,
|
||||
这些系统除了 Kubernetes API 之外还可以处理其他 API。
|
||||
|
||||
<!-- ## Determine Whether a Request is Allowed or Denied
|
||||
Kubernetes authorizes API requests using the API server. It evaluates all of the
|
||||
request attributes against all policies and allows or denies the request. All
|
||||
parts of an API request must be allowed by some policy in order to proceed. This
|
||||
means that permissions are denied by default.
|
||||
|
||||
(Although Kubernetes uses the API server, access controls and policies that
|
||||
depend on specific fields of specific kinds of objects are handled by Admission
|
||||
Controllers.)
|
||||
|
||||
When multiple authorization modules are configured, each is checked in sequence.
|
||||
If any authorizer approves or denies a request, that decision is immediately
|
||||
returned and no other authorizer is consulted. If all modules have no opinion on
|
||||
the request, then the request is denied. A deny returns an HTTP status code 403. -->
|
||||
|
||||
## 确定是允许还是拒绝请求
|
||||
Kubernetes 使用 API 服务器对 API 请求进行授权。它根据所有策略评估请求的全部属性,决定允许还是拒绝该请求。
|
||||
API 请求的每个部分都必须被某个策略放行才能继续。这意味着权限默认是被拒绝的。
|
||||
|
||||
(尽管 Kubernetes 使用 API 服务器,但是依赖于特定种类对象的特定字段的访问控制和策略由准入控制器处理。)
|
||||
|
||||
配置多个授权模块时,将按顺序检查每个模块。
|
||||
如果任何一个授权模块批准或拒绝请求,则立即返回该决定,不再咨询其他授权模块。
|
||||
如果所有模块对请求都没有意见,则该请求被拒绝。拒绝时返回 HTTP 状态码 403。
|
||||
|
||||
<!--
|
||||
## Review Your Request Attributes
|
||||
Kubernetes reviews only the following API request attributes:
|
||||
|
||||
* **user** - The `user` string provided during authentication.
|
||||
* **group** - The list of group names to which the authenticated user belongs.
|
||||
* **extra** - A map of arbitrary string keys to string values, provided by the authentication layer.
|
||||
* **API** - Indicates whether the request is for an API resource.
|
||||
* **Request path** - Path to miscellaneous non-resource endpoints like `/api` or `/healthz`.
|
||||
* **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied) below.
|
||||
* **HTTP request verb** - HTTP verbs `get`, `post`, `put`, and `delete` are used for non-resource requests.
|
||||
* **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name.
|
||||
* **Subresource** - The subresource that is being accessed (for resource requests only).
|
||||
* **Namespace** - The namespace of the object that is being accessed (for namespaced resource requests only).
|
||||
* **API group** - The API group being accessed (for resource requests only). An empty string designates the [core API group](/docs/concepts/overview/kubernetes-api/).
|
||||
-->
|
||||
|
||||
## 审查您的请求属性
|
||||
Kubernetes 仅审查以下 API 请求属性:
|
||||
|
||||
* **user** - 身份验证期间提供的`user`字符串。
|
||||
* **group** - 经过身份验证的用户所属的组名列表。
|
||||
* **extra** - 由身份验证层提供的任意字符串键到字符串值的映射。
|
||||
* **API** - 指示请求是否针对 API 资源。
|
||||
* **Request path** - 各种非资源端点的路径,如`/api`或`/healthz`。
|
||||
* **API request verb** - API 动词`get`,`list`,`create`,`update`,`patch`,`watch`,`proxy`,`redirect`,`delete`和`deletecollection`用于资源请求。要确定资源API端点的请求动词,请参阅[确定请求动词](/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied)。
|
||||
* **HTTP request verb** - HTTP 动词`get`,`post`,`put`和`delete`用于非资源请求。
|
||||
* **Resource** - 正在访问的资源的 ID 或名称(仅限资源请求) - 对于使用`get`,`update`,`patch`和`delete`动词的资源请求,您必须提供资源名称。
|
||||
* **Subresource** - 正在访问的子资源(仅限资源请求)。
|
||||
* **Namespace** - 正在访问的对象的命名空间(仅适用于命名空间级资源请求)。
|
||||
* **API group** - 正在访问的 API 组(仅限资源请求)。空字符串表示[核心API组](/docs/concepts/overview/kubernetes-api/)。
|
||||
|
||||
<!--
|
||||
## Determine the Request Verb
|
||||
To determine the request verb for a resource API endpoint, review the HTTP verb
|
||||
used and whether or not the request acts on an individual resource or a
|
||||
collection of resources:
|
||||
|
||||
HTTP verb | request verb
|
||||
----------|---------------
|
||||
POST | create
|
||||
GET, HEAD | get (for individual resources), list (for collections)
|
||||
PUT | update
|
||||
PATCH | patch
|
||||
DELETE | delete (for individual resources), deletecollection (for collections)
|
||||
|
||||
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:
|
||||
|
||||
* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) checks for authorization of the `use` verb on `podsecuritypolicies` resources in the `policy` API group.
|
||||
* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping) checks for authorization
|
||||
of the `bind` verb on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.
|
||||
* [Authentication](/docs/reference/access-authn-authz/authentication/) layer checks for authorization of the `impersonate` verb on `users`, `groups`, and `serviceaccounts` in the core API group, and the `userextras` in the `authentication.k8s.io` API group.
|
||||
-->
|
||||
|
||||
## 确定请求动词
|
||||
|
||||
要确定资源 API 端点的请求动词,请查看所使用的 HTTP 动词,以及请求作用于单个资源还是资源集合(表格后给出一个映射示例):
|
||||
|
||||
HTTP 动词 | request 动词
|
||||
----------|---------------
|
||||
POST | create
|
||||
GET, HEAD | get (单个资源), list (资源集合)
|
||||
PUT | update
|
||||
PATCH | patch
|
||||
DELETE | delete (单个资源), deletecollection (资源集合)
|
||||
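<!--
For example (the paths are illustrative), the same HTTP verb maps to different
request verbs depending on whether it targets a collection or a single resource:
-->
例如(路径仅为示意),同一个 HTTP 动词作用于资源集合还是单个资源时,会映射为不同的 request 动词:

```shell
# GET /api/v1/namespaces/dev/pods        -> list(资源集合)
# GET /api/v1/namespaces/dev/pods/nginx  -> get(单个资源)
# DELETE /api/v1/namespaces/dev/pods     -> deletecollection(资源集合)
```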
|
||||
Kubernetes 有时使用专门的动词进行授权检查,以获得额外的权限。例如:
|
||||
|
||||
* [Pod安全策略](/docs/concepts/policy/pod-security-policy/) 检查`policy` API组中`podsecuritypolicies`资源的`use`动词的授权。
|
||||
* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping) 检查`rbac.authorization.k8s.io` API 组中`roles`和`clusterroles`资源的`bind`动词的授权。
|
||||
* [认证](/docs/reference/access-authn-authz/authentication/)层检查核心 API 组中 `users`、`groups` 和 `serviceaccounts` 上 `impersonate` 动词的授权,以及 `authentication.k8s.io` API 组中的 `userextras`。
|
||||
|
||||
<!--
|
||||
## Authorization Modules
|
||||
* **Node** - A special-purpose authorizer that grants permissions to kubelets based on the pods they are scheduled to run. To learn more about using the Node authorization mode, see [Node Authorization](/docs/reference/access-authn-authz/node/).
|
||||
* **ABAC** - Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together. The policies can use any type of attributes (user attributes, resource attributes, object, environment attributes, etc). To learn more about using the ABAC mode, see [ABAC Mode](/docs/reference/access-authn-authz/abac/).
|
||||
* **RBAC** - Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, access is the ability of an individual user to perform a specific task, such as view, create, or modify a file. To learn more about using the RBAC mode, see [RBAC Mode](/docs/reference/access-authn-authz/rbac/)
|
||||
* When specified RBAC (Role-Based Access Control) uses the `rbac.authorization.k8s.io` API group to drive authorization decisions, allowing admins to dynamically configure permission policies through the Kubernetes API.
|
||||
* To enable RBAC, start the apiserver with `--authorization-mode=RBAC`.
|
||||
* **Webhook** - A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. A web application implementing WebHooks will POST a message to a URL when certain things happen. To learn more about using the Webhook mode, see [Webhook Mode](/docs/reference/access-authn-authz/webhook/).
|
||||
-->
|
||||
|
||||
## 授权模块
|
||||
* **Node** - 一个专用授权程序,根据 kubelet 被调度运行的 pod 为其授予权限。要了解有关使用 Node 授权模式的更多信息,请参阅[节点授权](/docs/reference/access-authn-authz/node/)。
|
||||
* **ABAC** - 基于属性的访问控制(ABAC) 定义了一种访问控制范例,通过使用将属性组合在一起的策略,将访问权限授予用户。策略可以使用任何类型的属性(用户属性,资源属性,对象,环境属性等)。要了解有关使用 ABAC 模式的更多信息,请参阅[ABAC 模式](/docs/reference/access-authn-authz/abac/)。
|
||||
* **RBAC** - 基于角色的访问控制(RBAC)是一种基于企业内个人用户的角色来管理对计算机或网络资源的访问的方法。在此上下文中,权限是单个用户执行特定任务的能力,例如查看,创建或修改文件。要了解有关使用 RBAC 模式的更多信息,请参阅[RBAC 模式](/docs/reference/access-authn-authz/rbac/)。
|
||||
* 当指定 RBAC(基于角色的访问控制)时,它使用 `rbac.authorization.k8s.io` API 组来驱动授权决策,允许管理员通过 Kubernetes API 动态配置权限策略。
|
||||
* 要启用 RBAC,请使用 `--authorization-mode=RBAC` 启动 apiserver。
|
||||
* **Webhook** - WebHook 是一个 HTTP 回调: 发生某些事情时调用的 HTTP POST;通过 HTTP POST 进行简单的事件通知。实现 WebHooks 的 Web 应用程序会在发生某些事情时将消息发布到URL。要了解有关使用 Webhook 模式的更多信息,请参阅[Webhook 模式](/docs/reference/access-authn-authz/webhook/)。
|
||||
|
||||
<!--
|
||||
#### Checking API Access
|
||||
|
||||
`kubectl` provides the `auth can-i` subcommand for quickly querying the API authorization layer.
|
||||
The command uses the `SelfSubjectAccessReview` API to determine if the current user can perform
|
||||
a given action, and works regardless of the authorization mode used.
|
||||
-->
|
||||
|
||||
#### 检查 API 访问
|
||||
|
||||
`kubectl`提供`auth can-i`子命令,用于快速查询 API 授权层。
|
||||
该命令使用`SelfSubjectAccessReview` API来确定当前用户是否可以执行给定操作,并且无论使用何种授权模式都可以工作。
|
||||
|
||||
```bash
|
||||
$ kubectl auth can-i create deployments --namespace dev
|
||||
yes
|
||||
$ kubectl auth can-i create deployments --namespace prod
|
||||
no
|
||||
```
|
||||
<!--
|
||||
Administrators can combine this with [user impersonation](/docs/reference/access-authn-authz/authentication/#user-impersonation)
|
||||
to determine what action other users can perform.
|
||||
-->
|
||||
管理员可以将此与[用户模拟](/docs/reference/access-authn-authz/authentication/#user-impersonation)结合使用,以确定其他用户可以执行的操作。
|
||||
|
||||
```bash
|
||||
$ kubectl auth can-i list secrets --namespace dev --as dave
|
||||
no
|
||||
```
|
||||
<!--
|
||||
`SelfSubjectAccessReview` is part of the `authorization.k8s.io` API group, which
|
||||
exposes the API server authorization to external services. Other resources in
|
||||
this group include:
|
||||
|
||||
* `SubjectAccessReview` - Access review for any user, not just the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs.
|
||||
* `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace.
|
||||
* `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions.
|
||||
|
||||
These APIs can be queried by creating normal Kubernetes resources, where the response "status"
|
||||
field of the returned object is the result of the query.
|
||||
-->
|
||||
|
||||
`SelfSubjectAccessReview`是`authorization.k8s.io` API组的一部分,它将 API 服务器授权公开给外部服务。
|
||||
该组中的其他资源包括:
|
||||
|
||||
* `SubjectAccessReview` - 针对任意用户(而不仅仅是当前用户)的访问审查。用于将授权决策委派给 API 服务器。例如,kubelet 和扩展 API 服务器使用它来确定用户对它们自己 API 的访问权限。
|
||||
* `LocalSubjectAccessReview` - 与`SubjectAccessReview`类似,但仅限于特定的命名空间。
|
||||
* `SelfSubjectRulesReview` - 返回用户在某命名空间内可执行的操作集合的审查。便于用户快速汇总自己的访问权限,也便于 UI 据此隐藏或显示操作。
|
||||
|
||||
可以通过创建普通 Kubernetes 资源来查询这些 API ,其中返回对象的响应“status”字段是查询的结果。
|
||||
|
||||
```bash
|
||||
$ kubectl create -f - -o yaml << EOF
|
||||
apiVersion: authorization.k8s.io/v1
|
||||
kind: SelfSubjectAccessReview
|
||||
spec:
|
||||
resourceAttributes:
|
||||
group: apps
|
||||
name: deployments
|
||||
verb: create
|
||||
namespace: dev
|
||||
EOF
|
||||
|
||||
apiVersion: authorization.k8s.io/v1
|
||||
kind: SelfSubjectAccessReview
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
spec:
|
||||
resourceAttributes:
|
||||
group: apps
|
||||
name: deployments
|
||||
namespace: dev
|
||||
verb: create
|
||||
status:
|
||||
allowed: true
|
||||
denied: false
|
||||
```
|
||||
|
||||
<!--
|
||||
## Using Flags for Your Authorization Module
|
||||
|
||||
You must include a flag in your policy to indicate which authorization module
|
||||
your policies include:
|
||||
|
||||
The following flags can be used:
|
||||
|
||||
* `--authorization-mode=ABAC` Attribute-Based Access Control (ABAC) mode allows you to configure policies using local files.
|
||||
* `--authorization-mode=RBAC` Role-based access control (RBAC) mode allows you to create and store policies using the Kubernetes API.
|
||||
* `--authorization-mode=Webhook` WebHook is an HTTP callback mode that allows you to manage authorization using a remote REST endpoint.
|
||||
* `--authorization-mode=Node` Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
|
||||
* `--authorization-mode=AlwaysDeny` This flag blocks all requests. Use this flag only for testing.
|
||||
* `--authorization-mode=AlwaysAllow` This flag allows all requests. Use this flag only if you do not require authorization for your API requests.
|
||||
|
||||
You can choose more than one authorization module. Modules are checked in order
|
||||
so an earlier module has higher priority to allow or deny a request.
|
||||
-->
|
||||
|
||||
## 为您的授权模块使用标志
|
||||
|
||||
您必须在策略中包含一个标志,以指明您的策略包含哪个授权模块:
|
||||
|
||||
可以使用以下标志:
|
||||
|
||||
* `--authorization-mode=ABAC` 基于属性的访问控制(ABAC)模式允许您使用本地文件配置策略。
|
||||
* `--authorization-mode=RBAC` 基于角色的访问控制(RBAC)模式允许您使用 Kubernetes API 创建和存储策略。
|
||||
* `--authorization-mode=Webhook` WebHook 是一种 HTTP 回调模式,允许您使用远程REST端点管理授权。
|
||||
* `--authorization-mode=Node` 节点授权是一种特殊用途的授权模式,专门授权由 kubelet 发出的API请求。
|
||||
* `--authorization-mode=AlwaysDeny` 该标志阻止所有请求。仅将此标志用于测试。
|
||||
* `--authorization-mode=AlwaysAllow` 此标志允许所有请求。仅在您不需要 API 请求的授权时才使用此标志。
|
||||
|
||||
您可以选择多个授权模块。模块按顺序检查,排在前面的模块具有更高的优先级来允许或拒绝请求。下面给出一个组合示意。
|
||||
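<!--
A minimal sketch of combining several modules (all other apiserver flags are omitted):
-->
组合多个授权模块的最小示意(apiserver 的其余启动参数均已省略):

```shell
# 按 Node、RBAC 的顺序依次检查;先做出允许/拒绝决定的模块生效
kube-apiserver --authorization-mode=Node,RBAC
```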
|
||||
<!--
|
||||
## Privilege escalation via pod creation
|
||||
|
||||
Users who have the ability to create pods in a namespace can potentially
|
||||
escalate their privileges within that namespace. They can create pods that access
|
||||
secrets the user cannot themselves read, or that run under a service account
|
||||
with different/greater permissions.
|
||||
-->
|
||||
## 通过创建 pod 进行权限提升
|
||||
|
||||
能够在命名空间中创建 pod 的用户有可能提升自己在该命名空间内的权限。
|
||||
他们可以创建这样的 pod:读取该用户自己无法读取的 secret,或者在具有不同/更高权限的服务帐户下运行。
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
**Caution:** System administrators, use care when granting access to pod
|
||||
creation. A user granted permission to create pods (or controllers that create
|
||||
pods) in the namespace can: read all secrets in the namespace; read all config
|
||||
maps in the namespace; and impersonate any service account in the namespace and
|
||||
take any action the account could take. This applies regardless of authorization
|
||||
mode.
|
||||
-->
|
||||
**注意:** 系统管理员在授予创建 pod 的权限时要小心。
|
||||
被授予在命名空间中创建 pod(或创建 pod 的控制器)权限的用户可以:
|
||||
读取该命名空间中的所有 secret;读取该命名空间中的所有 config map;
|
||||
以及冒充该命名空间中的任何服务帐户,执行该帐户可以执行的任何操作。
|
||||
无论采用何种授权模式,这都适用。
|
||||
{{< /caution >}}
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
<!--
|
||||
* To learn more about Authentication, see **Authentication** in [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/).
|
||||
* To learn more about Admission Control, see [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
|
||||
-->
|
||||
* 要了解有关身份验证的更多信息,请参阅[控制对 Kubernetes API 的访问](/docs/reference/access-authn-authz/controlling-access/)中的**身份验证**部分。
|
||||
* 要了解有关准入控制的更多信息,请参阅[使用准入控制器](/docs/reference/access-authn-authz/admission-controllers/)。
|
||||
{{% /capture %}}
|
||||
|
|
@ -0,0 +1,491 @@
|
|||
---
|
||||
title: kube-proxy
|
||||
notitle: true
|
||||
---
|
||||
## kube-proxy
|
||||
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
|
||||
|
||||
The Kubernetes network proxy runs on each node. This
|
||||
reflects services as defined in the Kubernetes API on each node and can do simple
|
||||
TCP and UDP stream forwarding or round robin TCP and UDP forwarding across a set of backends.
|
||||
Service cluster IPs and ports are currently found through Docker-links-compatible
|
||||
environment variables specifying ports opened by the service proxy. There is an optional
|
||||
addon that provides cluster DNS for these cluster IPs. The user must create a service
|
||||
with the apiserver API to configure the proxy.
|
||||
-->
|
||||
### 概要
|
||||
|
||||
Kubernetes 网络代理在每个节点上运行。它反映每个节点上 Kubernetes API 中定义的服务,
|
||||
可以执行简单的 TCP 和 UDP 流转发,或在一组后端之间进行轮询式的 TCP 和 UDP 转发。
|
||||
目前,服务的集群 IP 和端口是通过 Docker-links 兼容的环境变量(这些变量指定了服务代理打开的端口)找到的。
|
||||
有一个可选的插件可以为这些集群 IP 提供集群 DNS。用户必须使用 apiserver API 创建服务来配置代理。
|
||||
|
||||
```
|
||||
kube-proxy [flags]
|
||||
```
|
||||
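<!--
A minimal sketch of an invocation (the file path and CIDR below are placeholder
values; both flags are described in the option table that follows):
-->
一次调用的最小示意(下面的文件路径和 CIDR 仅为占位值,两个参数均见下文选项表的说明):

```shell
# 通过配置文件启动,或者直接用命令行参数指定集群中 pod 的 CIDR 范围
kube-proxy --config=/var/lib/kube-proxy/config.yaml
kube-proxy --cluster-cidr=10.244.0.0/16
```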
|
||||
<!--
|
||||
### Options
|
||||
-->
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
<col span="1" style="width: 10px;" />
|
||||
<col span="1" />
|
||||
</colgroup>
|
||||
<tbody>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--azure-container-registry-config string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the file containing Azure container registry configuration information.</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">包含 Azure 容器仓库配置信息的文件的路径。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--bind-address 0.0.0.0 Default: 0.0.0.0</td>
|
||||
-->
|
||||
<td colspan="2">--bind-address 0.0.0.0 默认: 0.0.0.0</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">The IP address for the proxy server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces)</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">代理服务器提供服务所监听的 IP 地址(设置为 0.0.0.0 表示所有 IPv4 接口,设置为 :: 表示所有 IPv6 接口)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--cleanup</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true cleanup iptables and ipvs rules and exit.</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果为 true,清理 iptables 和 ipvs 规则并退出。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--cleanup-ipvs Default: true</td>
|
||||
-->
|
||||
<td colspan="2">--cleanup-ipvs 默认: true</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true make kube-proxy cleanup ipvs rules before running. Default is true</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果为 true,则使 kube-proxy 在运行之前清理 ipvs 规则。 默认为 true</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--cluster-cidr string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">集群中 pod 的 CIDR 范围。配置后,从该范围之外发送到服务集群 IP 的流量将被伪装(masquerade),从 pod 发送到外部 LoadBalancer IP 的流量将被定向到相应的集群 IP</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--config string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the configuration file.</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">配置文件的路径。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--config-sync-period duration Default: 15m0s</td>
|
||||
-->
|
||||
<td colspan="2">--config-sync-period duration 默认: 15m0s</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">How often configuration from the apiserver is refreshed. Must be greater than 0.</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">来自 apiserver 的配置的刷新频率。必须大于 0。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--conntrack-max-per-core int32 Default: 32768</td>
|
||||
-->
|
||||
<td colspan="2">--conntrack-max-per-core int32 默认: 32768</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">每个 CPU 核跟踪的最大 NAT 连接数(0 表示保留原样限制并忽略 conntrack-min)。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--conntrack-min int32 Default: 131072</td>
|
||||
-->
|
||||
<td colspan="2">--conntrack-min int32 默认: 131072</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">要分配的最小 conntrack 条目数,不管 conntrack-max-per-core(设置 conntrack-max-per-core = 0 保留原样限制)。</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--conntrack-tcp-timeout-close-wait duration Default: 1h0m0s</td>
|
||||
-->
|
||||
<td colspan="2">--conntrack-tcp-timeout-close-wait duration 默认: 1h0m0s</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">NAT timeout for TCP connections in the CLOSE_WAIT state</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">处于 CLOSE_WAIT 状态的 TCP 连接的 NAT 超时</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--conntrack-tcp-timeout-established duration Default: 24h0m0s</td>
|
||||
-->
|
||||
<td colspan="2">--conntrack-tcp-timeout-established duration 默认: 24h0m0s</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Idle timeout for established TCP connections (0 to leave as-is)</td>
|
||||
-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">已建立的 TCP 连接的空闲超时(0 保持原样)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--feature-gates mapStringBool</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!--
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:<br/>APIListChunking=true|false (BETA - default=true)<br/>APIResponseCompression=true|false (ALPHA - default=false)<br/>AdvancedAuditing=true|false (BETA - default=true)<br/>AllAlpha=true|false (ALPHA - default=false)<br/>AppArmor=true|false (BETA - default=true)<br/>AttachVolumeLimit=true|false (ALPHA - default=false)<br/>BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)<br/>BlockVolume=true|false (ALPHA - default=false)<br/>CPUManager=true|false (BETA - default=true)<br/>CRIContainerLogRotation=true|false (BETA - default=true)<br/>CSIBlockVolume=true|false (ALPHA - default=false)<br/>CSIPersistentVolume=true|false (BETA - default=true)<br/>CustomPodDNS=true|false (BETA - default=true)<br/>CustomResourceSubresources=true|false (BETA - default=true)<br/>CustomResourceValidation=true|false (BETA - default=true)<br/>DebugContainers=true|false (ALPHA - default=false)<br/>DevicePlugins=true|false (BETA - default=true)<br/>DynamicKubeletConfig=true|false (BETA - default=true)<br/>DynamicProvisioningScheduling=true|false (ALPHA - default=false)<br/>EnableEquivalenceClassCache=true|false (ALPHA - default=false)<br/>ExpandInUsePersistentVolumes=true|false (ALPHA - default=false)<br/>ExpandPersistentVolumes=true|false (BETA - default=true)<br/>ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)<br/>ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>GCERegionalPersistentDisk=true|false (BETA - default=true)<br/>HugePages=true|false (BETA - default=true)<br/>HyperVContainer=true|false (ALPHA - default=false)<br/>Initializers=true|false (ALPHA - default=false)<br/>KubeletPluginsWatcher=true|false (ALPHA - default=false)<br/>LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>MountContainers=true|false (ALPHA - default=false)<br/>MountPropagation=true|false (BETA - default=true)<br/>PersistentLocalVolumes=true|false (BETA - default=true)<br/>PodPriority=true|false (BETA - default=true)<br/>PodReadinessGates=true|false (BETA - default=false)<br/>PodShareProcessNamespace=true|false (ALPHA - default=false)<br/>QOSReserved=true|false (ALPHA - default=false)<br/>ReadOnlyAPIDataVolumes=true|false (DEPRECATED - default=true)<br/>ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)<br/>ResourceQuotaScopeSelectors=true|false (ALPHA - default=false)<br/>RotateKubeletClientCertificate=true|false (BETA - default=true)<br/>RotateKubeletServerCertificate=true|false (ALPHA - default=false)<br/>RunAsGroup=true|false (ALPHA - default=false)<br/>ScheduleDaemonSetPods=true|false (ALPHA - default=false)<br/>ServiceNodeExclusion=true|false (ALPHA - default=false)<br/>ServiceProxyAllowExternalIPs=true|false (DEPRECATED - default=false)<br/>StorageObjectInUseProtection=true|false (default=true)<br/>StreamingProxyRedirects=true|false (BETA - default=true)<br/>SupportIPVSProxyMode=true|false (default=true)<br/>SupportPodPidsLimit=true|false (ALPHA - default=false)<br/>Sysctls=true|false (BETA - default=true)<br/>TaintBasedEvictions=true|false (ALPHA - default=false)<br/>TaintNodesByCondition=true|false (ALPHA - default=false)<br/>TokenRequest=true|false (ALPHA - default=false)<br/>TokenRequestProjection=true|false (ALPHA - default=false)<br/>VolumeScheduling=true|false (BETA - default=true)<br/>VolumeSubpath=true|false (default=true)<br/>VolumeSubpathEnvExpansion=true|false (ALPHA - 
default=false)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">一组 key=value 对,用于描述 alpha/experimental 特征的特征门。选项包括:<br/>APIListChunking=true|false (BETA - 默认=true)<br/>APIResponseCompression=true|false (ALPHA - 默认=false)<br/>AdvancedAuditing=true|false (BETA - 默认=true)<br/>AllAlpha=true|false (ALPHA - 默认=false)<br/>AppArmor=true|false (BETA - 默认=true)<br/>AttachVolumeLimit=true|false (ALPHA - 默认=false)<br/>BalanceAttachedNodeVolumes=true|false (ALPHA - 默认=false)<br/>BlockVolume=true|false (ALPHA - 默认=false)<br/>CPUManager=true|false (BETA - 默认=true)<br/>CRIContainerLogRotation=true|false (BETA - 默认=true)<br/>CSIBlockVolume=true|false (ALPHA - 默认=false)<br/>CSIPersistentVolume=true|false (BETA - 默认=true)<br/>CustomPodDNS=true|false (BETA - 默认=true)<br/>CustomResourceSubresources=true|false (BETA - 默认=true)<br/>CustomResourceValidation=true|false (BETA - 默认=true)<br/>DebugContainers=true|false (ALPHA - 默认=false)<br/>DevicePlugins=true|false (BETA - 默认=true)<br/>DynamicKubeletConfig=true|false (BETA - 默认=true)<br/>DynamicProvisioningScheduling=true|false (ALPHA - 默认=false)<br/>EnableEquivalenceClassCache=true|false (ALPHA - 默认=false)<br/>ExpandInUsePersistentVolumes=true|false (ALPHA - 默认=false)<br/>ExpandPersistentVolumes=true|false (BETA - 默认=true)<br/>ExperimentalCriticalPodAnnotation=true|false (ALPHA - 默认=false)<br/>ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认=false)<br/>GCERegionalPersistentDisk=true|false (BETA - 默认=true)<br/>HugePages=true|false (BETA - 默认=true)<br/>HyperVContainer=true|false (ALPHA - 默认=false)<br/>Initializers=true|false (ALPHA - 默认=false)<br/>KubeletPluginsWatcher=true|false (ALPHA - 默认=false)<br/>LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>MountContainers=true|false (ALPHA - 默认=false)<br/>MountPropagation=true|false (BETA - 默认=true)<br/>PersistentLocalVolumes=true|false (BETA - 默认=true)<br/>PodPriority=true|false (BETA - 默认=true)<br/>PodReadinessGates=true|false (BETA - 默认=false)<br/>PodShareProcessNamespace=true|false (ALPHA - 默认=false)<br/>QOSReserved=true|false (ALPHA - 默认=false)<br/>ReadOnlyAPIDataVolumes=true|false (弃用 - 默认=true)<br/>ResourceLimitsPriorityFunction=true|false (ALPHA - 默认=false)<br/>ResourceQuotaScopeSelectors=true|false (ALPHA - 默认=false)<br/>RotateKubeletClientCertificate=true|false (BETA - 默认=true)<br/>RotateKubeletServerCertificate=true|false (ALPHA - 默认=false)<br/>RunAsGroup=true|false (ALPHA - 默认=false)<br/>ScheduleDaemonSetPods=true|false (ALPHA - 默认=false)<br/>ServiceNodeExclusion=true|false (ALPHA - 默认=false)<br/>ServiceProxyAllowExternalIPs=true|false (弃用 - 默认=false)<br/>StorageObjectInUseProtection=true|false (default=true)<br/>StreamingProxyRedirects=true|false (BETA - 默认=true)<br/>SupportIPVSProxyMode=true|false (默认=true)<br/>SupportPodPidsLimit=true|false (ALPHA - 默认=false)<br/>Sysctls=true|false (BETA - 默认=true)<br/>TaintBasedEvictions=true|false (ALPHA - 默认=false)<br/>TaintNodesByCondition=true|false (ALPHA - 默认=false)<br/>TokenRequest=true|false (ALPHA - 默认=false)<br/>TokenRequestProjection=true|false (ALPHA - 默认=false)<br/>VolumeScheduling=true|false (BETA - 默认=true)<br/>VolumeSubpath=true|false (默认=true)<br/>VolumeSubpathEnvExpansion=true|false (ALPHA - 默认=false)</td>
</tr>

<tr>
<!--
<td colspan="2">--healthz-bind-address 0.0.0.0 Default: 0.0.0.0:10256</td>
-->
<td colspan="2">--healthz-bind-address 0.0.0.0 默认: 0.0.0.0:10256</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The IP address and port for the health check server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">健康检查服务器监听的 IP 地址和端口(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)</td>
</tr>

<tr>
<!--
<td colspan="2">--healthz-port int32 Default: 10256</td>
-->
<td colspan="2">--healthz-port int32 默认: 10256</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The port to bind the health check server. Use 0 to disable.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">绑定健康检查服务的端口。使用 0 禁用。</td>
</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">help for kube-proxy</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">kube-proxy 帮助信息</td>
</tr>

<tr>
<td colspan="2">--hostname-override string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If non-empty, will use this string as identification instead of the actual hostname.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果非空,将使用此字符串作为标识而不是实际的主机名。</td>
</tr>

<tr>
<!--
<td colspan="2">--iptables-masquerade-bit int32 Default: 14</td>
-->
<td colspan="2">--iptables-masquerade-bit int32 默认: 14</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果使用纯 iptables 代理,指定 fwmark 空间中用于标记需要 SNAT 的数据包的位。必须在 [0, 31] 范围内。</td>
</tr>

<tr>
<td colspan="2">--iptables-min-sync-period duration</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">当端点和服务发生变化时,iptables 规则刷新的最小间隔(例如 '5s'、'1m'、'2h22m')。</td>
</tr>

<tr>
<!--
<td colspan="2">--iptables-sync-period duration Default: 30s</td>
-->
<td colspan="2">--iptables-sync-period duration 默认: 30s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">iptables 规则刷新的最大时间间隔(例如 '5s'、'1m'、'2h22m')。必须大于 0。</td>
</tr>

<tr>
<td colspan="2">--ipvs-exclude-cidrs stringSlice</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">以逗号分隔的 CIDR 列表,ipvs proxier 在清理 IPVS 规则时不应触及这些范围。</td>
</tr>

<tr>
<td colspan="2">--ipvs-min-sync-period duration</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">当端点和服务发生变化时,ipvs 规则刷新的最小间隔(例如 '5s'、'1m'、'2h22m')。</td>
</tr>

<tr>
<td colspan="2">--ipvs-scheduler string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The ipvs scheduler type when proxy mode is ipvs</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">代理模式为 ipvs 时的 ipvs 调度器类型</td>
</tr>

<tr>
<!--
<td colspan="2">--ipvs-sync-period duration Default: 30s</td>
-->
<td colspan="2">--ipvs-sync-period duration 默认: 30s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">ipvs 规则刷新的最大时间间隔(例如 '5s'、'1m'、'2h22m')。必须大于 0。</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-burst int32 Default: 10</td>
-->
<td colspan="2">--kube-api-burst int32 默认: 10</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Burst to use while talking with kubernetes apiserver</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">与 kubernetes apiserver 交互时使用的突发请求数上限</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-content-type string Default: "application/vnd.kubernetes.protobuf"</td>
-->
<td colspan="2">--kube-api-content-type string 默认: "application/vnd.kubernetes.protobuf"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Content type of requests sent to apiserver.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">发送到 apiserver 的请求的内容类型。</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-qps float32 Default: 5</td>
-->
<td colspan="2">--kube-api-qps float32 默认: 5</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">QPS to use while talking with kubernetes apiserver</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">与 kubernetes apiserver 交互时使用的 QPS</td>
</tr>

<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to kubeconfig file with authorization information (the master location is set by the master flag).</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">包含授权信息的 kubeconfig 文件的路径(master 位置由 master 标志设置)。</td>
</tr>

<tr>
<!--
<td colspan="2">--log-flush-frequency duration Default: 5s</td>
-->
<td colspan="2">--log-flush-frequency duration 默认: 5s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of seconds between log flushes</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">两次日志刷新之间的最大秒数</td>
</tr>

<tr>
<td colspan="2">--masquerade-all</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this not commonly needed)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果使用纯 iptables 代理,对所有通过服务集群 IP 发送的流量做 SNAT(通常不需要)</td>
</tr>

<tr>
<td colspan="2">--master string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The address of the Kubernetes API server (overrides any value in kubeconfig)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值)</td>
</tr>

<tr>
<!--
<td colspan="2">--metrics-bind-address 0.0.0.0 Default: 127.0.0.1:10249</td>
-->
<td colspan="2">--metrics-bind-address 0.0.0.0 默认: 127.0.0.1:10249</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The IP address and port for the metrics server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">度量指标服务器监听的 IP 地址和端口(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)</td>
</tr>

<tr>
<td colspan="2">--nodeport-addresses stringSlice</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">一个字符串切片,指定用于 NodePorts 的地址。值可以是有效的 IP 块(例如 1.2.3.0/24、1.2.3.4/32)。默认的空字符串切片([])表示使用所有本地地址。</td>
</tr>

<tr>
<!--
<td colspan="2">--oom-score-adj int32 Default: -999</td>
-->
<td colspan="2">--oom-score-adj int32 默认: -999</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">kube-proxy 进程的 oom-score-adj 值。值必须在 [-1000, 1000] 范围内</td>
</tr>

<tr>
<td colspan="2">--profiling</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true enables profiling via web interface on /debug/pprof handler.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果为 true,则通过 Web 接口 /debug/pprof 启用性能分析。</td>
</tr>

<tr>
<td colspan="2">--proxy-mode ProxyMode</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' (experimental). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">使用哪种代理模式:'userspace'(较旧)、'iptables'(较快)或 'ipvs'(实验性)。如果为空,使用最佳可用代理(当前为 iptables)。无论以何种方式选择了 iptables 代理,只要系统的内核或 iptables 版本不满足要求,都将回退到 userspace 代理。</td>
</tr>

<tr>
<td colspan="2">--proxy-port-range port-range</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">可用于代理服务流量的主机端口范围(beginPort-endPort、单个端口或 beginPort+offset,含边界)。如果未指定、为 0 或 0-0,则随机选择端口。</td>
</tr>

<tr>
<!--
<td colspan="2">--udp-timeout duration Default: 250ms</td>
-->
<td colspan="2">--udp-timeout duration 默认: 250ms</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">空闲 UDP 连接保持打开的时长(例如 '250ms'、'2s')。必须大于 0。仅适用于 proxy-mode=userspace</td>
</tr>

<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Print version information and quit</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">打印版本信息并退出</td>
</tr>

<tr>
<td colspan="2">--write-config-to string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If set, write the default configuration values to this file and exit.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果设置,将默认配置值写入此文件并退出。</td>
</tr>

</tbody>
</table>
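
下面给出一个组合使用上述标志的示例,仅作演示:其中的 CIDR、kubeconfig 路径等取值均为假设值,请按实际环境调整。

```
# 示例(取值为假设值):以 ipvs 模式运行 kube-proxy,
# 并指定集群中 Pod 的 CIDR 与 kubeconfig 路径
kube-proxy \
  --proxy-mode=ipvs \
  --ipvs-scheduler=rr \
  --cluster-cidr=10.244.0.0/16 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --healthz-bind-address=0.0.0.0:10256
```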
@ -0,0 +1,380 @@
---
title: kube-scheduler
notitle: true
---
## kube-scheduler

<!--
### Synopsis

The Kubernetes scheduler is a policy-rich, topology-aware,
workload-specific function that significantly impacts availability, performance,
and capacity. The scheduler needs to take into account individual and collective
resource requirements, quality of service requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality, inter-workload
interference, deadlines, and so on. Workload-specific requirements will be exposed
through the API as necessary.
-->
### 概要

Kubernetes 调度器是一个策略丰富、拓扑感知、特定于工作负载的功能组件,对可用性、性能和容量有显著影响。
调度器需要考虑单个及整体的资源需求、服务质量要求、硬件/软件/策略约束、亲和性与反亲和性规范、数据局部性、
负载间干扰、完成期限等因素。特定于工作负载的需求会在必要时通过 API 暴露。

```
kube-scheduler [flags]
```

<!--
### Options
-->
### 选项

<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>

<tr>
<!--
<td colspan="2">--address string Default: "0.0.0.0"</td>
-->
<td colspan="2">--address string 默认: "0.0.0.0"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 监听 --port 端口的 IP 地址(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)。请参阅 --bind-address。</td>
</tr>

<tr>
<td colspan="2">--algorithm-provider string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: the scheduling algorithm provider to use, one of: ClusterAutoscalerProvider | DefaultProvider</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 要使用的调度算法提供者,可选值:ClusterAutoscalerProvider | DefaultProvider</td>
</tr>

<tr>
<td colspan="2">--azure-container-registry-config string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the file containing Azure container registry configuration information.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">包含 Azure 容器仓库配置信息的文件的路径。</td>
</tr>

<tr>
<td colspan="2">--config string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the configuration file. Flags override values in this file.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">配置文件的路径。标志会覆盖此文件中的值。</td>
</tr>

<tr>
<td colspan="2">--contention-profiling</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: enable lock contention profiling, if profiling is enabled</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 如果启用了性能分析,则启用锁竞争分析</td>
</tr>

<tr>
<td colspan="2">--feature-gates mapStringBool</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:<br/>APIListChunking=true|false (BETA - default=true)<br/>APIResponseCompression=true|false (ALPHA - default=false)<br/>AdvancedAuditing=true|false (BETA - default=true)<br/>AllAlpha=true|false (ALPHA - default=false)<br/>AppArmor=true|false (BETA - default=true)<br/>AttachVolumeLimit=true|false (ALPHA - default=false)<br/>BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)<br/>BlockVolume=true|false (ALPHA - default=false)<br/>CPUManager=true|false (BETA - default=true)<br/>CRIContainerLogRotation=true|false (BETA - default=true)<br/>CSIBlockVolume=true|false (ALPHA - default=false)<br/>CSIPersistentVolume=true|false (BETA - default=true)<br/>CustomPodDNS=true|false (BETA - default=true)<br/>CustomResourceSubresources=true|false (BETA - default=true)<br/>CustomResourceValidation=true|false (BETA - default=true)<br/>DebugContainers=true|false (ALPHA - default=false)<br/>DevicePlugins=true|false (BETA - default=true)<br/>DynamicKubeletConfig=true|false (BETA - default=true)<br/>DynamicProvisioningScheduling=true|false (ALPHA - default=false)<br/>EnableEquivalenceClassCache=true|false (ALPHA - default=false)<br/>ExpandInUsePersistentVolumes=true|false (ALPHA - default=false)<br/>ExpandPersistentVolumes=true|false (BETA - default=true)<br/>ExperimentalCriticalPodAnnotation=true|false (ALPHA - default=false)<br/>ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>GCERegionalPersistentDisk=true|false (BETA - default=true)<br/>HugePages=true|false (BETA - default=true)<br/>HyperVContainer=true|false (ALPHA - default=false)<br/>Initializers=true|false (ALPHA - default=false)<br/>KubeletPluginsWatcher=true|false (ALPHA - default=false)<br/>LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>MountContainers=true|false (ALPHA - default=false)<br/>MountPropagation=true|false (BETA - default=true)<br/>PersistentLocalVolumes=true|false (BETA - default=true)<br/>PodPriority=true|false (BETA - default=true)<br/>PodReadinessGates=true|false (BETA - default=false)<br/>PodShareProcessNamespace=true|false (ALPHA - default=false)<br/>QOSReserved=true|false (ALPHA - default=false)<br/>ReadOnlyAPIDataVolumes=true|false (DEPRECATED - default=true)<br/>ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)<br/>ResourceQuotaScopeSelectors=true|false (ALPHA - default=false)<br/>RotateKubeletClientCertificate=true|false (BETA - default=true)<br/>RotateKubeletServerCertificate=true|false (ALPHA - default=false)<br/>RunAsGroup=true|false (ALPHA - default=false)<br/>ScheduleDaemonSetPods=true|false (ALPHA - default=false)<br/>ServiceNodeExclusion=true|false (ALPHA - default=false)<br/>ServiceProxyAllowExternalIPs=true|false (DEPRECATED - default=false)<br/>StorageObjectInUseProtection=true|false (default=true)<br/>StreamingProxyRedirects=true|false (BETA - default=true)<br/>SupportIPVSProxyMode=true|false (default=true)<br/>SupportPodPidsLimit=true|false (ALPHA - default=false)<br/>Sysctls=true|false (BETA - default=true)<br/>TaintBasedEvictions=true|false (ALPHA - default=false)<br/>TaintNodesByCondition=true|false (ALPHA - default=false)<br/>TokenRequest=true|false (ALPHA - default=false)<br/>TokenRequestProjection=true|false (ALPHA - default=false)<br/>VolumeScheduling=true|false (BETA - default=true)<br/>VolumeSubpath=true|false (default=true)<br/>VolumeSubpathEnvExpansion=true|false (ALPHA - 
default=false)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">一组 key=value 对,用于描述 alpha/experimental 特征的特征门。选项包括:<br/>APIListChunking=true|false (BETA - 默认=true)<br/>APIResponseCompression=true|false (ALPHA - 默认=false)<br/>AdvancedAuditing=true|false (BETA - 默认=true)<br/>AllAlpha=true|false (ALPHA - 默认=false)<br/>AppArmor=true|false (BETA - 默认=true)<br/>AttachVolumeLimit=true|false (ALPHA - 默认=false)<br/>BalanceAttachedNodeVolumes=true|false (ALPHA - 默认=false)<br/>BlockVolume=true|false (ALPHA - 默认=false)<br/>CPUManager=true|false (BETA - 默认=true)<br/>CRIContainerLogRotation=true|false (BETA - 默认=true)<br/>CSIBlockVolume=true|false (ALPHA - 默认=false)<br/>CSIPersistentVolume=true|false (BETA - 默认=true)<br/>CustomPodDNS=true|false (BETA - 默认=true)<br/>CustomResourceSubresources=true|false (BETA - 默认=true)<br/>CustomResourceValidation=true|false (BETA - 默认=true)<br/>DebugContainers=true|false (ALPHA - 默认=false)<br/>DevicePlugins=true|false (BETA - 默认=true)<br/>DynamicKubeletConfig=true|false (BETA - 默认=true)<br/>DynamicProvisioningScheduling=true|false (ALPHA - 默认=false)<br/>EnableEquivalenceClassCache=true|false (ALPHA - 默认=false)<br/>ExpandInUsePersistentVolumes=true|false (ALPHA - 默认=false)<br/>ExpandPersistentVolumes=true|false (BETA - 默认=true)<br/>ExperimentalCriticalPodAnnotation=true|false (ALPHA - 默认=false)<br/>ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认=false)<br/>GCERegionalPersistentDisk=true|false (BETA - 默认=true)<br/>HugePages=true|false (BETA - 默认=true)<br/>HyperVContainer=true|false (ALPHA - 默认=false)<br/>Initializers=true|false (ALPHA - 默认=false)<br/>KubeletPluginsWatcher=true|false (ALPHA - 默认=false)<br/>LocalStorageCapacityIsolation=true|false (BETA - 默认=true)<br/>MountContainers=true|false (ALPHA - 默认=false)<br/>MountPropagation=true|false (BETA - 默认=true)<br/>PersistentLocalVolumes=true|false (BETA - 默认=true)<br/>PodPriority=true|false (BETA - 默认=true)<br/>PodReadinessGates=true|false (BETA - 默认=false)<br/>PodShareProcessNamespace=true|false (ALPHA - 默认=false)<br/>QOSReserved=true|false (ALPHA - 默认=false)<br/>ReadOnlyAPIDataVolumes=true|false (弃用 - 默认=true)<br/>ResourceLimitsPriorityFunction=true|false (ALPHA - 默认=false)<br/>ResourceQuotaScopeSelectors=true|false (ALPHA - 默认=false)<br/>RotateKubeletClientCertificate=true|false (BETA - 默认=true)<br/>RotateKubeletServerCertificate=true|false (ALPHA - 默认=false)<br/>RunAsGroup=true|false (ALPHA - 默认=false)<br/>ScheduleDaemonSetPods=true|false (ALPHA - 默认=false)<br/>ServiceNodeExclusion=true|false (ALPHA - 默认=false)<br/>ServiceProxyAllowExternalIPs=true|false (弃用 - 默认=false)<br/>StorageObjectInUseProtection=true|false (默认=true)<br/>StreamingProxyRedirects=true|false (BETA - 默认=true)<br/>SupportIPVSProxyMode=true|false (默认=true)<br/>SupportPodPidsLimit=true|false (ALPHA - 默认=false)<br/>Sysctls=true|false (BETA - 默认=true)<br/>TaintBasedEvictions=true|false (ALPHA - 默认=false)<br/>TaintNodesByCondition=true|false (ALPHA - 默认=false)<br/>TokenRequest=true|false (ALPHA - 默认=false)<br/>TokenRequestProjection=true|false (ALPHA - 默认=false)<br/>VolumeScheduling=true|false (BETA - 默认=true)<br/>VolumeSubpath=true|false (默认=true)<br/>VolumeSubpathEnvExpansion=true|false (ALPHA - 默认=false)</td>
</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">help for kube-scheduler</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">kube-scheduler 帮助信息</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-burst int32 Default: 100</td>
-->
<td colspan="2">--kube-api-burst int32 默认: 100</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: burst to use while talking with kubernetes apiserver</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 与 kubernetes apiserver 交互时使用的突发请求数上限</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-content-type string Default: "application/vnd.kubernetes.protobuf"</td>
-->
<td colspan="2">--kube-api-content-type string 默认: "application/vnd.kubernetes.protobuf"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: content type of requests sent to apiserver.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 发送到 apiserver 的请求的内容类型。</td>
</tr>

<tr>
<!--
<td colspan="2">--kube-api-qps float32 Default: 50</td>
-->
<td colspan="2">--kube-api-qps float32 默认: 50</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: QPS to use while talking with kubernetes apiserver</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 与 kubernetes apiserver 交互时使用的 QPS</td>
</tr>

<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: path to kubeconfig file with authorization and master location information.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 包含授权和 master 位置信息的 kubeconfig 文件的路径。</td>
</tr>

<tr>
<!--
<td colspan="2">--leader-elect Default: true</td>
-->
<td colspan="2">--leader-elect 默认: true</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">在执行主循环之前,启动 leader 选举客户端并获得领导权。在为实现高可用而运行多副本组件时启用此选项。</td>
</tr>

<tr>
<!--
<td colspan="2">--leader-elect-lease-duration duration Default: 15s</td>
-->
<td colspan="2">--leader-elect-lease-duration duration 默认: 15s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">非 leader 候选者在观察到领导权续约后,再次尝试获取已被占用但未续约的 leader 席位之前所等待的时长。这实际上是 leader 在被另一个候选者取代之前可以停止运行的最长时间。仅在启用 leader 选举时适用。</td>
</tr>

<tr>
<!--
<td colspan="2">--leader-elect-renew-deadline duration Default: 10s</td>
-->
<td colspan="2">--leader-elect-renew-deadline duration 默认: 10s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">现任 leader 在停止领导之前尝试续约 leader 席位的时间间隔。必须小于或等于租约期限。仅在启用 leader 选举时适用。</td>
</tr>

<tr>
<!--
<td colspan="2">--leader-elect-resource-lock endpoints Default: "endpoints"</td>
-->
<td colspan="2">--leader-elect-resource-lock endpoints 默认: "endpoints"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The type of resource object that is used for locking during leader election. Supported options are endpoints (default) and `configmaps`.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">在 leader 选举期间用于锁定的资源对象的类型。支持的选项是 endpoints(默认)和 `configmaps`。</td>
</tr>

<tr>
<!--
<td colspan="2">--leader-elect-retry-period duration Default: 2s</td>
-->
<td colspan="2">--leader-elect-retry-period duration 默认: 2s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">客户端在两次尝试获取或续约领导权之间应等待的时长。仅在启用 leader 选举时适用。</td>
</tr>

<tr>
<!--
<td colspan="2">--lock-object-name string Default: "kube-scheduler"</td>
-->
<td colspan="2">--lock-object-name string 默认: "kube-scheduler"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: define the name of the lock object.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 定义锁对象的名称。</td>
</tr>

<tr>
<!--
<td colspan="2">--lock-object-namespace string Default: "kube-system"</td>
-->
<td colspan="2">--lock-object-namespace string 默认: "kube-system"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: define the namespace of the lock object.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 定义锁对象的命名空间。</td>
</tr>

<tr>
<!--
<td colspan="2">--log-flush-frequency duration Default: 5s</td>
-->
<td colspan="2">--log-flush-frequency duration 默认: 5s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of seconds between log flushes</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">两次日志刷新之间的最大秒数</td>
</tr>

<tr>
<td colspan="2">--master string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">The address of the Kubernetes API server (overrides any value in kubeconfig)</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值)</td>
</tr>

<tr>
<td colspan="2">--policy-config-file string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 包含调度器策略配置的文件。如果未提供策略 ConfigMap 或 --use-legacy-policy-config=true,则使用此文件</td>
</tr>

<tr>
<td colspan="2">--policy-configmap string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 包含调度器策略配置的 ConfigMap 对象的名称。如果 --use-legacy-policy-config=false,它必须在调度器初始化之前存在于系统命名空间中。配置必须作为 'Data' 映射中元素的值提供,其中 key='policy.cfg'</td>
</tr>

<tr>
<!--
<td colspan="2">--policy-configmap-namespace string Default: "kube-system"</td>
-->
<td colspan="2">--policy-configmap-namespace string 默认: "kube-system"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 策略 ConfigMap 所在的命名空间。如果未提供此命名空间或其为空,则将使用 kube-system 命名空间。</td>
</tr>

<tr>
<!--
<td colspan="2">--port int Default: 10251</td>
-->
<td colspan="2">--port int 默认: 10251</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve HTTPS at all. See --secure-port instead.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 在此端口上以不安全方式(无身份验证和授权)提供 HTTP 服务。如果为 0,则完全不提供 HTTPS。请参阅 --secure-port。</td>
</tr>

<tr>
<td colspan="2">--profiling</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: enable profiling via web interface host:port/debug/pprof/</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 通过 Web 接口 host:port/debug/pprof/ 启用性能分析</td>
</tr>

<tr>
<!--
<td colspan="2">--scheduler-name string Default: "default-scheduler"</td>
-->
<td colspan="2">--scheduler-name string 默认: "default-scheduler"</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 调度器名称,用于根据 pod 的 "spec.schedulerName" 选择哪些 pod 将被此调度器处理。</td>
</tr>

<tr>
<td colspan="2">--use-legacy-policy-config</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: when set to true, scheduler will ignore policy ConfigMap and uses policy config file</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">弃用: 当设置为 true 时,调度器将忽略策略 ConfigMap 并使用策略配置文件</td>
</tr>

<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Print version information and quit</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">打印版本信息并退出</td>
</tr>

<tr>
<td colspan="2">--write-config-to string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">If set, write the configuration values to this file and exit.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">如果设置,将配置值写入此文件并退出。</td>
</tr>

</tbody>
</table>
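
在高可用部署中,通常保持 --leader-elect=true,使多个调度器副本中同一时刻只有一个实例实际执行调度。下面是一个示例,其中 kubeconfig 路径与各租约参数取值均为假设值,仅作演示:

```
# 示例(取值为假设值):启用 leader 选举运行 kube-scheduler;
# 注意续约期限需满足 renew-deadline <= lease-duration
kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.conf \
  --leader-elect=true \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s
```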
@ -0,0 +1,112 @@
---
title: kubelet
notitle: true
---
## kubelet

<!--
### Synopsis
-->
### 概要

<!--
The kubelet is the primary "node agent" that runs on each
node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
that describes a pod. The kubelet takes a set of PodSpecs that are provided through
various mechanisms (primarily through the apiserver) and ensures that the containers
described in those PodSpecs are running and healthy. The kubelet doesn't manage
containers which were not created by Kubernetes.
-->
kubelet 是在每个节点上运行的主要 "节点代理"。kubelet 以 PodSpec 为单位来工作,PodSpec 是一个描述 pod 的 YAML 或 JSON 对象。
kubelet 接收通过各种机制(主要是通过 apiserver)提供的一组 PodSpec,并确保这些 PodSpec 中描述的容器健康运行。
kubelet 不管理并非由 Kubernetes 创建的容器。

<!--
Other than from a PodSpec from the apiserver, there are three ways that a container
manifest can be provided to the Kubelet.

File: Path passed as a flag on the command line. Files under this path will be monitored
periodically for updates. The monitoring period is 20s by default and is configurable
via a flag.

HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint
is checked every 20 seconds (also configurable with a flag).

HTTP server: The kubelet can also listen for HTTP and respond to a simple API
(underspec'd currently) to submit a new manifest.
-->
除了来自 apiserver 的 PodSpec 之外,还有三种方法可以将容器清单提供给 kubelet。

文件:通过命令行标志传入的路径。kubelet 将定期监视该路径下的文件以获得更新。监视周期默认为 20 秒,可通过标志进行配置。示例见下面的代码块。

HTTP 端点:以命令行参数传入的 HTTP 端点。每 20 秒检查一次该端点(该时间间隔同样可以通过标志配置)。

HTTP 服务:kubelet 还可以监听 HTTP 并响应一个简单的 API(目前规范尚不完整)以提交新的清单。

```
kubelet [flags]
```
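
下面的示例演示上述"文件"方式:先写入一个最小的 Pod 清单,再让 kubelet 监视该目录。此处的 --pod-manifest-path 标志和清单内容仅为示意性假设,并未列在本页的选项表中,实际可用的标志请以所用 kubelet 版本的帮助信息为准。

```
# 示例(假设性写法):准备一个静态 Pod 清单目录
mkdir -p /etc/kubernetes/manifests
cat <<EOF > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF

# 让 kubelet 监视该目录;按上文所述,默认约每 20 秒检查一次更新
kubelet --pod-manifest-path=/etc/kubernetes/manifests
```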

<!--
### Options
-->
### 选项

<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>

<tr>
<td colspan="2">--azure-container-registry-config string</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the file containing Azure container registry configuration information.</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">包含 Azure 容器仓库配置信息的文件的路径。</td>
</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">help for kubelet</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">kubelet 的帮助信息</td>
</tr>

<tr>
<!--
<td colspan="2">--log-flush-frequency duration Default: 5s</td>
-->
<td colspan="2">--log-flush-frequency duration 默认: 5s</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of seconds between log flushes</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">两次日志刷新之间的最大秒数</td>
</tr>

<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<!--
<td></td><td style="line-height: 130%; word-wrap: break-word;">Print version information and quit</td>
-->
<td></td><td style="line-height: 130%; word-wrap: break-word;">打印版本信息并退出</td>
</tr>

</tbody>
</table>
@ -0,0 +1,178 @@
---
title: kubectl
notitle: true
---

## kubectl
<!--
kubectl controls the Kubernetes cluster manager
-->
kubectl 可以操控 Kubernetes 集群。

<!--
### Synopsis

kubectl controls the Kubernetes cluster manager.

Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
-->
### 概要

kubectl 可以操控 Kubernetes 集群。

获取更多信息,请访问:https://kubernetes.io/docs/reference/kubectl/overview/

```
kubectl [flags]
```
<!--
### Options

```
--alsologtostderr                  log to standard error as well as files
--as string                        Username to impersonate for the operation
--as-group stringArray             Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--cache-dir string                 Default HTTP cache directory (default "/home/username/.kube/http-cache")
--certificate-authority string     Path to a cert file for the certificate authority
--client-certificate string        Path to a client certificate file for TLS
--client-key string                Path to a client key file for TLS
--cluster string                   The name of the kubeconfig cluster to use
--context string                   The name of the kubeconfig context to use
-h, --help                         help for kubectl
--insecure-skip-tls-verify         If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string                Path to the kubeconfig file to use for CLI requests.
--log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
--log-dir string                   If non-empty, write log files in this directory
--logtostderr                      log to standard error instead of files
--match-server-version             Require server version to match client version
-n, --namespace string             If present, the namespace scope for this CLI request
--request-timeout string           The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-s, --server string                The address and port of the Kubernetes API server
--stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
--token string                     Bearer token for authentication to the API server
--user string                      The name of the kubeconfig user to use
-v, --v Level                      log level for V logs
--vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
```
-->
### 选项
```
--alsologtostderr                  同时输出日志到标准错误控制台和文件
--as string                        操作时要模拟的用户名
--as-group stringArray             操作时要模拟的用户组,可重复使用此标志以指定多个组。
--cache-dir string                 默认 HTTP 缓存目录(默认值 "/home/username/.kube/http-cache")
--certificate-authority string     证书颁发机构的证书文件路径
--client-certificate string        TLS 使用的客户端证书路径
--client-key string                TLS 使用的客户端密钥文件路径
--cluster string                   要使用的 kubeconfig 中的集群名
--context string                   要使用的 kubeconfig 中的上下文名
-h, --help                         kubectl 帮助信息
--insecure-skip-tls-verify         如果为 true,则不检查服务器证书的有效性。这将使您的 HTTPS 连接不安全
--kubeconfig string                CLI 请求使用的 kubeconfig 配置文件路径。
--log-backtrace-at traceLocation   当日志记录到达 file:N 行时,输出一次堆栈跟踪(默认值 :0)
--log-dir string                   如果不为空,则将日志文件写入此目录
--logtostderr                      日志输出到标准错误控制台而不是文件
--match-server-version             要求服务端版本与客户端版本相匹配
-n, --namespace string             如果存在,则此 CLI 请求使用该命名空间
--request-timeout string           放弃单个服务器请求前的等待时长,非零值应包含相应的时间单位(例如 1s、2m、3h)。零值表示请求不超时。(默认值 "0")
-s, --server string                Kubernetes API server 的地址和端口
--stderrthreshold severity         等于或高于此阈值的日志将输出到标准错误控制台(默认值 2)
--token string                     用于向 API server 进行身份认证的持有者(Bearer)令牌
--user string                      要使用的 kubeconfig 中的用户名
-v, --v Level                      V 日志的日志级别
--vmodule moduleSpec               以逗号分隔的 pattern=N 设置列表,用于按文件过滤日志
```
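
上述全局标志可以与任意 kubectl 子命令组合使用。下面是两个示例,其中 kubeconfig 路径与上下文名称为假设值:

```
# 示例:使用指定的 kubeconfig 与上下文,在 kube-system 命名空间中列出 Pod
kubectl --kubeconfig=/home/username/.kube/config \
        --context=my-cluster \
        -n kube-system get pods

# 示例:提高日志级别(-v)以观察 CLI 向 API server 发出的请求
kubectl -v=6 get nodes
```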
<!--
### SEE ALSO

* [kubectl alpha](kubectl_alpha.md) - Commands for features in alpha
* [kubectl annotate](kubectl_annotate.md) - Update the annotations on a resource
* [kubectl api-resources](kubectl_api-resources.md) - Print the supported API resources on the server
* [kubectl api-versions](kubectl_api-versions.md) - Print the supported API versions on the server, in the form of "group/version"
* [kubectl apply](kubectl_apply.md) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](kubectl_attach.md) - Attach to a running container
* [kubectl auth](kubectl_auth.md) - Inspect authorization
* [kubectl autoscale](kubectl_autoscale.md) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](kubectl_certificate.md) - Modify certificate resources.
* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info
* [kubectl completion](kubectl_completion.md) - Output shell completion code for the specified shell (bash or zsh)
* [kubectl config](kubectl_config.md) - Modify kubeconfig files
* [kubectl convert](kubectl_convert.md) - Convert config files between different API versions
* [kubectl cordon](kubectl_cordon.md) - Mark node as unschedulable
* [kubectl cp](kubectl_cp.md) - Copy files and directories to and from containers.
* [kubectl create](kubectl_create.md) - Create a resource from a file or from stdin.
* [kubectl delete](kubectl_delete.md) - Delete resources by filenames, stdin, resources and names, or by resources and label selector
* [kubectl describe](kubectl_describe.md) - Show details of a specific resource or group of resources
* [kubectl drain](kubectl_drain.md) - Drain node in preparation for maintenance
* [kubectl edit](kubectl_edit.md) - Edit a resource on the server
* [kubectl exec](kubectl_exec.md) - Execute a command in a container
* [kubectl explain](kubectl_explain.md) - Documentation of resources
* [kubectl expose](kubectl_expose.md) - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
* [kubectl get](kubectl_get.md) - Display one or many resources
* [kubectl label](kubectl_label.md) - Update the labels on a resource
* [kubectl logs](kubectl_logs.md) - Print the logs for a container in a pod
* [kubectl options](kubectl_options.md) - Print the list of flags inherited by all commands
* [kubectl patch](kubectl_patch.md) - Update field(s) of a resource using strategic merge patch
* [kubectl plugin](kubectl_plugin.md) - Runs a command-line plugin
* [kubectl port-forward](kubectl_port-forward.md) - Forward one or more local ports to a pod
* [kubectl proxy](kubectl_proxy.md) - Run a proxy to the Kubernetes API server
* [kubectl replace](kubectl_replace.md) - Replace a resource by filename or stdin
* [kubectl rollout](kubectl_rollout.md) - Manage the rollout of a resource
* [kubectl run](kubectl_run.md) - Run a particular image on the cluster
* [kubectl scale](kubectl_scale.md) - Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
* [kubectl set](kubectl_set.md) - Set specific features on objects
* [kubectl taint](kubectl_taint.md) - Update the taints on one or more nodes
* [kubectl top](kubectl_top.md) - Display Resource (CPU/Memory/Storage) usage.
* [kubectl uncordon](kubectl_uncordon.md) - Mark node as schedulable
* [kubectl version](kubectl_version.md) - Print the client and server version information
* [kubectl wait](kubectl_wait.md) - Experimental: Wait for one condition on one or many resources
-->
### 另请参阅

* [kubectl alpha](kubectl_alpha.md) - alpha 阶段特性的相关命令
* [kubectl annotate](kubectl_annotate.md) - 更新资源上的注解
* [kubectl api-resources](kubectl_api-resources.md) - 打印服务器上支持的 API 资源
* [kubectl api-versions](kubectl_api-versions.md) - 以 "group/version" 的形式打印服务器上支持的 API 版本
* [kubectl apply](kubectl_apply.md) - 通过文件名或标准输入将配置应用于资源
* [kubectl attach](kubectl_attach.md) - 附着到正在运行的容器
* [kubectl auth](kubectl_auth.md) - 检查授权
* [kubectl autoscale](kubectl_autoscale.md) - 自动扩缩 Deployment、ReplicaSet 或 ReplicationController
* [kubectl certificate](kubectl_certificate.md) - 修改证书资源。
* [kubectl cluster-info](kubectl_cluster-info.md) - 展示集群信息
* [kubectl completion](kubectl_completion.md) - 为指定的 shell(bash 或 zsh)输出补全代码
* [kubectl config](kubectl_config.md) - 修改 kubeconfig 配置文件
* [kubectl convert](kubectl_convert.md) - 在不同的 API 版本之间转换配置文件
* [kubectl cordon](kubectl_cordon.md) - 将节点标记为不可调度
* [kubectl cp](kubectl_cp.md) - 在容器与本地之间复制文件和目录。
* [kubectl create](kubectl_create.md) - 通过文件名或标准输入创建资源。
* [kubectl delete](kubectl_delete.md) - 通过文件名、标准输入、资源和名称,或资源和标签选择器删除资源
* [kubectl describe](kubectl_describe.md) - 显示特定资源或资源组的详细信息
* [kubectl drain](kubectl_drain.md) - 清空(drain)节点以准备维护
* [kubectl edit](kubectl_edit.md) - 在服务器上编辑资源
* [kubectl exec](kubectl_exec.md) - 在容器中执行命令
* [kubectl explain](kubectl_explain.md) - 查看资源的文档
* [kubectl expose](kubectl_expose.md) - 获取 replication controller、service、deployment 或 pod,并将其暴露为新的 Kubernetes 服务
* [kubectl get](kubectl_get.md) - 展示一个或多个资源
* [kubectl label](kubectl_label.md) - 更新资源上的标签
* [kubectl logs](kubectl_logs.md) - 打印 pod 中容器的日志
* [kubectl options](kubectl_options.md) - 打印所有命令继承的标志列表
* [kubectl patch](kubectl_patch.md) - 使用策略性合并补丁更新资源的字段
* [kubectl plugin](kubectl_plugin.md) - 运行命令行插件
* [kubectl port-forward](kubectl_port-forward.md) - 将一个或多个本地端口转发到 pod
* [kubectl proxy](kubectl_proxy.md) - 运行一个到 Kubernetes API server 的代理
* [kubectl replace](kubectl_replace.md) - 通过文件名或标准输入替换资源
* [kubectl rollout](kubectl_rollout.md) - 管理资源的上线(rollout)
* [kubectl run](kubectl_run.md) - 在集群上运行指定镜像
* [kubectl scale](kubectl_scale.md) - 为 Deployment、ReplicaSet、Replication Controller 或 Job 设置新的副本数量
* [kubectl set](kubectl_set.md) - 为对象设置指定特性
* [kubectl taint](kubectl_taint.md) - 更新一个或多个节点上的污点
* [kubectl top](kubectl_top.md) - 展示资源(CPU/内存/存储)使用情况。
* [kubectl uncordon](kubectl_uncordon.md) - 将节点标记为可调度
* [kubectl version](kubectl_version.md) - 打印客户端和服务端版本信息
* [kubectl wait](kubectl_wait.md) - 实验性: 等待一个或多个资源满足某一条件

<!--
###### Auto generated by spf13/cobra on 16-Jun-2018
-->
###### 2018 年 6 月 16 日,由 spf13/cobra 自动生成