Merge pull request #25115 from SomtochiAma/merged-master-dev-1.20

Merge master into dev-1.20 to keep in sync
This commit is contained in:
Kubernetes Prow Robot 2020-11-19 08:58:52 -08:00 committed by GitHub
commit dbf0117733
82 changed files with 4390 additions and 2129 deletions

.github/OWNERS (new file)

@ -0,0 +1,7 @@
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- sig-docs-en-reviews # Defined in OWNERS_ALIASES
approvers:
- sig-docs-en-owners # Defined in OWNERS_ALIASES

.github/workflows/OWNERS (new file)

@ -0,0 +1,11 @@
# See the OWNERS docs at https://go.k8s.io/owners
# When modifying this file, consider the security implications of
# allowing listed reviewers / approvers to modify or remove any
# configured GitHub Actions.
reviewers:
- sig-docs-leads
approvers:
- sig-docs-leads


@ -157,7 +157,7 @@ github_repo = "https://github.com/kubernetes/website"
# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true
announcement_bg = "#3f0374" # choose a dark color text is white
announcement_bg = "#3d4cb7" # choose a dark color text is white
#Searching
k8s_search = true


@ -8,7 +8,7 @@ sitemap:
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) is an open-source system for automating deployment, scaling, and management of containerized applications.
[Kubernetes]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community.
{{% /blocks/feature %}}
@ -28,7 +28,7 @@ Whether testing locally or running a global enterprise, Kubernetes flexibility g
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
#### Run Anywhere
#### Run K8s Anywhere
Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.

File diff suppressed because one or more lines are too long

(binary image file added: 66 KiB)


@ -0,0 +1,57 @@
---
layout: blog
title: "Cloud native security for your clusters"
date: 2020-11-18
slug: cloud-native-security-for-your-clusters
---
**Author**: [Pushkar Joglekar](https://twitter.com/pudijoglekar)
Over the last few years, a small, security-focused community has been working diligently to deepen our understanding of security, given the evolving cloud native infrastructure and corresponding iterative deployment practices. To enable sharing of this knowledge with the rest of the community, members of [CNCF SIG Security](https://github.com/cncf/sig-security) (a group which reports into [CNCF TOC](https://github.com/cncf/toc#sigs) and who are friends with [Kubernetes SIG Security](https://github.com/kubernetes/community/tree/master/sig-security)), led by Emily Fox, collaborated on a whitepaper outlining holistic cloud native security concerns and best practices. After over 1200 comments, changes, and discussions from 35 members across the world, we are proud to share [cloud native security whitepaper v1.0](https://www.cncf.io/blog/2020/11/18/announcing-the-cloud-native-security-white-paper) that serves as essential reading for security leadership in enterprises, financial and healthcare industries, academia, government, and non-profit organizations.
The paper attempts to _not_ focus on any specific [cloud native project](https://www.cncf.io/projects/). Instead, the intent is to model and inject security into four logical phases of cloud native application lifecycle: _Develop, Distribute, Deploy, and Runtime_.
<img alt="Cloud native application lifecycle phases"
src="cloud-native-app-lifecycle-phases.svg"
style="width:60em;max-width:100%;">
## Kubernetes native security controls
When using Kubernetes as a workload orchestrator, some of the security controls this version of the whitepaper recommends are:
* [Pod Security Policies](/docs/concepts/policy/pod-security-policy/): Implement a single source of truth for “least privilege” workloads across the entire cluster
* [Resource requests and limits](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits): Apply requests (soft constraint) and limits (hard constraint) for shared resources such as memory and CPU (see the sketch after this list)
* [Audit log analysis](/docs/tasks/debug-application-cluster/audit/): Enable Kubernetes API auditing and filtering for security relevant events
* [Control plane authentication and certificate root of trust](/docs/concepts/architecture/control-plane-node-communication/): Enable mutual TLS authentication with a trusted CA for communication within the cluster
* [Secrets management](/docs/concepts/configuration/secret/): Integrate with a built-in or external secrets store
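To make the resource requests and limits recommendation above concrete, here is a minimal sketch of a Pod that sets both; the Pod name, image, and values are illustrative assumptions rather than recommendations from the whitepaper:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-app        # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    resources:
      requests:                # soft constraint, used for scheduling
        cpu: "250m"
        memory: "128Mi"
      limits:                  # hard constraint, enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```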
## Cloud native complementary security controls
Kubernetes has direct involvement in the _deploy_ phase and, to a lesser extent, in the _runtime_ phase. Ensuring the artifacts are securely _developed_ and _distributed_ is necessary for enabling workloads in Kubernetes to run “secure by default”. Throughout all phases of the cloud native application life cycle, several complementary security controls exist for Kubernetes-orchestrated workloads, which include but are not limited to:
* Develop:
- Image signing and verification
- Image vulnerability scanners
* Distribute:
- Pre-deployment checks for detecting excessive privileges
- Enabling observability and logging
* Deploy:
- Using a service mesh for workload authentication and authorization
- Enforcing “default deny” network policies for inter-workload communication via [network plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (see the sketch after this list)
* Runtime:
- Deploying security monitoring agents for workloads
- Isolating applications that run on the same node using SELinux, AppArmor, etc.
- Scanning configuration against recognized secure baselines for node, workload and orchestrator
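As a sketch of the “default deny” network policy control listed under the deploy phase, the following NetworkPolicy blocks all ingress and egress for every Pod in one namespace; the namespace name is an assumption, and a real rollout would pair this with explicit allow policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production        # hypothetical namespace
spec:
  podSelector: {}              # selects every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress or egress rules are listed, so all traffic is denied
```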
## Understand first, secure next
The cloud native way, including containers, provides great security benefits for its users: immutability, modularity, faster upgrades, and consistent state across the environment. Realizing this fundamental change in “the way things are done” motivates us to look at security with a cloud native lens. One thing that was evident to all the authors of the paper was that it's tough to make smarter decisions on how and what to secure in a cloud native ecosystem if you do not understand the tools, patterns, and frameworks at hand (in addition to knowing your own critical assets). Hence, for all the security practitioners out there who want to be partners rather than gatekeepers for your friends in Operations, Product Development, and Compliance, let's make an attempt to _learn more so we can secure better_.
We recommend following this **7 step R.U.N.T.I.M.E. path** to get started on cloud native security:
1. <b>R</b>ead the paper and any linked material in it
2. <b>U</b>nderstand challenges and constraints for your environment
3. <b>N</b>ote the content and controls that apply to your environment
4. <b>T</b>alk about your observations with your peers
5. <b>I</b>nvolve your leadership and ask for help
6. <b>M</b>ake a risk profile based on existing and missing security controls
7. <b>E</b>xpend time, money, and resources to improve security posture and reduce risk where appropriate.
## Acknowledgements
Huge shout out to _Emily Fox, Tim Bannister (The Scale Factory), Chase Pettet (Mirantis), and Wayne Haber (GitLab)_ for their wonderful suggestions for this blog post.


@ -31,7 +31,7 @@ This lets you fetch a container image running in the cloud and
debug the exact same code locally if needed.
A ConfigMap is not designed to hold large chunks of data. The data stored in a
ConfigMap cannot exeed 1 MiB. If you need to store settings that are
ConfigMap cannot exceed 1 MiB. If you need to store settings that are
larger than this limit, you may want to consider mounting a volume or use a
separate database or file service.
@ -88,7 +88,7 @@ data:
There are four different ways that you can use a ConfigMap to configure
a container inside a Pod (a minimal sketch follows this list):
1. Command line arguments to the entrypoint of a container
1. Inside a container command and args
1. Environment variables for a container
1. Add a file in read-only volume, for the application to read
1. Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap
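As a rough, self-contained sketch of the options above (all names and values are made up for illustration), a ConfigMap and a Pod that consumes it both as an environment variable and as a mounted file might look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                # hypothetical ConfigMap
data:
  LOG_LEVEL: "info"                # consumed as an environment variable
  app.properties: |                # consumed as a read-only file
    color=blue
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: LOG_LEVEL
    volumeMounts:
    - name: config
      mountPath: /etc/config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: demo-config
```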


@ -22,6 +22,8 @@ Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.i
## Additional controllers
{{% thirdparty-content %}}
* [AKS Application Gateway Ingress Controller](https://github.com/Azure/application-gateway-kubernetes-ingress) is an ingress controller that enables ingress to [AKS clusters](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal) using the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress
controller with [community](https://www.getambassador.io/docs) or


@ -94,7 +94,7 @@ run, what volume plugin it uses (including Flex), etc. The repository
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
houses a library for writing external provisioners that implements the bulk of
the specification. Some external provisioners are listed under the repository
[kubernetes-sigs/external-storage](https://github.com/kubernetes-sigs/external-dns).
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner).
For example, NFS doesn't provide an internal provisioner, but an external
provisioner can be used. There are also cases when 3rd party storage


@ -150,7 +150,7 @@ remembered and reused, even after the Pod is running, for at least a few seconds
If you need to discover Pods promptly after they are created, you have a few options:
- Query the Kubernetes API directly (for example, using a watch) rather than relying on DNS lookups.
- Decrease the time of caching in your Kubernetes DNS provider (tpyically this means editing the config map for CoreDNS, which currently caches for 30 seconds).
- Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the config map for CoreDNS, which currently caches for 30 seconds); see the sketch below.
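A hedged sketch of that second option, lowering the `cache` TTL in the CoreDNS ConfigMap; the surrounding Corefile is a typical default and may differ in your cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 5                  # lowered from the usual 30 seconds
        loop
        reload
        loadbalance
    }
```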
As mentioned in the [limitations](#limitations) section, you are responsible for


@ -142,7 +142,7 @@ The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
without any problems, the kubelet resets the restart backoff timer forthat container.
without any problems, the kubelet resets the restart backoff timer for that container.
## Pod conditions


@ -1,6 +1,6 @@
---
content_type: concept
title: Contribute to Kubernetes docs
title: Contribute to K8s docs
linktitle: Contribute
main_menu: true
no_list: true
@ -8,7 +8,7 @@ weight: 80
card:
name: contribute
weight: 10
title: Start contributing
title: Start contributing to K8s
---
<!-- overview -->


@ -43,7 +43,7 @@ When opening a pull request, you need to know in advance which branch to base yo
Scenario | Branch
:---------|:------------
Existing or new English language content for the current release | `master`
Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-release-<version>`. For example, if a feature changes in the `{{< latest-version >}}` release, then add documentation changes to the ``dev-{{< release-branch >}}`` branch.
Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-<version>`. For example, if a feature changes in the `v{{< skew nextMinorVersion >}}` release, then add documentation changes to the ``dev-{{< skew nextMinorVersion >}}`` branch.
Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branching-strategy) for more information.


@ -32,7 +32,7 @@ cards:
button: "View Tutorials"
button_path: "/docs/tutorials"
- name: setup
title: "Set up a cluster"
title: "Set up a K8s cluster"
description: "Get Kubernetes running based on your resources and needs."
button: "Set up Kubernetes"
button_path: "/docs/setup"
@ -57,7 +57,7 @@ cards:
button: Contribute to the docs
button_path: /docs/contribute
- name: release-notes
title: Release Notes
title: K8s Release Notes
description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
button: "Download Kubernetes"
button_path: "/docs/setup/release/notes"


@ -143,13 +143,15 @@ different Kubernetes components.
| `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
| `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
| `RunAsGroup` | `true` | Beta | 1.14 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | |
| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 |
| `RuntimeClass` | `true` | Beta | 1.14 | |
| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 |
| `SCTPSupport` | `true` | Beta | 1.19 | |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 |
| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 |
| `ServiceNodeExclusion` | `true` | Beta | 1.19 | |
| `ServiceTopology` | `false` | Alpha | 1.17 | |


@ -27,15 +27,15 @@ The following examples will show how you can interact with the health API endpoi
For all endpoints you can use the `verbose` parameter to print out the checks and their status.
This can be useful for a human operator to debug the current status of the API server; it is not intended to be consumed by a machine:
```shell
curl -k https://localhost:6443/livez?verbose
```
```shell
curl -k https://localhost:6443/livez?verbose
```
or from a remote host with authentication:
```shell
kubectl get --raw='/readyz?verbose'
```
```shell
kubectl get --raw='/readyz?verbose'
```
The output will look like this:
@ -62,9 +62,9 @@ The output will look like this:
The Kubernetes API server also supports excluding specific checks.
The query parameters can also be combined, as in this example:
```shell
curl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'
```
```shell
curl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'
```
The output shows that the `etcd` check is excluded:
@ -98,6 +98,6 @@ The schema for the individual health checks is `/livez/<healthcheck-name>` where
The `<healthcheck-name>` path can be discovered using the `verbose` flag from above; take the path between `[+]` and `ok`.
These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:
```shell
curl -k https://localhost:6443/livez/etcd
```
```shell
curl -k https://localhost:6443/livez/etcd
```


@ -25,10 +25,11 @@ daemons installed:
## Running Node Conformance Test
To run the node conformance test, perform the following steps:
1. Point your Kubelet to localhost `--api-servers="http://localhost:8080"`,
because the test framework starts a local master to test Kubelet. There are some
other Kubelet flags you may care:
1. Work out the value of the `--kubeconfig` option for the kubelet; for example:
`--kubeconfig=/var/lib/kubelet/config.yaml`.
Because the test framework starts a local control plane to test the kubelet,
use `http://localhost:8080` as the URL of the API server.
There are some other kubelet command line parameters you may want to use:
* `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR
to Kubelet, for example `--pod-cidr=10.180.0.0/24`.
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should


@ -50,7 +50,7 @@ this example.
1. Configure the kubelet to be a service manager for etcd.
{{< note >}}You must do this on every host where etcd should be running.{{< /note >}}
Since etcd was created first, you must override the service priority by creating a new unit file
that has higher precedence than the kubeadm-provided kubelet unit file.
@ -68,6 +68,12 @@ this example.
systemctl restart kubelet
```
Check the kubelet status to ensure it is running.
```sh
systemctl status kubelet
```
1. Create configuration files for kubeadm.
Generate one kubeadm configuration file for each host that will have an etcd


@ -140,7 +140,7 @@ curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/dow
### Joining a Windows worker node
{{< note >}}
You must install the `Containers` feature and install Docker. Instructions
to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/docker-engine-enterprise/dee-windows.html).
to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://hub.docker.com/editions/enterprise/docker-ee-server-windows).
{{< /note >}}
{{< note >}}


@ -532,7 +532,7 @@ This functionality is available in Kubernetes v1.6 and later.
## Use ConfigMap-defined environment variables in Pod commands
You can use ConfigMap-defined environment variables in the `command` section of the Pod specification using the `$(VAR_NAME)` Kubernetes substitution syntax.
You can use ConfigMap-defined environment variables in the `command` and `args` of a container using the `$(VAR_NAME)` Kubernetes substitution syntax.
For example, the following Pod specification


@ -24,10 +24,35 @@ The following steps require an egress configuration, for example:
You need to configure the API Server to use the Konnectivity service
and direct the network traffic to the cluster nodes:
1. Make sure that
the `ServiceAccountTokenVolumeProjection` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled. You can enable
[service account token volume projection](/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection)
by providing the following flags to the kube-apiserver:
```
--service-account-issuer=api
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--api-audiences=system:konnectivity-server
```
1. Create an egress configuration file such as `admin/konnectivity/egress-selector-configuration.yaml` (a sketch appears after these steps).
1. Set the `--egress-selector-config-file` flag of the API Server to the path of
your API Server egress configuration file.
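For reference, the egress configuration file created in the steps above typically looks like the following sketch; the `apiVersion` shown is an assumption, and the socket path matches the sample manifests later on this page, so verify both against the reference for your Kubernetes version:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# "cluster" selects traffic from the API server to nodes, Pods, and Services
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```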
Generate or obtain a certificate and kubeconfig for konnectivity-server.
For example, you can use the OpenSSL command line tool to issue an X.509 certificate,
using the cluster CA certificate `/etc/kubernetes/pki/ca.crt` from a control-plane host.
```bash
openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key
openssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256
SERVER=$(kubectl config view -o jsonpath='{.clusters..server}')
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server "$SERVER" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes
rm -f konnectivity.crt konnectivity.key konnectivity.csr
```
Next, you need to deploy the Konnectivity server and agents.
[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
is a reference implementation.


@ -46,7 +46,7 @@ tutorial.
After this tutorial, you will know the following.
- How to deploy a ZooKeeper ensemble using StatefulSet.
- How to consistently configure the ensemble using ConfigMaps.
- How to consistently configure the ensemble.
- How to spread the deployment of ZooKeeper servers in the ensemble.
- How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
@ -770,7 +770,7 @@ In one terminal window, use the following command to watch the Pods in the `zk`
kubectl get pod -w -l app=zk
```
In another window, using the following command to delete the `zkOk.sh` script from the file system of Pod `zk-0`.
In another window, use the following command to delete the `zookeeper-ready` script from the file system of Pod `zk-0`.
```shell
kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready


@ -18,4 +18,4 @@ egressSelections:
# The other supported transport is "tcp". You will need to set up TLS
# config to secure the TCP transport.
uds:
udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket


@ -22,7 +22,7 @@ spec:
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
- image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.12
name: konnectivity-agent
command: ["/proxy-agent"]
args: [
@ -32,6 +32,8 @@ spec:
# this is the IP address of the master machine.
"--proxy-server-host=35.225.206.7",
"--proxy-server-port=8132",
"--admin-server-port=8133",
"--health-server-port=8134",
"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
]
volumeMounts:
@ -39,7 +41,7 @@ spec:
name: konnectivity-agent-token
livenessProbe:
httpGet:
port: 8093
port: 8134
path: /healthz
initialDelaySeconds: 15
timeoutSeconds: 15


@ -8,34 +8,33 @@ spec:
hostNetwork: true
containers:
- name: konnectivity-server-container
image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.12
command: ["/proxy-server"]
args: [
"--log-file=/var/log/konnectivity-server.log",
"--logtostderr=false",
"--log-file-max-size=0",
"--logtostderr=true",
# This needs to be consistent with the value set in egressSelectorConfiguration.
"--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket",
# The following two lines assume the Konnectivity server is
# deployed on the same machine as the apiserver, and the certs and
# key of the API Server are at the specified location.
"--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
"--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
"--cluster-cert=/etc/kubernetes/pki/apiserver.crt",
"--cluster-key=/etc/kubernetes/pki/apiserver.key",
# This needs to be consistent with the value set in egressSelectorConfiguration.
"--mode=grpc",
"--server-port=0",
"--agent-port=8132",
"--admin-port=8133",
"--health-port=8134",
"--agent-namespace=kube-system",
"--agent-service-account=konnectivity-agent",
"--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
"--kubeconfig=/etc/kubernetes/konnectivity-server.conf",
"--authentication-audience=system:konnectivity-server"
]
livenessProbe:
httpGet:
scheme: HTTP
host: 127.0.0.1
port: 8133
port: 8134
path: /healthz
initialDelaySeconds: 30
timeoutSeconds: 60
@ -46,25 +45,28 @@ spec:
- name: adminport
containerPort: 8133
hostPort: 8133
- name: healthport
containerPort: 8134
hostPort: 8134
volumeMounts:
- name: varlogkonnectivityserver
mountPath: /var/log/konnectivity-server.log
readOnly: false
- name: pki
mountPath: /etc/srv/kubernetes/pki
- name: k8s-certs
mountPath: /etc/kubernetes/pki
readOnly: true
- name: kubeconfig
mountPath: /etc/kubernetes/konnectivity-server.conf
readOnly: true
- name: konnectivity-uds
mountPath: /etc/srv/kubernetes/konnectivity-server
mountPath: /etc/kubernetes/konnectivity-server
readOnly: false
volumes:
- name: varlogkonnectivityserver
- name: k8s-certs
hostPath:
path: /var/log/konnectivity-server.log
path: /etc/kubernetes/pki
- name: kubeconfig
hostPath:
path: /etc/kubernetes/konnectivity-server.conf
type: FileOrCreate
- name: pki
hostPath:
path: /etc/srv/kubernetes/pki
- name: konnectivity-uds
hostPath:
path: /etc/srv/kubernetes/konnectivity-server
path: /etc/kubernetes/konnectivity-server
type: DirectoryOrCreate


@ -6,7 +6,7 @@ spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
command: [ "/bin/echo", "$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:


@ -10,16 +10,19 @@ class: training
<section class="call-to-action">
<div class="main-section">
<div class="call-to-action" id="cta-certification">
<div class="logo-certification cta-image cta-image-before" id="logo-cka">
<img src="/images/training/kubernetes-cka-white.svg"/>
</div>
<div class="logo-certification cta-image cta-image-after" id="logo-ckad">
<img src="/images/training/kubernetes-ckad-white.svg"/>
</div>
<div class="cta-text">
<h2>Build your cloud native career</h2>
<p>Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners let you invest in your career, learn Kubernetes, and make your cloud native projects successful.</p>
</div>
<div class="logo-certification cta-image" id="logo-cka">
<img src="/images/training/kubernetes-cka-white.svg"/>
</div>
<div class="logo-certification cta-image" id="logo-ckad">
<img src="/images/training/kubernetes-ckad-white.svg"/>
</div>
<div class="logo-certification cta-image" id="logo-cks">
<img src="/images/training/kubernetes-cks-white.svg"/>
</div>
</div>
</div>
</section>
@ -74,31 +77,36 @@ class: training
</div>
</div>
<section>
<section id="get-certified">
<div class="main-section padded">
<center>
<h2>Get Kubernetes Certified</h2>
</center>
<h2>Get Kubernetes Certified</h2>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Application Developer (CKAD)</b>
</h5>
<p>The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes.</p>
<p>A CKAD can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes.</p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/" target="_blank" class="button">Go to Certification</a>
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Administrator (CKA)</b>
</h5>
<p>The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.</p>
<p>A certified Kubernetes administrator has demonstrated the ability to do basic installation as well as configuring and managing production-grade Kubernetes clusters.</p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/" target="_blank" class="button">Go to Certification</a>
</center>
</div>
<div class="col-nav">
<h5>
<b>Certified Kubernetes Security Specialist (CKS)</b>
</h5>
<p>The Certified Kubernetes Security Specialist program provides assurance that the holder is comfortable and competent with a broad range of best practices. CKS certification covers skills for securing container-based applications and Kubernetes platforms during build, deployment and runtime.</p>
<p><em>Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.</em></p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-security-specialist/" target="_blank" class="button">Go to Certification</a>
</div>
</div>
</div>


@ -34,7 +34,7 @@ este será inmediatamente adquirido por dicho ReplicaSet.
## Cuándo usar un ReplicaSet
Un ReplicaSet garantiza que un número específico de réplicas de un pod se está ejeuctando en todo momento.
Un ReplicaSet garantiza que un número específico de réplicas de un pod se está ejecutando en todo momento.
Sin embargo, un Deployment es un concepto de más alto nivel que gestiona ReplicaSets y
proporciona actualizaciones de forma declarativa de los Pods junto con muchas otras características útiles.
Por lo tanto, se recomienda el uso de Deployments en vez del uso directo de ReplicaSets, a no ser


@ -217,7 +217,7 @@ type: Opaque
data:
username: YWRtaW4=
stringData:
username: administrator
username: administrateur
```
Donnera le secret suivant:
@ -233,10 +233,10 @@ metadata:
uid: 91460ecb-e917-11e8-98f2-025000000001
type: Opaque
data:
username: YWRtaW5pc3RyYXRvcg==
username: YWRtaW5pc3RyYXRldXI=
```
`YWRtaW5pc3RyYXRvcg ==` décode en `administrateur`.
`YWRtaW5pc3RyYXRldXI=` décode en `administrateur`.
Les clés de `data` et `stringData` doivent être composées de caractères alphanumériques, '-', '_' ou '.'.


@ -109,7 +109,7 @@ Pod dapat digunakan untuk menjalankan beberapa aplikasi yang terintegrasi
secara vertikal (misalnya LAMP), namun motivasi utamanya adalah untuk mendukung
berlokasi bersama, mengelola program pembantu, diantaranya adalah:
* sistem pengelolaan konten, pemuat file dan data, manajer _cache_ lokal, dll.
* sistem pengelolaan konten, pemuat berkas dan data, manajer _cache_ lokal, dll.
* catatan dan _checkpoint_ cadangan, kompresi, rotasi, dll.
* pengamat perubahan data, pengintip catatan, adapter pencatatan dan pemantauan,
penerbit peristiwa, dll.


@ -0,0 +1,297 @@
---
title: Podセキュリティの標準
content_type: concept
weight: 10
---
<!-- overview -->
Podに対するセキュリティの設定は通常[Security Context](/docs/tasks/configure-pod-container/security-context/)を使用して適用されます。Security ContextはPod単位での特権やアクセスコントロールの定義を実現します。
クラスターにおけるSecurity Contextの強制やポリシーベースの定義は[Pod Security Policy](/docs/concepts/policy/pod-security-policy/)によって実現されてきました。
_Pod Security Policy_ はクラスターレベルのリソースで、Pod定義のセキュリティに関する設定を制御します。
しかし、PodSecurityPolicyを拡張したり代替する、ポリシーを強制するための多くの方法が生まれてきました。
このページの意図は、推奨されるPodのセキュリティプロファイルを特定の実装から切り離して詳しく説明することです。
<!-- body -->
## ポリシーの種別
まず、幅広いセキュリティの範囲をカバーできる、基礎となるポリシーの定義が必要です。
それらは強く制限をかけるものから自由度の高いものまでをカバーすべきです。
- **_特権_** - 制限のかかっていないポリシーで、可能な限り幅広い権限を提供します。このポリシーは既知の特権昇格を認めます。
- **_ベースライン、デフォルト_** - 制限は最小限にされたポリシーですが、既知の特権昇格を防止します。デフォルトの(最小限の指定の)Pod設定を許容します。
- **_制限_** - 厳しく制限されたポリシーで、Podを強化するための現在のベストプラクティスに沿っています。
## ポリシー
### 特権
特権ポリシーは意図的に開放されていて、完全に制限がかけられていません。この種のポリシーは通常、特権ユーザーまたは信頼されたユーザーが管理する、システムまたはインフラレベルのワークロードに対して適用されることを意図しています。
特権ポリシーは制限がないことと定義されます。gatekeeperのようにデフォルトで許可される仕組みでは、特権プロファイルはポリシーを設定せず、何も制限を適用しないことにあたります。
一方で、Pod Security Policyのようにデフォルトで拒否される仕組みでは、特権ポリシーでは全ての制限を無効化してコントロールできるようにする必要があります。
### ベースライン、デフォルト
ベースライン、デフォルトのプロファイルは一般的なコンテナ化されたランタイムに適用しやすく、かつ既知の特権昇格を防ぐことを意図しています。
このポリシーはクリティカルではないアプリケーションの運用者または開発者を対象にしています。
次の項目は強制、または無効化すべきです。
<table>
<caption style="display:none">ベースラインポリシーの定義</caption>
<tbody>
<tr>
<td><strong>項目</strong></td>
<td><strong>ポリシー</strong></td>
</tr>
<tr>
<td>ホストのネームスペース</td>
<td>
ホストのネームスペースの共有は無効化すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.hostNetwork<br>
spec.hostPID<br>
spec.hostIPC<br>
<br><b>認められる値:</b> false<br>
</td>
</tr>
<tr>
<td>特権コンテナ</td>
<td>
特権を持つPodはほとんどのセキュリティ機構を無効化できるので、禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.containers[*].securityContext.privileged<br>
spec.initContainers[*].securityContext.privileged<br>
<br><b>認められる値:</b> false, undefined/nil<br>
</td>
</tr>
<tr>
<td>ケーパビリティー</td>
<td>
<a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities">デフォルト</a>よりも多くのケーパビリティーを与えることは禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.containers[*].securityContext.capabilities.add<br>
spec.initContainers[*].securityContext.capabilities.add<br>
<br><b>認められる値:</b> 空 (または既知のリストに限定)<br>
</td>
</tr>
<tr>
<td>HostPathボリューム</td>
<td>
HostPathボリュームは禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.volumes[*].hostPath<br>
<br><b>認められる値:</b> undefined/nil<br>
</td>
</tr>
<tr>
<td>ホストのポート</td>
<td>
HostPortは禁止するか、最小限の既知のリストに限定すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.containers[*].ports[*].hostPort<br>
spec.initContainers[*].ports[*].hostPort<br>
<br><b>認められる値:</b> 0, undefined (または既知のリストに限定)<br>
</td>
</tr>
<tr>
<td>AppArmor <em>(任意)</em></td>
<td>
サポートされるホストでは、AppArmorの'runtime/default'プロファイルがデフォルトで適用されます。デフォルトのポリシーはポリシーの上書きや無効化を防ぎ、許可されたポリシーのセットを上書きできないよう制限すべきです。<br>
<br><b>制限されるフィールド:</b><br>
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
<br><b>認められる値:</b> 'runtime/default', undefined<br>
</td>
</tr>
<tr>
<td>SELinux <em>(任意)</em></td>
<td>
SELinuxのオプションをカスタムで設定することは禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.securityContext.seLinuxOptions<br>
spec.containers[*].securityContext.seLinuxOptions<br>
spec.initContainers[*].securityContext.seLinuxOptions<br>
<br><b>認められる値:</b> undefined/nil<br>
</td>
</tr>
<tr>
<td>/procマウントタイプ</td>
<td>
攻撃対象を縮小するため/procのマスクを設定し、必須とすべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.containers[*].securityContext.procMount<br>
spec.initContainers[*].securityContext.procMount<br>
<br><b>認められる値:</b> undefined/nil, 'Default'<br>
</td>
</tr>
<tr>
<td>Sysctl</td>
<td>
Sysctlはセキュリティ機構を無効化したり、ホストの全てのコンテナに影響を与えたりすることが可能なので、「安全」なサブネットを除いては禁止すべきです。
コンテナまたはPodの中にsysctlがありネームスペースが分離されていて、同じノードの別のPodやプロセスから分離されている場合はsysctlは安全だと考えられます。<br>
<br><b>制限されるフィールド:</b><br>
spec.securityContext.sysctls<br>
<br><b>認められる値:</b><br>
kernel.shm_rmid_forced<br>
net.ipv4.ip_local_port_range<br>
net.ipv4.tcp_syncookies<br>
net.ipv4.ping_group_range<br>
undefined/空文字列<br>
</td>
</tr>
</tbody>
</table>
### 制限
制限ポリシーはいくらかの互換性を犠牲にして、Podを強化するためのベストプラクティスを強制することを意図しています。
セキュリティ上クリティカルなアプリケーションの運用者や開発者、また信頼度の低いユーザーも対象にしています。
下記の項目を強制、無効化すべきです。
<table>
<caption style="display:none">制限ポリシーの定義</caption>
<tbody>
<tr>
<td><strong>項目</strong></td>
<td><strong>ポリシー</strong></td>
</tr>
<tr>
<td colspan="2"><em>デフォルトプロファイルにある項目全て</em></td>
</tr>
<tr>
<td>Volumeタイプ</td>
<td>
HostPathボリュームの制限に加え、制限プロファイルではコアでない種類のボリュームの利用をPersistentVolumeにより定義されたものに限定します。<br>
<br><b>制限されるフィールド:</b><br>
spec.volumes[*].hostPath<br>
spec.volumes[*].gcePersistentDisk<br>
spec.volumes[*].awsElasticBlockStore<br>
spec.volumes[*].gitRepo<br>
spec.volumes[*].nfs<br>
spec.volumes[*].iscsi<br>
spec.volumes[*].glusterfs<br>
spec.volumes[*].rbd<br>
spec.volumes[*].flexVolume<br>
spec.volumes[*].cinder<br>
spec.volumes[*].cephFS<br>
spec.volumes[*].flocker<br>
spec.volumes[*].fc<br>
spec.volumes[*].azureFile<br>
spec.volumes[*].vsphereVolume<br>
spec.volumes[*].quobyte<br>
spec.volumes[*].azureDisk<br>
spec.volumes[*].portworxVolume<br>
spec.volumes[*].scaleIO<br>
spec.volumes[*].storageos<br>
spec.volumes[*].csi<br>
<br><b>認められる値:</b> undefined/nil<br>
</td>
</tr>
<tr>
<td>特権昇格</td>
<td>
特権昇格(ファイルモードのset-user-IDまたはset-group-IDのような方法による)は禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.containers[*].securityContext.allowPrivilegeEscalation<br>
spec.initContainers[*].securityContext.allowPrivilegeEscalation<br>
<br><b>認められる値:</b> false<br>
</td>
</tr>
<tr>
<td>root以外での実行</td>
<td>
コンテナはroot以外のユーザーで実行する必要があります。<br>
<br><b>制限されるフィールド:</b><br>
spec.securityContext.runAsNonRoot<br>
spec.containers[*].securityContext.runAsNonRoot<br>
spec.initContainers[*].securityContext.runAsNonRoot<br>
<br><b>認められる値:</b> true<br>
</td>
</tr>
<tr>
<td>root以外のグループ <em>(任意)</em></td>
<td>
コンテナをrootのプライマリまたは補助GIDで実行することを禁止すべきです。<br>
<br><b>制限されるフィールド:</b><br>
spec.securityContext.runAsGroup<br>
spec.securityContext.supplementalGroups[*]<br>
spec.securityContext.fsGroup<br>
spec.containers[*].securityContext.runAsGroup<br>
spec.initContainers[*].securityContext.runAsGroup<br>
<br><b>認められる値:</b><br>
0以外<br>
undefined / nil (`*.runAsGroup`を除く)<br>
</td>
</tr>
<tr>
<td>Seccomp</td>
<td>
SeccompのRuntimeDefaultを必須とする、または特定の追加プロファイルを許可することが必要です。<br>
<br><b>制限されるフィールド:</b><br>
spec.securityContext.seccompProfile.type<br>
spec.containers[*].securityContext.seccompProfile<br>
spec.initContainers[*].securityContext.seccompProfile<br>
<br><b>認められる値:</b><br>
'runtime/default'<br>
undefined / nil<br>
</td>
</tr>
</tbody>
</table>
## ポリシーの実例
ポリシーの定義とポリシーの実装を切り離すことによって、ポリシーを強制する機構とは独立して、汎用的な理解や複数のクラスターにわたる共通言語とすることができます。
機構が成熟してきたら、ポリシーごとに下記に定義されます。それぞれのポリシーを強制する方法についてはここでは定義しません。
[**PodSecurityPolicy**](/docs/concepts/policy/pod-security-policy/)
- [特権](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/privileged-psp.yaml)
- [ベースライン](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/baseline-psp.yaml)
- [制限](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml)
## FAQ
### 特権とデフォルトの間のプロファイルがないのはどうしてですか?
ここで定義されている3つのプロファイルは最も安全(制限)から最も安全ではない(特権)まで、直線的に段階が設定されており、幅広いワークロードをカバーしています。
ベースラインを超える特権が必要な場合、その多くはアプリケーションに特化しているため、その限られた要求に対して標準的なプロファイルを提供することはできません。
これは、このような場合に必ず特権プロファイルを使用すべきだという意味ではなく、場合に応じてポリシーを定義する必要があります。
将来、他のプロファイルの必要性が明らかになった場合、SIG Authはこの方針について再考する可能性があります。
### セキュリティポリシーとセキュリティコンテキストの違いは何ですか?
[Security Context](/docs/tasks/configure-pod-container/security-context/)は実行時のコンテナやPodを設定するものです。
Security ContextはPodのマニフェストの中でPodやコンテナの仕様の一部として定義され、コンテナランタイムへ渡されるパラメータを示します。
セキュリティポリシーはコントロールプレーンの機構で、Security Contextとそれ以外も含め、特定の設定を強制するものです。
2020年2月時点では、ネイティブにサポートされているポリシー強制の機構は[Pod Security
Policy](/docs/concepts/policy/pod-security-policy/)です。これはクラスター全体にわたってセキュリティポリシーを中央集権的に強制するものです。
セキュリティポリシーを強制する他の手段もKubernetesのエコシステムでは開発が進められています。例えば[OPA
Gatekeeper](https://github.com/open-policy-agent/gatekeeper)があります。
### WindowsのPodにはどのプロファイルを適用すればよいですか?
Kubernetesでは、Linuxベースのワークロードと比べてWindowsの使用は制限や差異があります。
特に、PodのSecurityContextフィールドは[Windows環境では効果がありません](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext)。
したがって、現段階では標準化されたセキュリティポリシーは存在しません。
### サンドボックス化されたPodはどのように扱えばよいでしょうか?
現在のところ、Podがサンドボックス化されていると見なされるかどうかを制御できるAPI標準はありません。
サンドボックス化されたPodはサンドボックス化されたランタイム(例えばgVisorやKata Containers)の使用により特定することは可能ですが、サンドボックス化されたランタイムの標準的な定義は存在しません。
サンドボックス化されたランタイムに対して必要な保護は、それ以外に対するものとは異なります。
例えば、ワークロードがその基になるカーネルと分離されている場合、特権を制限する必要性は小さくなります。
これにより、強い権限を必要とするワークロードが隔離された状態を維持できます。
加えて、サンドボックス化されたワークロードの保護はサンドボックス化の実装に強く依存します。
したがって、全てのサンドボックス化されたワークロードに推奨される単一のポリシーは存在しません。


@ -214,10 +214,6 @@ kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
## リソースのアップデート
<<<<<<< HEAD
=======
>>>>>>> 8d357bf1e (finished translating cheartsheet.md)
```bash
kubectl set image deployment/frontend www=image:v2 # frontend Deploymentのwwwコンテナイメージをv2にローリングアップデートします
kubectl rollout history deployment/frontend # frontend Deploymentの改訂履歴を確認します


@ -0,0 +1,116 @@
---
title: Học kiến thức cơ bản về Kubernetes
linkTitle: Học kiến thức cơ bản về Kubernetes
weight: 10
card:
name: Các hướng dẫn
weight: 20
title: Hướng dẫn những điều cơ bản
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-9">
<h2>Kiến thức cơ bản về Kubernetes</h2>
<p>Hướng dẫn này cung cấp những kiến thức cơ bản về một cụm Kubernetes. Mỗi mô-đun chứa một số thông tin cơ bản về các tính năng cũng như khái niệm chính của Kubernetes, đồng thời bao gồm một hướng dẫn tương tác trực tuyến. Các hướng dẫn tương tác này giúp bạn quản lý một cluster đơn giản và các ứng dụng được đóng gói của bạn.</p>
<p>Bằng các hướng dẫn tương tác, bạn có thể học cách:</p>
<ul>
<li>Triển khai một ứng dụng container trong một cluster.</li>
<li>Thay đổi quy mô triển khai.</li>
<li>Cập nhật ứng dụng container.</li>
<li>Debug ứng dụng container.</li>
</ul>
<p>Những hướng dẫn này dùng Katacoda để chạy một terminal ảo trên trình duyệt web của bạn chạy Minikube. Không cần phải cài đặt và cấu hình bất kỳ phần mềm nào; mỗi hướng dẫn tương tác chạy trực tiếp từ trình duyệt web của bạn.</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-9">
<h2>Kubernetes có thế làm những gì?</h2>
<p>Với các dịch vụ web hiện đại, người dùng mong muốn các ứng dụng luôn sẵn sàng hoạt động 24/7 và các lập trình viên muốn triển khai các phiên bản của ứng dụng đó nhiều lần trong ngày. Việc đóng gói ứng dụng vào container giúp giải quyết mục tiêu này, cho phép các ứng dụng được phát hành và cập nhật một cách dễ dàng, nhanh chóng mà không có downtime. Kubernetes giúp bạn đảm bảo các ứng dụng container chạy ở bất kì đâu và bất kì lúc nào bạn muốn, đồng thời giúp chúng tìm thấy các tài nguyên và công cụ cần thiết để chạy. Kubernetes là nền tảng mã nguồn mở, chạy được trong môi trường production, được thiết kế và phát triển bởi Google, kết hợp với những ý tưởng tốt nhất từ cộng đồng.</p>
</div>
</div>
<br>
<div id="basics-modules" class="content__modules">
<h2>Những mô-đun Kubernetes cơ bản</h2>
<div class="row">
<div class="col-md-12">
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Tạo mới một cụm Kubernetes</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Triển khai ứng dụng</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Khám phá ứng dụng</h5></a>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-12">
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Đưa ứng dụng ra public</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Scale ứng dụng</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. Cập nhật ứng dụng</h5></a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</main>
</div>
</body>
</html>


@ -19,7 +19,7 @@ Cloud infrastructure technologies let you run Kubernetes on public, private, and
Kubernetes believes in automated, API-driven infrastructure without tight coupling between
components.
-->
使用云基础设施技术,你可以在有云、私有云或者混合云环境中运行 Kubernetes。
使用云基础设施技术,你可以在公有云、私有云或者混合云环境中运行 Kubernetes。
Kubernetes 的信条是基于自动化的、API 驱动的基础设施,同时避免组件间紧密耦合。
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是">}}


@ -102,7 +102,7 @@ Before choosing a guide, here are some considerations:
* [证书](/zh/docs/concepts/cluster-administration/certificates/)节描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/zh/docs/concepts/containers/container-environment/)描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
* [控制到 Kubernetes API 的访问](/zh/docs/reference/access-authn-authz/controlling-access/)描述了如何为用户和 service accounts 建立权限许可。
* [控制到 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/)描述了如何为用户和 service accounts 建立权限许可。
* [认证](/docs/reference/access-authn-authz/authentication/)节阐述了 Kubernetes 中的身份认证功能,包括许多认证选项。
* [鉴权](/zh/docs/reference/access-authn-authz/authorization/)从认证中分离出来,用于控制如何处理 HTTP 请求。
* [使用准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers) 阐述了在认证和授权之后拦截到 Kubernetes API 服务的请求的插件。


@ -5,6 +5,8 @@ content_type: concept
<!-- overview -->
{{% thirdparty-content %}}
<!--
Add-ons extend the functionality of Kubernetes.
@ -34,6 +36,8 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
@ -63,6 +67,15 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Multus](https://github.com/Intel-Corp/multus-cni) 是一个多插件,可在 Kubernetes 中提供多种网络支持,
以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel),
而且包含了在 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) 是一个 Kubernetes 网络驱动,
基于 [OVNOpen Virtual Network](https://github.com/ovn-org/ovn/)实现,是从 Open vSwitch (OVS)
项目衍生出来的虚拟网络实现。
OVN-Kubernetes 为 Kubernetes 提供基于覆盖网络的网络实现,包括一个基于 OVS 实现的负载均衡器
和网络策略。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是一个基于 OVN 的 CNI
控制器插件,提供基于云原生的服务功能链条(Service Function Chaining,SFC)、多种 OVN 覆盖
网络、动态子网创建、动态虚拟网络创建、VLAN 驱动网络、直接驱动网络,并且可以
驳接其他的多网络插件,适用于基于边缘的、多集群联网的云原生工作负载。
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 容器插件(NCP)
提供了 VMware NSX-T 与容器协调器(例如 Kubernetes)之间的集成,以及 NSX-T 与基于容器的
CaaS / PaaS 平台(例如关键容器服务(PKS)和 OpenShift)之间的集成。


@ -853,7 +853,7 @@ You can fetch like this:
<!--
In addition to the queued requests,
the output includeas one phantom line for each priority level that is exempt from limitation.
the output includes one phantom line for each priority level that is exempt from limitation.
-->
针对每个优先级别,输出中还包含一条虚拟记录,对应豁免限制。
@ -881,4 +881,4 @@ You can make suggestions and feature requests via
-->
有关API优先级和公平性的设计细节的背景信息
请参阅[增强建议](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md)。
你可以通过 [SIG APIMachinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery) 提出建议和特性请求。
你可以通过 [SIG APIMachinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery) 提出建议和特性请求。


@ -14,9 +14,9 @@ weight: 60
<!-- overview -->
<!--
Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
-->
应用和系统日志可以让你了解集群内部的运行状况。日志对调试问题和监控集群活动非常有用。
应用日志可以让你了解应用内部的运行状况。日志对调试问题和监控集群活动非常有用。
大部分现代化应用都有某种日志记录机制;同样地,大多数容器引擎也被设计成支持某种日志记录机制。
针对容器化应用,最简单且受欢迎的日志记录方式就是写入标准输出和标准错误流。
@ -45,14 +45,13 @@ the description of how logs are stored and handled on the node to be useful.
In this section, you can see an example of basic logging in Kubernetes that
outputs data to the standard output stream. This demonstration uses
a [pod specification](/examples/debug/counter-pod.yaml) with
a container that writes some text to standard output once per second.
a pod specification with a container that writes some text to standard output
once per second.
-->
## Kubernetes 中的基本日志记录
本节你会看到一个kubernetes 中生成基本日志的例子,该例子中数据被写入到标准输出。
这里通过一个特定的 [Pod 规约](/examples/debug/counter-pod.yaml) 演示创建一个容器,
并令该容器每秒钟向标准输出写入数据。
这里的示例为包含一个容器的 Pod 规约,该容器每秒钟向标准输出写入数据。
{{< codenew file="debug/counter-pod.yaml" >}}


@ -140,6 +140,8 @@ imply any preferential status.
接下来的网络技术是按照首字母排序,顺序本身并无其他意义。
{{% thirdparty-content %}}
<!--
### ACI
@ -267,6 +269,19 @@ BCF 被 Gartner 认为是非常有远见的。
而 BCF 的一条关于 Kubernetes 的本地部署(其中包括 Kubernetes、DC/OS 和在不同地理区域的多个
DC 上运行的 VMware)也在[这里](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/)被引用。
<!--
### Calico
[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with [cloud provider CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.
-->
### Calico
[Calico](https://docs.projectcalico.org/) 是一个开源的联网及网络安全方案,
用于基于容器、虚拟机和本地主机的工作负载。
Calico 支持多个数据面,包括:纯 Linux eBPF 的数据面、标准的 Linux 联网数据面
以及 Windows HNS 数据面。Calico 在提供完整的联网堆栈的同时,还可与
[云驱动 CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) 联合使用,以保证网络策略实施。
<!--
### Cilium
@ -637,27 +652,6 @@ OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方
它允许创建逻辑交换器、逻辑路由、状态 ACL、负载均衡等等来建立不同的虚拟网络拓扑。
该项目有一个特定的Kubernetes插件和文档 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)。
<!--
### Project Calico
[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking.
-->
### Calico 项目 {#project-calico}
[Calico 项目](https://docs.projectcalico.org/) 是一个开源的容器网络提供者和网络策略引擎。
Calico 提供了高度可扩展的网络和网络解决方案,使用基于与 Internet 相同的 IP 网络原理来连接 Kubernetes Pod
适用于 Linux (开放源代码)和 Windows专有-可从 [Tigera](https://www.tigera.io/essentials/) 获得。
可以无需封装或覆盖即可部署 Calico以提供高性能高可扩的数据中心网络。
Calico 还通过其分布式防火墙为 Kubernetes Pod 提供了基于意图的细粒度网络安全策略。
Calico 还可以和其他的网络解决方案(比如 Flannel、[canal](https://github.com/tigera/canal)
或原生 GCE、AWS、Azure 网络等)一起以策略实施模式运行。
<!--
### Romana


@ -174,7 +174,7 @@ The kubelet collects accelerator metrics through cAdvisor. To collect these metr
The responsibility for collecting accelerator metrics now belongs to the vendor rather than the kubelet. Vendors must provide a container that collects metrics and exposes them to the metrics service (for example, Prometheus).
The [`DisableAcceleratorUsageMetrics` feature gate](/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
The [`DisableAcceleratorUsageMetrics` feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
-->
## 禁用加速器指标
@ -185,7 +185,9 @@ kubelet 在驱动程序上保持打开状态。这意味着为了执行基础结
现在,收集加速器指标的责任属于供应商,而不是 kubelet。供应商必须提供一个收集指标的容器
并将其公开给指标服务(例如 Prometheus
[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false)禁止由 kubelet 收集的指标,并[带有一条时间线,默认情况下会启用此功能](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria)。
[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/references/command-line-tools-reference/feature-gate.md#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false)
禁止由 kubelet 收集的指标。
关于[何时会在默认情况下启用此功能也有一定规划](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria)。
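For reference, a minimal sketch of how this gate could be turned on through the kubelet configuration file; all other KubeletConfiguration fields are omitted here, so treat it as a fragment rather than a complete configuration.

```yaml
# Fragment of a KubeletConfiguration; remaining fields omitted for brevity.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  DisableAcceleratorUsageMetrics: true   # stop the kubelet from collecting accelerator metrics
```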
<!--
## Component metrics
@ -233,4 +235,4 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
-->
* 阅读有关指标的 [Prometheus 文本格式](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
* 查看 [Kubernetes 稳定指标](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)的列表
* 阅读有关 [Kubernetes 弃用策略](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
* 阅读有关 [Kubernetes 弃用策略](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)

View File

@ -2,4 +2,6 @@
title: 概述
weight: 20
description: 获得 Kubernetes 及其构件的高层次概要。
sitemap:
priority: 0.9
---

View File

@ -24,7 +24,7 @@ card:
<!--
When you deploy Kubernetes, you get a cluster.
{{< glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of">}}
{{</* glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of" */>}}
This document outlines the various components you need to have
a complete and working Kubernetes cluster.
@ -203,8 +203,7 @@ Kubernetes 启动的容器自动将此 DNS 服务器包含在其 DNS 搜索列
-->
### Web 界面(仪表盘)
[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) 是K
ubernetes 集群的通用的、基于 Web 的用户界面。
[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) 是Kubernetes 集群的通用的、基于 Web 的用户界面。
它使用户可以管理集群中运行的应用程序以及集群本身并进行故障排除。
<!--

View File

@ -3,7 +3,9 @@ title: Kubernetes API
content_type: concept
weight: 30
description: >
Kubernetes API 使你可以查询和操纵 Kubernetes 中对象的状态。Kubernetes 控制平面的核心是 API 服务器和它暴露的 HTTP API。 用户、集群的不同部分以及外部组件都通过 API 服务器相互通信。
Kubernetes API 使你可以查询和操纵 Kubernetes 中对象的状态。
Kubernetes 控制平面的核心是 API 服务器和它暴露的 HTTP API。
用户、集群的不同部分以及外部组件都通过 API 服务器相互通信。
card:
name: concepts
weight: 30
@ -14,13 +16,17 @@ card:
<!--
The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server
exposes an HTTP API that lets end users, different parts of your cluster, and external components
communicate with one another.
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
(for example: Pods, Namespaces, ConfigMaps, and Events).
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
Most operations can be performed through the
[kubectl](/docs/reference/kubectl/overview/) command-line interface or other
command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/), which in turn use the
API. However, you can also access the API directly using REST calls.
-->
Kubernetes {{< glossary_tooltip text="控制面" term_id="control-plane" >}}
的核心是 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}。
@ -29,38 +35,18 @@ API 服务器负责提供 HTTP API以供用户、集群中的不同部分和
Kubernetes API 使你可以查询和操纵 Kubernetes API
中对象例如Pod、Namespace、ConfigMap 和 Event的状态。
API 末端、资源类型以及示例都在[API 参考](/zh/docs/reference/kubernetes-api/)中描述。
<!-- body -->
大部分操作都可以通过 [kubectl](/zh/docs/reference/kubectl/overview/) 命令行接口或
类似 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 这类命令行工具来执行,
这些工具在背后也是调用 API。不过你也可以使用 REST 调用来访问这些 API。
<!--
## API changes
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has design features to allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
-->
## API 变更 {#api-changes}
如果你正在编写程序来访问 Kubernetes API可以考虑使用
[客户端库](/zh/docs/reference/using-api/client-libraries/)之一。
任何成功的系统都要随着新的使用案例的出现和现有案例的变化来成长和变化。
为此Kubernetes 的功能特性设计考虑了让 Kubernetes API 能够持续变更和成长的因素。
Kubernetes 项目的目标是 _不要_ 引发现有客户端的兼容性问题,并在一定的时期内
维持这种兼容性,以便其他项目有机会作出适应性变更。
一般而言,新的 API 资源和新的资源字段可以被频繁地添加进来。
删除资源或者字段则要遵从
[API 废弃策略](/docs/reference/using-api/deprecation-policy/)。
关于什么是兼容性的变更,如何变更 API 等详细信息,可参考
[API 变更](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme)。
<!-- body -->
<!--
## OpenAPI specification {#api-specification}
@ -70,7 +56,6 @@ Complete API details are documented using [OpenAPI](https://www.openapis.org/).
The Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
You can request the response format using request headers as follows:
-->
## OpenAPI 规范 {#api-specification}
完整的 API 细节是用 [OpenAPI](https://www.openapis.org/) 来表述的。
@ -142,204 +127,137 @@ Kubernetes API 服务器通过 `/openapi/v2` 末端提供 OpenAPI 规范。
</table>
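As an illustration, the spec can be retrieved either through `kubectl` or directly over HTTPS with an `Accept` header; the server address and bearer token below are placeholders.

```bash
# Fetch the OpenAPI v2 spec using the credentials from your kubeconfig.
kubectl get --raw /openapi/v2 > openapi-v2.json

# Or request a specific serialization directly; <apiserver> and <token> are placeholders.
curl -k \
  -H "Authorization: Bearer <token>" \
  -H "Accept: application/json" \
  https://<apiserver>:6443/openapi/v2
```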
<!--
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.
-->
Kubernetes 为 API 实现了一种基于 Protobuf 的序列化格式,主要用于集群内部的通信。
相关文档位于[设计提案](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)。
每种 Schema 对应的 IDL 位于定义 API 对象的 Go 包中。
Kubernetes 为 API 实现了一种基于 Protobuf 的序列化格式,主要用于集群内部通信。
关于此格式的详细信息,可参考
[Kubernetes Protobuf 序列化](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)
设计提案。每种模式对应的接口描述语言IDL位于定义 API 对象的 Go 包中。
<!--
## API versioning
## API changes
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
multiple API versions, each at a different API path, such as `/api/v1` or
`/apis/rbac.authorization.k8s.io/v1alpha1`.
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has designed its features to allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
-->
## API 版本 {#api-versioning}
## API 变更 {#api-changes}
任何成功的系统都要随着新的使用案例的出现和现有案例的变化来成长和变化。
为此Kubernetes 的功能特性设计考虑了让 Kubernetes API 能够持续变更和成长的因素。
Kubernetes 项目的目标是 _不要_ 引发现有客户端的兼容性问题,并在一定的时期内
维持这种兼容性,以便其他项目有机会作出适应性变更。
一般而言,新的 API 资源和新的资源字段可以被频繁地添加进来。
删除资源或者字段则要遵从
[API 废弃策略](/zh/docs/reference/using-api/deprecation-policy/)。
关于什么是兼容性的变更、如何变更 API 等详细信息,可参考
[API 变更](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme)。
<!--
## API groups and versioning
To make it easier to eliminate fields or restructure resource representations,
Kubernetes supports multiple API versions, each at a different API path, such
as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`.
-->
## API 组和版本 {#api-groups-and-versioning}
为了简化删除字段或者重构资源表示等工作Kubernetes 支持多个 API 版本,
每一个版本都在不同 API 路径下,例如 `/api/v1`
`/apis/rbac.authorization.k8s.io/v1alpha1`
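You can list the group/version paths that your own cluster actually serves; the exact output depends on the cluster.

```bash
# Prints every enabled group/version, for example "v1" (the core group)
# and "rbac.authorization.k8s.io/v1".
kubectl api-versions
```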
<!--
Versioning is done at the API level rather than at the resource or field level to ensure that the
API presents a clear, consistent view of system resources and behavior, and to enable controlling
access to end-of-life and/or experimental APIs.
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.
Versioning is done at the API level rather than at the resource or field level
to ensure that the API presents a clear, consistent view of system resources
and behavior, and to enable controlling access to end-of-life and/or
experimental APIs.
-->
版本化是在 API 级别而不是在资源或字段级别进行的,目的是为了确保 API
为系统资源和行为提供清晰、一致的视图,并能够控制对已废止的和/或实验性 API 的访问。
JSON 和 Protobuf 序列化模式遵循 schema 更改的相同准则 - 下面的所有描述都同时适用于这两种格式。
<!--
To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/#api-groups) that can be
[enabled or disabled](/docs/reference/using-api/#enabling-or-disabling).
-->
为了便于演化和扩展其 APIKubernetes 实现了
可被[启用或禁用](/zh/docs/reference/using-api/#enabling-or-disabling)的
[API 组](/zh/docs/reference/using-api/#api-groups)。
<!--
Note that API versioning and Software versioning are only indirectly related. The
[Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
proposal describes the relationship between API versioning and software versioning.
Different API versions imply different levels of stability and support. The criteria for each level are described
in more detail in the
[API Changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
documentation. They are summarized here:
API resources are distinguished by their API group, resource type, namespace
(for namespaced resources), and name. The API server may serve the same
underlying data through multiple API version and handle the conversion between
API versions transparently. All these different versions are actually
representations of the same resource. For example, suppose there are two
versions `v1` and `v1beta1` for the same resource. An object created by the
`v1beta1` version can then be read, updated, and deleted by either the
`v1beta1` or the `v1` versions.
-->
请注意API 版本控制和软件版本控制只有间接相关性。
[Kubernetes 发行版本提案](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
中描述了 API 版本与软件版本之间的关系。
不同的 API 版本名称意味着不同级别的软件稳定性和支持程度。
每个级别的判定标准在
[API 变更文档](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
中有更详细的描述。
这些标准主要概括如下:
API 资源之间靠 API 组、资源类型、名字空间(对于名字空间作用域的资源而言)和
名字来相互区分。API 服务器可能通过多个 API 版本来向外提供相同的下层数据,
并透明地完成不同 API 版本之间的转换。所有这些不同的版本实际上都是同一资源
的(不同)表现形式。例如,假定同一资源有 `v1``v1beta1` 版本,
使用 `v1beta1` 创建的对象则可以使用 `v1beta1` 或者 `v1` 版本来读取、更改
或者删除。
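A hedged sketch of what this looks like with `kubectl`, using its fully qualified `<resource>.<version>.<group>` form; the resource `widgets`, the group `example.com`, and the object name `example-object` are hypothetical and assume both versions are served.

```bash
# Read the same hypothetical object through two different served API versions.
kubectl get widgets.v1beta1.example.com example-object -o yaml
kubectl get widgets.v1.example.com      example-object -o yaml
```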
<!--
- Alpha level:
- The version names contain `alpha` (e.g. `v1alpha1`).
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
Refer to [API versions reference](/docs/reference/using-api/#api-versioning)
for more details on the API version level definitions.
-->
- Alpha 级别:
- 版本名称包含 `alpha`(例如:`v1alpha1`
- API 可能是有缺陷的。启用该功能可能会带来问题,默认情况是禁用的
- 对相关功能的支持可能在没有通知的情况下随时终止
- API 可能在将来的软件发布中出现不兼容性的变更,此类变更不会另行通知
- 由于缺陷风险较高且缺乏长期支持,推荐仅在短暂的集群测试中使用
关于 API 版本级别的详细定义,请参阅
[API 版本参考](/zh/docs/reference/using-api/#api-versioning)。
<!--
- Beta level:
- The version names contain `beta` (e.g. `v2beta3`).
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
- Support for the overall feature will not be dropped, though details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
## API Extension
The Kubernetes API can be extended in one of two ways:
-->
- Beta 级别:
- 版本名称包含 `beta`(例如:`v2beta3`
- 代码已经充分测试过。启用该功能被认为是安全的,功能默认已启用。
- 所支持的功能作为一个整体不会被删除,尽管细节可能会发生变更。
- 对象的模式和/或语义可能会在后续的 beta 发行版或稳定版中以不兼容的方式进行更改。
发生这种情况时,我们将提供如何迁移到新版本的说明。
迁移操作可能需要删除、编辑和重新创建 API 对象。
执行编辑操作时可能需要动些脑筋。
迁移过程中可能需要停用依赖该功能的应用程序。
- 建议仅用于非业务关键性用途,因为后续版本中可能存在不兼容的更改。
如果你有多个可以独立升级的集群,则可以放宽此限制。
- **请尝试我们的 beta 版本功能并且给出反馈!一旦它们结束 beta 阶段,进一步变更可能就不太现实了。**
<!--
- Stable level:
- The version name is `vX` where `X` is an integer.
- Stable versions of features will appear in released software for many subsequent versions.
-->
- 稳定级别:
- 版本名称是 `vX`,其中 `X` 是整数。
- 功能的稳定版本将出现在许多后续版本的发行软件中。
## API 扩展 {#api-extension}
有两种途径来扩展 Kubernetes API
<!--
## API groups
To make it easier to extend the Kubernetes API, Kubernetes implemented [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
1. [Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
let you declaratively define how the API server should provide your chosen resource API.
1. You can also extend the Kubernetes API by implementing an
[aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
-->
## API 组 {#api-groups}
为了更容易地扩展 Kubernetes APIKubernetes 实现了
[*`API组`*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)。
API 组在 REST 路径和序列化对象的 `apiVersion` 字段中指定。
<!--
There are several API groups in a cluster:
1. The *core* group, also referred to as the *legacy* group, is at the REST path `/api/v1` and uses `apiVersion: v1`.
1. *Named* groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
(e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
full list of available API groups.
-->
集群中存在若干 API 组:
1. *核心Core*组,通常被称为 *遗留Legacy* 组,位于 REST 路径 `/api/v1`
使用 `apiVersion: v1`
1. *命名Named* 组 REST 路径 `/apis/$GROUP_NAME/$VERSION`,使用
`apiVersion: $GROUP_NAME/$VERSION`(例如 `apiVersion: batch/v1`)。
[Kubernetes API 参考](/zh/docs/reference/kubernetes-api/)中枚举了可用的 API 组的完整列表。
<!--
There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):
1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
lets you declaratively define how the API server should provide your chosen resource API.
1. You can also [implement your own extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/)
and use the [aggregator](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
to make it seamless for clients.
-->
有两种途径来扩展 Kubernetes API 以支持
[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
1. 使用 [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
你可以用声明式方式来定义 API 如何提供你所选择的资源 API。
1. 你也可以选择[实现自己的扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)
并使用[聚合器](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
为客户提供无缝的服务。
<!--
## Enabling or disabling API groups
Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `-runtime-config`
on apiserver. `-runtime-config` accepts comma separated values. For example: to disable batch/v1, set
`-runtime-config=batch/v1=false`, to enable batch/v2alpha1, set `-runtime-config=batch/v2alpha1`.
The flag accepts comma separated set of key=value pairs describing runtime configuration of the apiserver.
Enabling or disabling groups or resources requires restarting apiserver and controller-manager
to pick up the `-runtime-config` changes.
-->
## 启用或禁用 API 组 {#enabling-or-disabling-api-groups}
某些资源和 API 组默认情况下处于启用状态。可以通过为 `kube-apiserver`
设置 `--runtime-config` 命令行选项来启用或禁用它们。
`--runtime-config` 接受逗号分隔的值。
例如:要禁用 `batch/v1`,设置 `--runtime-config=batch/v1=false`
要启用 `batch/v2alpha1`,设置`--runtime-config=batch/v2alpha1`。
该标志接受逗号分隔的一组"key=value"键值对,用以描述 API 服务器的运行时配置。
{{< note >}}
启用或禁用组或资源需要重新启动 `kube-apiserver``kube-controller-manager`
来使得 `--runtime-config` 更改生效。
{{< /note >}}
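A hedged sketch of the corresponding API server invocation; the chosen group/versions are only examples.

```bash
# Disable batch/v1 and enable batch/v2alpha1 in a single --runtime-config setting.
kube-apiserver --runtime-config=batch/v1=false,batch/v2alpha1=true ...
```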
<!--
## Persistence
Kubernetes stores its serialized state in terms of the API resources by writing them into
{{< glossary_tooltip term_id="etcd" >}}.
-->
## 持久性 {#persistence}
Kubernetes 也将其 API 资源的序列化状态保存起来,写入到 {{< glossary_tooltip term_id="etcd" >}}。
1. 你可以使用[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
   来以声明式方式定义 API 服务器如何提供你所选择的资源 API参见本列表之后的示例
1. 你也可以选择实现自己的
[聚合层](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
来扩展 Kubernetes API。
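To make the first option concrete, here is a minimal, hedged CustomResourceDefinition sketch; the group `example.com` and the kind `Widget` are hypothetical.

```yaml
# Minimal CustomResourceDefinition sketch; group and kind are placeholders.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com     # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true              # this version is exposed by the API server
      storage: true             # objects are persisted in this version
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```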
## {{% heading "whatsnext" %}}
<!--
[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
Overall API conventions are described in the
[API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
document.
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
- Learn how to extend the Kubernetes API by adding your own
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
- [Controlling Access To The Kubernetes API](/docs/concepts/security/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
- Learn about API endpoints, resource types and samples by reading
[API Reference](/docs/reference/kubernetes-api/).
-->
* [控制 API 访问](/zh/docs/reference/access-authn-authz/controlling-access/)
描述了集群如何为 API 访问管理身份认证和权限判定;
* 总体的 API 约定描述位于 [API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)中;
* API 末端、资源类型和示例等均在 [API 参考文档](/zh/docs/reference/kubernetes-api/)中描述
- 了解如何通过添加你自己的
[CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
来扩展 Kubernetes API。
- [控制 Kubernetes API 访问](/zh/docs/concepts/security/controlling-access/)
页面描述了集群如何针对 API 访问管理身份认证和鉴权。
- 通过阅读 [API 参考](/zh/docs/reference/kubernetes-api/)
了解 API 端点、资源类型以及示例。

View File

@ -9,7 +9,6 @@ card:
weight: 10
---
<!--
---
reviewers:
- bgrant0607
- mikedanese
@ -19,7 +18,6 @@ weight: 10
card:
name: concepts
weight: 10
---
-->
<!-- overview -->
@ -33,18 +31,22 @@ This page is an overview of Kubernetes.
<!--
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
-->
Kubernetes 是一个可移植的、可扩展的开源平台用于管理容器化的工作负载和服务可促进声明式配置和自动化。Kubernetes 拥有一个庞大且快速增长的生态系统。Kubernetes 的服务、支持和工具广泛可用。
Kubernetes 是一个可移植的、可扩展的开源平台,用于管理容器化的工作负载和服务,可促进声明式配置和自动化。
Kubernetes 拥有一个庞大且快速增长的生态系统。Kubernetes 的服务、支持和工具广泛可用。
<!--
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a [decade and a half of experience that Google has with running production workloads at scale](https://research.google/pubs/pub43438), combined with best-of-breed ideas and practices from the community.
-->
名称 **Kubernetes** 源于希腊语,意为 "舵手" 或 "飞行员"。Google 在 2014 年开源了 Kubernetes 项目。Kubernetes 建立在 [Google 在大规模运行生产工作负载方面拥有十几年的经验](https://research.google/pubs/pub43438)的基础上,结合了社区中最好的想法和实践。
名称 **Kubernetes** 源于希腊语意为“舵手”或“飞行员”。Google 在 2014 年开源了 Kubernetes 项目。
Kubernetes 建立在 [Google 在大规模运行生产工作负载方面拥有十几年的经验](https://research.google/pubs/pub43438)
的基础上,结合了社区中最好的想法和实践。
<!--
## Going back in time
Let's take a look at why Kubernetes is so useful by going back in time.
-->
## 言归正传
## 时光回溯
让我们回顾一下为什么 Kubernetes 如此有用。
<!--
@ -54,24 +56,34 @@ Let's take a look at why Kubernetes is so useful by going back in time.
<!--
**Traditional deployment era:**
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
-->
**传统部署时代:**
早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,结果可能导致其他应用程序的性能下降。一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,并且组织维护许多物理服务器的成本很高。
早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,
结果可能导致其他应用程序的性能下降。
一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,
并且组织维护许多物理服务器的成本很高。
<!--
**Virtualized deployment era:**
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.
-->
**虚拟化部署时代:**
作为解决方案,引入了虚拟化功能,它允许您在单个物理服务器的 CPU 上运行多个虚拟机VM。虚拟化功能允许应用程序在 VM 之间隔离,并提供安全级别,因为一个应用程序的信息不能被另一应用程序自由地访问。
作为解决方案,引入了虚拟化。虚拟化技术允许你在单个物理服务器的 CPU 上运行多个虚拟机VM
虚拟化允许应用程序在 VM 之间隔离,并提供一定程度的安全,因为一个应用程序的信息
不能被另一应用程序随意访问。
<!--
Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more.
Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
-->
因为虚拟化可以轻松地添加或更新应用程序、降低硬件成本等等,所以虚拟化可以更好地利用物理服务器中的资源,并可以实现更好的可伸缩性。
虚拟化技术能够更好地利用物理服务器上的资源,并且因为可轻松地添加或更新应用程序
而可以实现更好的可伸缩性,降低硬件成本等等。
每个 VM 是一台完整的计算机,在虚拟化硬件之上运行所有组件,包括其自己的操作系统。
@ -80,12 +92,15 @@ Each VM is a full machine running all the components, including its own operatin
Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
-->
**容器部署时代:**
容器类似于 VM但是它们具有轻量级的隔离属性可以在应用程序之间共享操作系统OS。因此容器被认为是轻量级的。容器与 VM 类似具有自己的文件系统、CPU、内存、进程空间等。由于它们与基础架构分离因此可以跨云和 OS 分发进行移植。
容器类似于 VM但是它们具有被放宽的隔离属性可以在应用程序之间共享操作系统OS
因此,容器被认为是轻量级的。容器与 VM 类似具有自己的文件系统、CPU、内存、进程空间等。
由于它们与基础架构分离,因此可以跨云和 OS 发行版本进行移植。
<!--
Containers are becoming popular because they have many benefits. Some of the container benefits are listed below:
-->
容器因具有许多优势而变得流行起来。下面列出容器的一些好处:
容器因具有许多优势而变得流行起来。下面列出的是容器的一些好处:
<!--
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
@ -100,13 +115,18 @@ Containers are becoming popular because they have many benefits. Some of the con
* Resource utilization: high efficiency and density.
-->
* 敏捷应用程序的创建和部署:与使用 VM 镜像相比,提高了容器镜像创建的简便性和效率。
* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性),提供可靠且频繁的容器镜像构建和部署。
* 关注开发与运维的分离:在构建/发布时而不是在部署时创建应用程序容器镜像,从而将应用程序与基础架构分离。
* 持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变性),支持可靠且频繁的
容器镜像构建和部署。
* 关注开发与运维的分离:在构建/发布时而不是在部署时创建应用程序容器镜像,
从而将应用程序与基础架构分离。
* 可观察性不仅可以显示操作系统级别的信息和指标,还可以显示应用程序的运行状况和其他指标信号。
* 跨开发、测试和生产的环境一致性:在便携式计算机上与在云中相同地运行。
* 云和操作系统分发的可移植性:可在 Ubuntu、RHEL、CoreOS、本地、Google Kubernetes Engine 和其他任何地方运行。
* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在 OS 上运行应用程序。
* 松散耦合、分布式、弹性、解放的微服务:应用程序被分解成较小的独立部分,并且可以动态部署和管理 - 而不是在一台大型单机上整体运行。
* 跨云和操作系统发行版本的可移植性:可在 Ubuntu、RHEL、CoreOS、本地、
Google Kubernetes Engine 和其他任何地方运行。
* 以应用程序为中心的管理:提高抽象级别,从在虚拟硬件上运行 OS 到使用逻辑资源在
OS 上运行应用程序。
* 松散耦合、分布式、弹性、解放的微服务:应用程序被分解成较小的独立部分,
并且可以动态部署和管理 - 而不是在一台大型单机上整体运行。
* 资源隔离:可预测的应用程序性能。
* 资源利用:高效率和高密度。
@ -118,59 +138,75 @@ Containers are becoming popular because they have many benefits. Some of the con
<!--
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
-->
容器是打包和运行应用程序的好方式。在生产环境中,您需要管理运行应用程序的容器,并确保不会停机。例如,如果一个容器发生故障,则需要启动另一个容器。如果系统处理此行为,会不会更容易?
容器是打包和运行应用程序的好方式。在生产环境中,你需要管理运行应用程序的容器,并确保不会停机。
例如,如果一个容器发生故障,则需要启动另一个容器。如果系统处理此行为,会不会更容易?
<!--
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
-->
这就是 Kubernetes 的救援方法Kubernetes 为您提供了一个可弹性运行分布式系统的框架。Kubernetes 会满足您的扩展要求、故障转移、部署模式等。例如Kubernetes 可以轻松管理系统的 Canary 部署。
这就是 Kubernetes 来解决这些问题的方法!
Kubernetes 为你提供了一个可弹性运行分布式系统的框架。
Kubernetes 会满足你的扩展要求、故障转移、部署模式等。
例如Kubernetes 可以轻松管理系统的 Canary 部署。
<!--
Kubernetes provides you with:
-->
Kubernetes 为提供:
Kubernetes 为提供:
<!--
* **Service discovery and load balancing**
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
-->
* **服务发现和负载均衡**
Kubernetes 可以使用 DNS 名称或自己的 IP 地址公开容器如果到容器的流量很大Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。
* **服务发现和负载均衡**
Kubernetes 可以使用 DNS 名称或自己的 IP 地址公开容器,如果进入容器的流量很大,
Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。
<!--
* **Storage orchestration**
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
-->
* **存储编排**
Kubernetes 允许您自动挂载您选择的存储系统,例如本地存储、公共云提供商等。
* **存储编排**
Kubernetes 允许你自动挂载你选择的存储系统,例如本地存储、公共云提供商等。
<!--
* **Automated rollouts and rollbacks**
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
-->
* **自动部署和回滚**
您可以使用 Kubernetes 描述已部署容器的所需状态,它可以以受控的速率将实际状态更改为所需状态。例如,您可以自动化 Kubernetes 来为您的部署创建新容器,删除现有容器并将它们的所有资源用于新容器。
* **自动部署和回滚**
你可以使用 Kubernetes 描述已部署容器的所需状态,它可以以受控的速率将实际状态
更改为期望状态。例如,你可以自动化 Kubernetes 来为你的部署创建新容器,
删除现有容器并将它们的所有资源用于新容器。
<!--
* **Automatic bin packing**
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers.
-->
* **自动二进制打包**
Kubernetes 允许您指定每个容器所需 CPU 和内存RAM。当容器指定了资源请求时Kubernetes 可以做出更好的决策来管理容器的资源。
* **自动完成装箱计算**
Kubernetes 允许你指定每个容器所需 CPU 和内存RAM
当容器指定了资源请求时Kubernetes 可以做出更好的决策来管理容器的资源。
<!--
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that dont respond to your user-defined health check, and doesnt advertise them to clients until they are ready to serve.
-->
* **自我修复**
Kubernetes 重新启动失败的容器、替换容器、杀死不响应用户定义的运行状况检查的容器,并且在准备好服务之前不将其通告给客户端。
* **自我修复**
Kubernetes 重新启动失败的容器、替换容器、杀死不响应用户定义的
运行状况检查的容器,并且在准备好服务之前不将其通告给客户端。
<!--
* **Secret and configuration management**
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
-->
* **密钥与配置管理**
Kubernetes 允许您存储和管理敏感信息例如密码、OAuth 令牌和 ssh 密钥。您可以在不重建容器镜像的情况下部署和更新密钥和应用程序配置,也无需在堆栈配置中暴露密钥。
* **密钥与配置管理**
Kubernetes 允许你存储和管理敏感信息例如密码、OAuth 令牌和 ssh 密钥。
你可以在不重建容器镜像的情况下部署和更新密钥和应用程序配置,也无需在堆栈配置中暴露密钥。
<!--
## What Kubernetes is not
@ -180,7 +216,11 @@ Kubernetes 允许您存储和管理敏感信息例如密码、OAuth 令牌和
<!--
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.
-->
Kubernetes 不是传统的、包罗万象的 PaaS平台即服务系统。由于 Kubernetes 在容器级别而不是在硬件级别运行,因此它提供了 PaaS 产品共有的一些普遍适用的功能例如部署、扩展、负载均衡、日志记录和监视。但是Kubernetes 不是单一的默认解决方案是可选和可插拔的。Kubernetes 提供了构建开发人员平台的基础,但是在重要的地方保留了用户的选择和灵活性。
Kubernetes 不是传统的、包罗万象的 PaaS平台即服务系统。
由于 Kubernetes 在容器级别而不是在硬件级别运行,它提供了 PaaS 产品共有的一些普遍适用的功能,
例如部署、扩展、负载均衡、日志记录和监视。
但是Kubernetes 不是单体系统,默认解决方案都是可选和可插拔的。
Kubernetes 提供了构建开发人员平台的基础,但是在重要的地方保留了用户的选择和灵活性。
<!--
Kubernetes:
@ -192,22 +232,33 @@ Kubernetes
* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, mysql), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
-->
* Kubernetes 不限制支持的应用程序类型。Kubernetes 旨在支持极其多种多样的工作负载,包括无状态、有状态和数据处理工作负载。如果应用程序可以在容器中运行,那么它应该可以在 Kubernetes 上很好地运行。
* Kubernetes 不部署源代码,也不构建您的应用程序。持续集成(CI)、交付和部署CI/CD工作流取决于组织的文化和偏好以及技术要求。
* Kubernetes 不提供应用程序级别的服务作为内置服务例如中间件例如消息中间件、数据处理框架例如Spark、数据库例如mysql、缓存、集群存储系统例如Ceph。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在 Kubernetes 上的应用程序通过可移植机制(例如,[开放服务代理](https://openservicebrokerapi.org/))来访问。
* 不限制支持的应用程序类型。
Kubernetes 旨在支持极其多种多样的工作负载,包括无状态、有状态和数据处理工作负载。
如果应用程序可以在容器中运行,那么它应该可以在 Kubernetes 上很好地运行。
* 不部署源代码,也不构建你的应用程序。
持续集成(CI)、交付和部署CI/CD工作流取决于组织的文化和偏好以及技术要求。
* 不提供应用程序级别的服务作为内置服务,例如中间件(例如,消息中间件)、
数据处理框架例如Spark、数据库例如mysql、缓存、集群存储系统
例如Ceph。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在
Kubernetes 上的应用程序通过可移植机制(例如,
[开放服务代理](https://openservicebrokerapi.org/))来访问。
<!--
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldnt matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
-->
* Kubernetes 不指定日志记录、监视或警报解决方案。它提供了一些集成作为概念证明,并提供了收集和导出指标的机制。
* Kubernetes 不提供或不要求配置语言/系统(例如 jsonnet它提供了声明性 API该声明性 API 可以由任意形式的声明性规范所构成。
* Kubernetes 不提供也不采用任何全面的机器配置、维护、管理或自我修复系统。
* 此外Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。编排的技术定义是执行已定义的工作流程:首先执行 A然后执行 B再执行 C。相比之下Kubernetes 包含一组独立的、可组合的控制过程,这些过程连续地将当前状态驱动到所提供的所需状态。从 A 到 C 的方式无关紧要,也不需要集中控制,这使得系统更易于使用且功能更强大、健壮、弹性和可扩展性。
* 不要求日志记录、监视或警报解决方案。
它提供了一些集成作为概念证明,并提供了收集和导出指标的机制。
* 不提供或不要求配置语言/系统(例如 jsonnet它提供了声明性 API
该声明性 API 可以由任意形式的声明性规范所构成。
* 不提供也不采用任何全面的机器配置、维护、管理或自我修复系统。
* 此外Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。
编排的技术定义是执行已定义的工作流程:首先执行 A然后执行 B再执行 C。
相比之下Kubernetes 包含一组独立的、可组合的控制过程,
这些过程连续地将当前状态驱动到所提供的所需状态。
如何从 A 到 C 的方式无关紧要,也不需要集中控制,这使得系统更易于使用
且功能更强大、系统更健壮、更为弹性和可扩展。
## {{% heading "whatsnext" %}}
@ -215,5 +266,5 @@ Kubernetes
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Ready to [Get Started](/docs/setup/)?
-->
* 查阅 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
* 开始 [Kubernetes 入门](/zh/docs/setup/)?
* 查阅 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
* 开始 [Kubernetes 入门](/zh/docs/setup/)?

View File

@ -121,18 +121,18 @@ _注解Annotations_ 存储的形式是键/值对。有效的注解键分
并允许使用破折号(`-`),下划线(`_`),点(`.`)和字母数字。
前缀是可选的。如果指定则前缀必须是DNS子域一系列由点`.`分隔的DNS标签
总计不超过253个字符后跟斜杠`/`)。
如果省略前缀,则假定注释键对用户是私有的。 由系统组件添加的注释
如果省略前缀,则假定注解键对用户是私有的。 由系统组件添加的注解
(例如,`kube-scheduler``kube-controller-manager``kube-apiserver``kubectl`
或其他第三方组件),必须为终端用户添加注前缀。
或其他第三方组件),必须为终端用户添加注前缀。
<!--
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
For example, heres the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
For example, here's the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
-->
`kubernetes.io/``k8s.io/` 前缀是为Kubernetes核心组件保留的。
例如,这是Pod的配置文件其注释为 `imageregistry: https://hub.docker.com/`
例如,下面是一个 Pod 的配置文件,其注解中包含 `imageregistry: https://hub.docker.com/`
```yaml
apiVersion: v1

View File

@ -191,11 +191,11 @@ and the `spec` format for a `Deployment` can be found
## {{% heading "whatsnext" %}}
<!--
* [Kubernetes API overview](/docs/reference/using-api/api-overview/) explains some more API concepts
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts.
-->
* [Kubernetes API 总览](/zh/docs/reference/using-api/api-overview/) 提供关于 API 概念的进一步阐述
* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh/docs/concepts/workloads/pods/)
* 了解 Kubernetes 中的[控制器](/zh/docs/concepts/architecture/controller/)
* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh/docs/concepts/workloads/pods/)。
* 了解 Kubernetes 中的[控制器](/zh/docs/concepts/architecture/controller/)。
* [使用 Kubernetes API](/zh/docs/reference/using-api/) 一节解释了一些 API 概念。

View File

@ -307,7 +307,7 @@ and [`replicationcontrollers`](/docs/concepts/workloads/controllers/replicationc
also use label selectors to specify sets of other resources, such as
[pods](/docs/concepts/workloads/pods/).
-->
### 在 API 对象设置引用
### 在 API 对象设置引用
一些 Kubernetes 对象,例如 [`services`](/zh/docs/concepts/services-networking/service/)
和 [`replicationcontrollers`](/zh/docs/concepts/workloads/controllers/replicationcontroller/)
@ -323,9 +323,11 @@ Labels selectors for both objects are defined in `json` or `yaml` files using ma
-->
#### Service 和 ReplicationController
一个 `Service` 指向的一组 pods 是由标签选择算符定义的。同样,一个 `ReplicationController` 应该管理的 pods 的数量也是由标签选择算符定义的。
一个 `Service` 指向的一组 Pods 是由标签选择算符定义的。同样,一个 `ReplicationController`
应该管理的 pods 的数量也是由标签选择算符定义的。
两个对象的标签选择算符都是在 `json` 或者 `yaml` 文件中使用映射定义的,并且只支持 _基于等值_ 需求的选择算符:
两个对象的标签选择算符都是在 `json` 或者 `yaml` 文件中使用映射定义的,并且只支持
_基于等值_ 需求的选择算符:
```json
"selector": {

View File

@ -140,8 +140,7 @@ The following resource types are supported:
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `hugepages-<size>` | Across all pods in a non-terminal state, the number of
huge page requests of the specified size cannot exceed this value. |
| `hugepages-<size>` | Across all pods in a non-terminal state, the number of huge page requests of the specified size cannot exceed this value. |
| `cpu` | Same as `requests.cpu` |
| `memory` | Same as `requests.memory` |
-->

View File

@ -1,306 +1,306 @@
---
title: Pod 开销
content_type: concept
weight: 20
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
<!--
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
resources are additional to the resources needed to run the container(s) inside the Pod.
_Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure
on top of the container requests & limits.
-->
在节点上运行 Pod 时Pod 本身也会占用一定数量的系统资源。这些资源是运行 Pod 内容器所需资源之外的附加资源。
_Pod 开销Pod Overhead_ 是一个特性,用于统计 Pod 基础设施在容器请求和限制之上消耗的资源。
<!-- body -->
<!--
## Pod Overhead
-->
## Pod 开销
<!--
In Kubernetes, the Pod's overhead is set at
[admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
time according to the overhead associated with the Pod's
[RuntimeClass](/docs/concepts/containers/runtime-class/).
-->
在 Kubernetes 中Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 相关联的开销在
[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。
<!--
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing
the Pod cgroup, and when carrying out Pod eviction ranking.
-->
当启用 Pod 开销时,在调度 Pod 时,除了考虑容器资源请求的总和外,还要考虑 Pod 开销。类似地Kubelet 将在确定 Pod cgroup 的大小和执行 Pod 驱逐排序时包含 Pod 开销。
<!--
## Enabling Pod Overhead {#set-up}
-->
## 启用 Pod 开销 {#set-up}
<!--
You need to make sure that the `PodOverhead`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
-->
您需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
(在 1.18 默认是开启的),以及一个用于定义 `overhead` 字段的 `RuntimeClass`
<!--
## Usage example
-->
## 使用示例
<!--
To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field. As
an example, you could use the following RuntimeClass definition with a virtualizing container runtime
that uses around 120MiB per Pod for the virtual machine and the guest OS:
-->
要使用 PodOverhead 特性,需要一个定义 `overhead` 字段的 RuntimeClass。
作为例子,可以在虚拟机和寄宿操作系统中通过一个虚拟化容器运行时来定义
RuntimeClass 如下,其中每个 Pod 大约使用 120MiB:
```yaml
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
name: kata-fc
handler: kata-fc
overhead:
podFixed:
memory: "120Mi"
cpu: "250m"
```
<!--
Workloads which are created which specify the `kata-fc` RuntimeClass handler will take the memory and
cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup sizing.
Consider running the given example workload, test-pod:
-->
通过指定 `kata-fc` RuntimeClass 处理程序创建的工作负载会将内存和 cpu 开销计入资源配额计算、节点调度以及 Pod cgroup 分级。
假设我们运行下面给出的工作负载示例 test-pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
runtimeClassName: kata-fc
containers:
- name: busybox-ctr
image: busybox
stdin: true
tty: true
resources:
limits:
cpu: 500m
memory: 100Mi
- name: nginx-ctr
image: nginx
resources:
limits:
cpu: 1500m
memory: 100Mi
```
<!--
At admission time the RuntimeClass [admission controller](/docs/reference/access-authn-authz/admission-controllers/)
updates the workload's PodSpec to include the `overhead` as described in the RuntimeClass. If the PodSpec already has this field defined,
the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
to include an `overhead`.
-->
在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含
RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义,该 Pod 将会被拒绝。
在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod, 包含了一个 `overhead`.
<!--
After the RuntimeClass admission controller, you can check the updated PodSpec:
-->
在 RuntimeClass 准入控制器之后,可以检验一下已更新的 PodSpec:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
```
<!--
The output is:
-->
输出:
```
map[cpu:250m memory:120Mi]
```
<!--
If a ResourceQuota is defined, the sum of container requests as well as the
`overhead` field are counted.
-->
如果定义了 ResourceQuota则容器请求的总量以及 `overhead` 字段都将计算在内。
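For instance, a namespace quota like the following hedged sketch would be charged with the Pod's container requests plus its `overhead` (2250m CPU and 320Mi memory in this example); the hard limits shown are arbitrary.

```yaml
# ResourceQuota sketch; the hard limits are illustrative only.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-with-overhead
spec:
  hard:
    requests.cpu: "4"      # container requests + Pod overhead count against this
    requests.memory: 1Gi
```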
<!--
When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod's
`overhead` as well as the sum of container requests for that Pod. For this example, the scheduler adds the
requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB of memory available.
-->
当 kube-scheduler 决定在哪一个节点调度运行新的 Pod 时,调度器会兼顾该 Pod 的 `overhead` 以及该 Pod 的容器请求总量。在这个示例中,调度器将资源请求和开销相加,然后寻找具备 2.25 CPU 和 320 MiB 内存可用的节点。
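Concretely: 500m + 1500m of container CPU requests plus the 250m overhead gives 2250m, and 100Mi + 100Mi of memory requests plus the 120Mi overhead gives 320Mi.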
<!--
Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
for the Pod. It is within this pod that the underlying container runtime will create containers. -->
一旦 Pod 调度到了某个节点, 该节点上的 kubelet 将为该 Pod 新建一个 {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}. 底层容器运行时将在这个 pod 中创建容器。
<!--
If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU
and memory.limit_in_bytes memory). This upper limit is based on the sum of the container limits plus the `overhead`
defined in the PodSpec.
-->
如果该资源针对每个容器都定义了限制Guaranteed QoS 或定义了限制的 Burstable QoSkubelet 会为与该资源对应的
Pod cgroupCPU 对应 cpu.cfs_quota_us内存对应 memory.limit_in_bytes设定一个上限。该上限基于容器限制总量与 PodSpec 中定义的 `overhead` 之和。
<!--
For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container
requests plus the `overhead` defined in the PodSpec.
-->
对于 CPU, 如果 Pod 的 QoS 是 Guaranteed 或者 Burstable, kubelet 会基于容器请求总量与 PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`.
<!--
Looking at our example, verify the container requests for the workload:
-->
请看这个例子,验证工作负载的容器请求:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
```
<!--
The total container requests are 2000m CPU and 200MiB of memory:
-->
容器请求总计 2000m CPU 和 200MiB 内存:
```
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
```
<!--
Check this against what is observed by the node:
-->
对照从节点观察到的情况来检查一下:
```bash
kubectl describe node | grep test-pod -B2
```
<!--
The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:
-->
该输出显示请求了 2250m CPU 以及 320MiB 内存,包含了 PodOverhead 在内:
```
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
```
<!--
## Verify Pod cgroup limits
-->
## 验证 Pod cgroup 限制
<!--
Check the Pod's memory cgroups on the node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an
advanced example to show PodOverhead behavior, and it is not expected that users should need to check
cgroups directly on the node.
First, on the particular node, determine the Pod identifier:
-->
在工作负载所运行的节点上检查 Pod 的内存 cgroup。在接下来的例子中将在该节点上使用
[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
它是一个面向 CRI 兼容容器运行时的命令行工具。
这是一个展示 PodOverhead 行为的进阶示例,用户并不需要直接在节点上检查 cgroup。
首先在特定的节点上确定该 Pod 的标识符:
<!--
```bash
# Run this on the node where the Pod is scheduled
-->
```bash
# 在该 Pod 调度的节点上执行如下命令:
POD_ID="$(sudo crictl pods --name test-pod -q)"
```
<!--
From this, you can determine the cgroup path for the Pod:
-->
可以依此判断该 Pod 的 cgroup 路径:
<!--
```bash
# Run this on the node where the Pod is scheduled
-->
```bash
# 在该 Pod 调度的节点上执行如下命令:
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
```
<!--
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
-->
执行结果的 cgroup 路径中包含了该 Pod 的 `pause` 容器。Pod 级别的 cgroup 即上面的一个目录。
```
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
```
<!--
In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod level cgroup setting for memory:
-->
在这个例子中,该 pod 的 cgroup 路径是 `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`。验证内存的 Pod 级别 cgroup 设置:
<!--
```bash
# Run this on the node where the Pod is scheduled.
# Also, change the name of the cgroup to match the cgroup allocated for your pod.
-->
```bash
# 在该 Pod 调度的节点上执行这个命令。
# 另外,修改 cgroup 的名称以匹配为该 pod 分配的 cgroup。
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
```
<!--
This is 320 MiB, as expected:
-->
和预期的一样是 320 MiB
```
335544320
```
<!--
### Observability
-->
### 可观察性
<!--
A `kube_pod_overhead` metric is available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
to help identify when PodOverhead is being utilized and to help observe stability of workloads
running with a defined Overhead. This functionality is not available in the 1.9 release of
kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
from source in the meantime.
-->
在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过 `kube_pod_overhead` 指标来协助确定 PodOverhead 何时被使用,并协助观察以某一既定开销运行的工作负载的稳定性。
该特性在 kube-state-metrics 的 1.9 发行版本中尚不可用,不过预计将在后续版本中发布。在此之前,用户需要从源代码构建 kube-state-metrics。
## {{% heading "whatsnext" %}}
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
* [PodOverhead 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)

View File

@ -0,0 +1,333 @@
---
title: Kubernetes API 访问控制
content_type: concept
---
<!--
---
reviewers:
- erictune
- lavalamp
title: Controlling Access to the Kubernetes API
content_type: concept
---
-->
<!-- overview -->
<!--
This page provides an overview of controlling access to the Kubernetes API.
-->
本页面概述了对 Kubernetes API 的访问控制。
<!-- body -->
<!--
Users access the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) using `kubectl`,
client libraries, or by making REST requests. Both human users and
[Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) can be
authorized for API access.
When a request reaches the API, it goes through several stages, illustrated in the
following diagram:
-->
用户使用 `kubectl`、客户端库或构造 REST 请求来访问 [Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/)。
人类用户和 [Kubernetes 服务账户](/zh/docs/tasks/configure-pod-container/configure-service-account/)都可以被鉴权访问 API。
当请求到达 API 时,它会经历多个阶段,如下图所示:
![Kubernetes API 请求处理步骤示意图](/images/docs/admin/access-control-overview.svg)
<!-- ## Transport security -->
## 传输安全 {#transport-security}
<!--
In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA.
-->
在典型的 Kubernetes 集群中API 服务器在 443 端口上提供服务,受 TLS 保护。
API 服务器出示证书。
该证书可以使用私有证书颁发机构CA签名也可以基于链接到公认的 CA 的公钥基础架构签名。
<!--
If your cluster uses a private certificate authority, you need a copy of that CA
certifcate configured into your `~/.kube/config` on the client, so that you can
trust the connection and be confident it was not intercepted.
Your client can present a TLS client certificate at this stage.
-->
如果你的集群使用私有证书颁发机构,你需要在客户端的 `~/.kube/config` 文件中提供该 CA 证书的副本,
以便你可以信任该连接并确认该连接没有被拦截。
你的客户端可以在此阶段出示 TLS 客户端证书。
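下面是一个 kubeconfig`~/.kube/config`)配置的最小示意,展示在何处配置私有 CA 证书副本以及可选的 TLS 客户端证书(其中集群名、用户名、服务器地址与文件路径均为假设的示例值):
```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster                      # 假设的集群名称
  cluster:
    server: https://203.0.113.10:6443        # API 服务器地址(示例值)
    certificate-authority: /path/to/ca.crt   # 私有 CA 证书副本
users:
- name: example-user                         # 假设的用户名称
  user:
    client-certificate: /path/to/client.crt  # 可选:此阶段出示的 TLS 客户端证书
    client-key: /path/to/client.key
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context
```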
<!-- ## Authentication -->
## 认证 {#authentication}
<!--
Once TLS is established, the HTTP request moves to the Authentication step.
This is shown as step **1** in the diagram.
The cluster creation script or cluster admin configures the API server to run
one or more Authenticator modules.
Authenticators are described in more detail in
[Authentication](/docs/reference/access-authn-authz/authentication/).
-->
如上图步骤 **1** 所示,建立 TLS 后, HTTP 请求将进入认证Authentication步骤。
集群创建脚本或者集群管理员配置 API 服务器,使之运行一个或多个身份认证组件。
身份认证组件在[认证](/zh/docs/reference/access-authn-authz/authentication/)节中有更详细的描述。
<!--
The input to the authentication step is the entire HTTP request; however, it typically
just examines the headers and/or client certificate.
Authentication modules include client certificates, password, and plain tokens,
bootstrap tokens, and JSON Web Tokens (used for service accounts).
Multiple authentication modules can be specified, in which case each one is tried in sequence,
until one of them succeeds.
-->
认证步骤的输入是整个 HTTP 请求;但是,该步骤通常只检查头部和/或客户端证书。
认证模块包括客户端证书、密码、普通令牌、引导令牌和 JSON Web 令牌JWT用于服务账户
可以指定多个认证模块,在这种情况下,服务器依次尝试每个认证模块,直到其中一个成功。
<!--
If the request cannot be authenticated, it is rejected with HTTP status code 401.
Otherwise, the user is authenticated as a specific `username`, and the user name
is available to subsequent steps to use in their decisions. Some authenticators
also provide the group memberships of the user, while other authenticators
do not.
While Kubernetes uses usernames for access control decisions and in request logging,
it does not have a `User` object nor does it store usernames or other information about
users in its API.
-->
如果请求认证不通过,服务器将以 HTTP 状态码 401 拒绝该请求。
反之,该用户被认证为特定的 `username`,后续步骤可以使用该用户名来进行决策。
部分认证组件还会提供用户的组成员身份,其他则不会。
尽管 Kubernetes 使用用户名来进行访问控制决策和记录请求日志,但它并没有 `User` 对象,
也不会在其 API 中存储用户名或其他用户信息。
<!-- ## Authorization -->
## 鉴权 {#authorization}
<!--
After the request is authenticated as coming from a specific user, the request must be authorized. This is shown as step **2** in the diagram.
A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permissions to complete the requested action.
For example, if Bob has the policy below, then he can read pods only in the namespace `projectCaribou`:
-->
如上图的步骤 **2** 所示,将请求验证为来自特定的用户后,请求必须被鉴权。
请求必须包含请求者的用户名、请求的行为以及受该操作影响的对象。
如果现有策略声明用户有权完成请求的操作,那么该请求被鉴权通过。
例如,如果 Bob 有以下策略,那么他只能在 `projectCaribou` 名称空间中读取 Pod。
```json
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "bob",
"namespace": "projectCaribou",
"resource": "pods",
"readonly": true
}
}
```
<!--
If Bob makes the following request, the request is authorized because he is allowed to read objects in the `projectCaribou` namespace:
-->
如果 Bob 执行以下请求,那么请求会被鉴权,因为允许他读取 `projectCaribou` 名称空间中的对象。
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"namespace": "projectCaribou",
"verb": "get",
"group": "unicorn.example.org",
"resource": "pods"
}
}
}
```
<!--
If Bob makes a request to write (`create` or `update`) to the objects in the `projectCaribou` namespace, his authorization is denied.
If Bob makes a request to read (`get`) objects in a different namespace such as `projectFish`, then his authorization is denied.
Kubernetes authorization requires that you use common REST attributes to interact with existing organization-wide or cloud-provider-wide access control systems.
It is important to use REST formatting because these control systems might interact with other APIs besides the Kubernetes API.
-->
如果 Bob 在 `projectCaribou` 名字空间中请求写(`create` 或 `update`)对象,其鉴权请求将被拒绝。
如果 Bob 在诸如 `projectFish` 这类其它名字空间中请求读取(`get`)对象,其鉴权也会被拒绝。
Kubernetes 鉴权要求使用公共 REST 属性与现有的组织范围或云提供商范围的访问控制系统进行交互。
使用 REST 格式很重要,因为这些控制系统可能会与 Kubernetes API 之外的 API 交互。
<!--
Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode, and Webhook mode.
When an administrator creates a cluster, they configure the authorization modules that should be used in the API server.
If more than one authorization modules are configured, Kubernetes checks each module,
and if any module authorizes the request, then the request can proceed.
If all of the modules deny the request, then the request is denied (HTTP status code 403).
To learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules,
see [Authorization](/docs/reference/access-authn-authz/authorization/).
-->
Kubernetes 支持多种鉴权模块,例如 ABAC 模式、RBAC 模式和 Webhook 模式等。
管理员创建集群时,他们配置应在 API 服务器中使用的鉴权模块。
如果配置了多个鉴权模块,则 Kubernetes 会逐个检查这些模块,只要有任一模块通过了对该请求的鉴权,请求即可继续;
如果所有模块拒绝了该请求请求将会被拒绝HTTP 状态码 403
要了解更多有关 Kubernetes 鉴权的更多信息,包括有关使用支持鉴权模块创建策略的详细信息,
请参阅[鉴权](/zh/docs/reference/access-authn-authz/authorization/)。
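下面是 kube-apiserver 静态 Pod 清单中与鉴权模块相关部分的一个示意(仅保留相关字段,其余必需参数省略;所启用的模块组合仅为示例):
```yaml
# 示意:在 API 服务器上配置多个鉴权模块(片段)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # 依次检查 Node 与 RBAC 模块,任一模块放行即继续
```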
<!-- ## Admission control -->
## 准入控制 {#admission-control}
<!--
Admission Control modules are software modules that can modify or reject requests.
In addition to the attributes available to Authorization modules, Admission
Control modules can access the contents of the object that is being created or modified.
Admission controllers act on requests that create, modify, delete, or connect to (proxy) an object.
Admission controllers do not act on requests that merely read objects.
When multiple admission controllers are configured, they are called in order.
-->
准入控制模块是可以修改或拒绝请求的软件模块。
除鉴权模块可用的属性外,准入控制模块还可以访问正在创建或修改的对象的内容。
准入控制器对创建、修改、删除或(通过代理)连接对象的请求进行操作。
准入控制器不会对仅读取对象的请求起作用。
有多个准入控制器被配置时,服务器将依次调用它们。
<!--
This is shown as step **3** in the diagram.
Unlike Authentication and Authorization modules, if any admission controller module
rejects, then the request is immediately rejected.
In addition to rejecting objects, admission controllers can also set complex defaults for
fields.
The available Admission Control modules are described in [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
Once a request passes all admission controllers, it is validated using the validation routines
for the corresponding API object, and then written to the object store (shown as step **4**).
-->
这一操作如上图的步骤 **3** 所示。
与身份认证和鉴权模块不同,如果任何准入控制器模块拒绝某请求,则该请求将立即被拒绝。
除了拒绝对象之外,准入控制器还可以为字段设置复杂的默认值。
可用的准入控制模块在[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)中进行了描述。
请求通过所有准入控制器后,将使用检验例程检查对应的 API 对象,然后将其写入对象存储(如步骤 **4** 所示)。
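与之对应,准入控制器也是在 API 服务器上配置的;下面是一个片段示意(所启用的插件组合仅为示例,其余必需参数省略):
```yaml
# 示意:在 API 服务器上启用若干准入控制器(片段)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount
```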
<!-- ## API server ports and IPs -->
## API 服务器端口和 IP {#api-server-ports-and-ips}
<!--
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default the Kubernetes API server serves HTTP on 2 ports:
-->
前面的讨论适用于发送到 API 服务器安全端口的请求(典型情况)。
默认情况下Kubernetes API 服务器实际上可以在 2 个端口上提供 HTTP 服务:
<!--
1. `localhost` port:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080, change with `--insecure-port` flag.
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access
2. “Secure port”:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authorization modules run.
-->
1. `localhost` 端口:
- 用于测试和引导,以及主控节点上的其他组件(调度器,控制器管理器)与 API 通信
- 没有 TLS
- 默认为端口 8080使用 `--insecure-port` 进行更改
- 默认 IP 为 localhost使用 `--insecure-bind-address` 进行更改
- 请求 **绕过** 身份认证和鉴权模块
- 请求会经过准入控制模块处理
- 通过要求具有主机访问权限来提供保护
2. “安全端口”:
- 尽可能使用
- 使用 TLS。 用 `--tls-cert-file` 设置证书,用 `--tls-private-key-file` 设置密钥
- 默认端口 6443使用 `--secure-port` 更改
- 默认 IP 是第一个非本地网络接口,使用 `--bind-address` 更改
- 请求须经身份认证和鉴权组件处理
- 请求须经准入控制模块处理
- 身份认证和鉴权模块运行
## {{% heading "whatsnext" %}}
<!--
Read more documentation on authentication, authorization and API access control:
- [Authenticating](/docs/reference/access-authn-authz/authentication/)
- [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/)
- [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Authorization](/docs/reference/access-authn-authz/authorization/)
- [Role Based Access Control](/docs/reference/access-authn-authz/rbac/)
- [Attribute Based Access Control](/docs/reference/access-authn-authz/abac/)
- [Node Authorization](/docs/reference/access-authn-authz/node/)
- [Webhook Authorization](/docs/reference/access-authn-authz/webhook/)
- [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/)
- including [CSR approval](/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
and [certificate signing](/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
- Service accounts
- [Developer guide](/docs/tasks/configure-pod-container/configure-service-account/)
- [Administration](/docs/reference/access-authn-authz/service-accounts-admin/)
You can learn about:
- how Pods can use
[Secrets](/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
to obtain API credentials.
-->
阅读更多有关身份认证、鉴权和 API 访问控制的文档:
- [认证](/zh/docs/reference/access-authn-authz/authentication/)
- [使用 Bootstrap 令牌进行身份认证](/zh/docs/reference/access-authn-authz/bootstrap-tokens/)
- [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)
- [动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [鉴权](/zh/docs/reference/access-authn-authz/authorization/)
- [基于角色的访问控制](/zh/docs/reference/access-authn-authz/rbac/)
- [基于属性的访问控制](/zh/docs/reference/access-authn-authz/abac/)
- [节点鉴权](/zh/docs/reference/access-authn-authz/node/)
- [Webhook 鉴权](/zh/docs/reference/access-authn-authz/webhook/)
- [证书签名请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/)
- 包括 [CSR 认证](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
和[证书签名](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
- 服务账户
- [开发者指导](/zh/docs/tasks/configure-pod-container/configure-service-account/)
- [管理](/zh/docs/reference/access-authn-authz/service-accounts-admin/)
你可以了解
- Pod 如何使用
[Secrets](/zh/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
获取 API 凭证.

View File

@ -10,19 +10,43 @@ content_type: concept
weight: 50
-->
{{< toc >}}
<!-- overview -->
<!--
A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods.
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
-->
如果你希望在 IP 地址或端口层面OSI 第 3 层或第 4 层)控制网络流量,
则你可以考虑为集群中特定应用使用 Kubernetes 网络策略NetworkPolicy
NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许
{{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体”
(我们这里使用实体以避免过度使用诸如“端点”和“服务”这类常用术语,
这些术语在 Kubernetes 中有特定含义)通信。
网络策略NetworkPolicy是一种关于 {{< glossary_tooltip text="Pod" term_id="pod">}} 间及与其他网络端点间所允许的通信规则的规范。
<!--
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
NetworkPolicy 资源使用 {{< glossary_tooltip text="标签" term_id="label">}} 选择 Pod并定义选定 Pod 所允许的通信规则。
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
-->
Pod 可以与之通信的“实体”是通过如下三个标识符的组合来辨识的:
1. 其他被允许的 Pod例外Pod 无法阻塞对自身的访问)
2. 被允许的名字空间
3. IP 组块例外与 Pod 运行所在节点的通信总是被允许的,无论 Pod 或节点的 IP 地址是什么)
<!--
When defining a pod- or namespace- based NetworkPolicy, you use a {{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
-->
在定义基于 Pod 或名字空间的 NetworkPolicy 时,你会使用
{{< glossary_tooltip text="选择算符" term_id="selector">}} 来设定哪些流量
可以进入或离开与该算符匹配的 Pod。
同时,当基于 IP 的 NetworkPolicy 被创建时,我们基于 IP 组块CIDR 范围)
来定义策略。
<!-- body -->
@ -31,12 +55,11 @@ NetworkPolicy 资源使用 {{< glossary_tooltip text="标签" term_id="label">}}
Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
-->
## 前提
## 前置条件 {#prerequisites}
网络策略通过[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
来实现。要使用网络策略,用户必须使用支持 NetworkPolicy 的网络解决方案。
创建一个资源对象而没有控制器来使它生效的话,是没有任何作用的。
来实现。要使用网络策略,必须使用支持 NetworkPolicy 的网络解决方案。
创建一个 NetworkPolicy 资源对象而没有控制器来使它生效的话,是没有任何作用的。
<!--
## Isolated and Non-isolated Pods
@ -47,17 +70,18 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
-->
## 隔离和非隔离的 Pod
## 隔离和非隔离的 Pod {#isolated-and-non-isolated-pods}
默认情况下Pod 是非隔离的,它们接受任何来源的流量。
Pod 可以通过相关的网络策略进行隔离。一旦命名空间中有网络策略选择了特定的 Pod
该 Pod 会拒绝网络策略所不允许的连接。
(命名空间下其他未被网络策略所选择的 Pod 会继续接收所有的流量)
Pod 在被某 NetworkPolicy 选中时进入被隔离状态。
一旦名字空间中有 NetworkPolicy 选择了特定的 Pod该 Pod 会拒绝该 NetworkPolicy
所不允许的连接。
(名字空间下其他未被 NetworkPolicy 所选择的 Pod 会继续接受所有的流量)
网络策略不会冲突,它们是累积的。
如果任何一个或多个策略选择了一个 Pod, 则该 Pod 受限于这些策略的
ingress/egress 规则的并集。因此评估的顺序并不会影响策略的结果。
入站Ingress/出站Egress规则的并集。因此评估的顺序并不会影响策略的结果。
<!--
## The NetworkPolicy resource {#networkpolicy-resource}
@ -66,10 +90,10 @@ See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "vers
An example NetworkPolicy might look like this:
-->
## NetworkPolicy 资源 {#networkpolicy-resource}
查看 [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) 来了解完整的资源定义。
参阅 [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io)
来了解资源的完整定义。
下面是一个 NetworkPolicy 的示例:
@ -127,27 +151,41 @@ and [Object Management](/docs/concepts/overview/working-with-objects/object-mana
__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace.
__podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace.
-->
__必需字段__与所有其他的 Kubernetes 配置一样NetworkPolicy 需要 `apiVersion`
`kind``metadata` 字段。关于配置文件操作的一般信息,请参考
[使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/),
和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。
__spec__NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
中包含了在一个名字空间中定义特定网络策略所需的所有信息。
__podSelector__每个 NetworkPolicy 都包括一个 `podSelector`,它对该策略所
适用的一组 Pod 进行选择。示例中的策略选择带有 "role=db" 标签的 Pod。
空的 `podSelector` 选择名字空间下的所有 Pod。
<!--
__policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules.
__ingress__: Each NetworkPolicy may include a list of whitelist `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
__ingress__: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`.
__egress__: Each NetworkPolicy may include a list of whitelist `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
__egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`.
-->
__必填字段__: 与所有其他的 Kubernetes 配置一样NetworkPolicy 需要 `apiVersion`、`kind` 和 `metadata` 字段。
关于配置文件操作的一般信息,请参考 [使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/),
和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。
__policyTypes__: 每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其中包含
`Ingress``Egress` 或两者兼具。`policyTypes` 字段表示给定的策略是应用于
进入所选 Pod 的入站流量还是来自所选 Pod 的出站流量,或两者兼有。
如果 NetworkPolicy 未指定 `policyTypes` 则默认情况下始终设置 `Ingress`
如果 NetworkPolicy 有任何出口规则的话则设置 `Egress`
__spec__: NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 中包含了在一个命名空间中定义特定网络策略所需的所有信息。
__ingress__: 每个 NetworkPolicy 可包含一个 `ingress` 规则的白名单列表。
每个规则都允许同时匹配 `from``ports` 部分的流量。示例策略中包含一条
简单的规则: 它匹配某个特定端口,来自三个来源中的一个,第一个通过 `ipBlock`
指定,第二个通过 `namespaceSelector` 指定,第三个通过 `podSelector` 指定。
__podSelector__: 每个 NetworkPolicy 都包括一个 `podSelector` ,它对该策略所应用的一组 Pod 进行选择。示例中的策略选择带有 "role=db" 标签的 Pod。空的 `podSelector` 选择命名空间下的所有 Pod。
__policyTypes__: 每个 NetworkPolicy 都包含一个 `policyTypes` 列表,其中包含 `Ingress``Egress` 或两者兼具。`policyTypes` 字段表示给定的策略是否应用于进入所选 Pod 的入口流量或者来自所选 Pod 的出口流量,或两者兼有。如果 NetworkPolicy 未指定 `policyTypes` 则默认情况下始终设置 `Ingress`,如果 NetworkPolicy 有任何出口规则的话则设置 `Egress`
__ingress__: 每个 NetworkPolicy 可包含一个 `ingress` 规则的白名单列表。每个规则都允许同时匹配 `from``ports` 部分的流量。示例策略中包含一条简单的规则: 它匹配一个单一的端口,来自三个来源中的一个, 第一个通过 `ipBlock` 指定,第二个通过 `namespaceSelector` 指定,第三个通过 `podSelector` 指定。
__egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列表。每个规则都允许匹配 `to``port` 部分的流量。该示例策略包含一条规则,该规则将单个端口上的流量匹配到 `10.0.0.0/24` 中的任何目的地。
__egress__: 每个 NetworkPolicy 可包含一个 `egress` 规则的白名单列表。
每个规则都允许匹配 `to``port` 部分的流量。该示例策略包含一条规则,
该规则将指定端口上的流量匹配到 `10.0.0.0/24` 中的任何目的地。
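为便于与上述字段说明对照,这里给出一个与前文示例描述一致的 NetworkPolicy 示意(具体取值以仓库中的示例文件为准,策略名称在此假设为 `test-network-policy`
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```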
<!--
So, the example NetworkPolicy:
@ -162,18 +200,22 @@ So, the example NetworkPolicy:
See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples.
-->
所以,该网络策略示例:
1. 隔离 "default" 命名空间下 "role=db" 的 Pod (如果它们不是已经被隔离的话)。
2. Ingress 规则)允许以下 Pod 连接到 "default" 命名空间下的带有 “role=db” 标签的所有 Pod 的 6379 TCP 端口:
1. 隔离 "default" 名字空间下 "role=db" 的 Pod (如果它们不是已经被隔离的话)。
2. Ingress 规则)允许以下 Pod 连接到 "default" 名字空间下的带有 "role=db"
标签的所有 Pod 的 6379 TCP 端口:
* "default" 名空间下任意带有 "role=frontend" 标签的 Pod
* 带有 "project=myproject" 标签的任意命名空间中的 Pod
* IP 地址范围为 172.17.0.0172.17.0.255 和 172.17.2.0172.17.255.255(即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16
3. Egress 规则)允许从带有 "role=db" 标签的命名空间下的任何 Pod 到 CIDR 10.0.0.0/24 下 5978 TCP 端口的连接。
* "default" 名空间下带有 "role=frontend" 标签的所有 Pod
* 带有 "project=myproject" 标签的所有名字空间中的 Pod
* IP 地址范围为 172.17.0.0172.17.0.255 和 172.17.2.0172.17.255.255
(即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16
查看[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) 来进行更多的示例演练。
3. Egress 规则)允许从带有 "role=db" 标签的名字空间下的任何 Pod 到 CIDR
10.0.0.0/24 下 5978 TCP 端口的连接。
参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)演练
了解更多示例。
<!--
## Behavior of `to` and `from` selectors
@ -186,16 +228,19 @@ __namespaceSelector__: This selects particular namespaces for which all Pods sho
__namespaceSelector__ *and* __podSelector__: A single `to`/`from` entry that specifies both `namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy:
-->
## 选择器 `to``from` 的行为 {#behavior-of-to-and-from-selectors}
## 选择器 `to``from` 的行为
可以在 `ingress``from` 部分或 `egress``to` 部分中指定四种选择器:
可以在 `ingress` `from` 部分或 `egress` `to` 部分中指定四种选择器:
__podSelector__: 此选择器将在与 NetworkPolicy 相同的名字空间中选择特定的
Pod应将其允许作为入站流量来源或出站流量目的地。
__podSelector__: 这将在与 NetworkPolicy 相同的命名空间中选择特定的 Pod应将其允许作为入口源或出口目的地。
__namespaceSelector__此选择器将选择特定的名字空间应将所有 Pod 用作其
入站流量来源或出站流量目的地。
__namespaceSelector__: 这将选择特定的命名空间,应将所有 Pod 用作其输入源或输出目的地。
__namespaceSelector__ *和* __podSelector__: 一个指定 `namespaceSelector``podSelector``to`/`from` 条目选择特定命名空间中的特定 Pod。注意使用正确的 YAML 语法;这项策略:
__namespaceSelector__ *和* __podSelector__ 一个指定 `namespaceSelector`
`podSelector``to`/`from` 条目选择特定名字空间中的特定 Pod。
注意使用正确的 YAML 语法;下面的策略:
```yaml
...
@ -213,7 +258,8 @@ __namespaceSelector__ *和* __podSelector__: 一个指定 `namespaceSelector`
<!--
contains a single `from` element allowing connections from Pods with the label `role=client` in namespaces with the label `user=alice`. But *this* policy:
-->
`from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod 且该 Pod 所在的命名空间中标有 `user=alice` 的连接。但是 *这项* 策略:
`from` 数组中仅包含一个元素,只允许来自标有 `role=client` 的 Pod 且
该 Pod 所在的名字空间中标有 `user=alice` 的连接。但是 *这项* 策略:
```yaml
...
@ -230,7 +276,11 @@ contains a single `from` element allowing connections from Pods with the label `
<!--
contains two elements in the `from` array, and allows connections from Pods in the local Namespace with the label `role=client`, *or* from any Pod in any namespace with the label `user=alice`.
-->
`from` 数组中包含两个元素,允许来自本地名字空间中标有 `role=client`
Pod 的连接,*或* 来自任何名字空间中标有 `user=alice` 的任何 Pod 的连接。
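下面给出一个完整的策略示意,采用上文第一种写法(单个 `from` 元素同时包含 `namespaceSelector``podSelector`;策略名称与标签均为假设的示例值):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-alice-clients   # 假设的策略名称
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:           # 与下面的 podSelector 属于同一个 from 元素
        matchLabels:
          user: alice
      podSelector:
        matchLabels:
          role: client
```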
<!--
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy.
__ipBlock__: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
@ -247,18 +297,21 @@ the NetworkPolicy acts on may be the IP of a `LoadBalancer` or of the Pod's node
For egress, this means that connections from pods to `Service` IPs that get rewritten to
cluster-external IPs may or may not be subject to `ipBlock`-based policies.
-->
`from` 数组中包含两个元素,允许来自本地命名空间中标有 `role=client` 的 Pod 的连接,*或* 来自任何命名空间中标有 `user = alice` 的任何 Pod 的连接。
如有疑问,请使用 `kubectl describe` 查看 Kubernetes 如何解释该策略。
__ipBlock__: 这将选择特定的 IP CIDR 范围以用作入口源或出口目的地。 这些应该是群集外部 IP因为 Pod IP 存在时间短暂的且随机产生。
__ipBlock__: 此选择器将选择特定的 IP CIDR 范围以用作入站流量来源或出站流量目的地。
这些应该是集群外部 IP因为 Pod IP 存在时间短暂的且随机产生。
群集的入口和出口机制通常需要重写数据包的源 IP 或目标 IP。在发生这种情况的情况下不确定在 NetworkPolicy 处理之前还是之后发生,并且对于网络插件,云提供商,`Service` 实现等的不同组合,其行为可能会有所不同。
集群的入站和出站机制通常需要重写数据包的源 IP 或目标 IP。
在发生这种情况时,不确定在 NetworkPolicy 处理之前还是之后发生,
并且对于网络插件、云提供商、`Service` 实现等的不同组合,其行为可能会有所不同。
在进入的情况下,这意味着在某些情况下,您可以根据实际的原始源 IP 过滤传入的数据包而在其他情况下NetworkPolicy 所作用的 `源IP` 则可能是 `LoadBalancer` 或 Pod 的节点等。
对入站流量而言,这意味着在某些情况下,你可以根据实际的原始源 IP 过滤传入的数据包,
而在其他情况下NetworkPolicy 所作用的 `源IP` 则可能是 `LoadBalancer`
Pod 的节点等。
对于出口,这意味着从 Pod 到被重写为集群外部 IP 的 `Service` IP 的连接可能会或可能不会受到基于 `ipBlock` 的策略的约束。
对于出站流量而言,这意味着从 Pod 到被重写为集群外部 IP 的 `Service` IP
的连接可能会或可能不会受到基于 `ipBlock` 的策略的约束。
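下面是一个使用 `ipBlock` 限制出站流量的 NetworkPolicy 示意(策略名称、标签与 CIDR 均为假设的示例值,实际使用时应填写集群外部的地址段):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-external-cidr   # 假设的策略名称
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24      # 集群外部地址段(示例值)
        except:
        - 203.0.113.128/25        # 从上述 CIDR 中排除的子网
    ports:
    - protocol: TCP
      port: 443
```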
<!--
## Default policies
@ -266,37 +319,38 @@ __ipBlock__: 这将选择特定的 IP CIDR 范围以用作入口源或出口目
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior
in that namespace.
-->
## 默认策略 {#default-policies}
## 默认策略
默认情况下,如果命名空间中不存在任何策略,则所有进出该命名空间中的 Pod 的流量都被允许。以下示例使您可以更改该命名空间中的默认行为。
默认情况下,如果名字空间中不存在任何策略,则所有进出该名字空间中 Pod 的流量都被允许。
以下示例使你可以更改该名字空间中的默认行为。
<!--
### Default deny all ingress traffic
You can create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
-->
### 默认拒绝所有入站流量
### 默认拒绝所有入口流量
您可以通过创建选择所有容器但不允许任何进入这些容器的入口流量的 NetworkPolicy 来为命名空间创建 "default" 隔离策略。
你可以通过创建选择所有容器但不允许任何进入这些容器的入站流量的 NetworkPolicy
来为名字空间创建 "default" 隔离策略。
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
<!--
This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior.
-->
这样可以确保即使容器没有选择其他任何 NetworkPolicy也仍然可以被隔离。此策略不会更改默认的出口隔离行为。
这样可以确保即使容器没有选择其他任何 NetworkPolicy也仍然可以被隔离。
此策略不会更改默认的出口隔离行为。
<!--
### Default allow all ingress traffic
If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace.
-->
### 默认允许所有入站流量
### 默认允许所有入口流量
如果要允许所有流量进入某个命名空间中的所有 Pod即使添加了导致某些 Pod 被视为“隔离”的策略),则可以创建一个策略来明确允许该命名空间中的所有流量。
如果要允许所有流量进入某个名字空间中的所有 Pod即使添加了导致某些 Pod 被视为
“隔离”的策略),则可以创建一个策略来明确允许该名字空间中的所有流量。
{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
@ -305,10 +359,10 @@ If you want to allow all traffic to all pods in a namespace (even if policies ar
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
-->
### 默认拒绝所有出站流量
### 默认拒绝所有出口流量
您可以通过创建选择所有容器但不允许来自这些容器的任何出口流量的 NetworkPolicy 来为命名空间创建 "default" egress 隔离策略。
你可以通过创建选择所有容器但不允许来自这些容器的任何出站流量的 NetworkPolicy
来为名字空间创建 "default" egress 隔离策略。
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
@ -316,18 +370,18 @@ You can create a "default" egress isolation policy for a namespace by creating a
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not
change the default ingress isolation behavior.
-->
这样可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。此策略不会更改默认的 ingress 隔离行为。
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许流出流量。
此策略不会更改默认的入站流量隔离行为。
<!--
### Default allow all egress traffic
If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all egress traffic in that namespace.
-->
### 默认允许所有出站流量
### 默认允许所有出口流量
如果要允许来自命名空间中所有 Pod 的所有流量(即使添加了导致某些 Pod 被视为“隔离”的策略),则可以创建一个策略,该策略明确允许该命名空间中的所有出口流量。
如果要允许来自名字空间中所有 Pod 的所有流量(即使添加了导致某些 Pod 被视为“隔离”的策略),
则可以创建一个策略,该策略明确允许该名字空间中的所有出站流量。
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
@ -336,41 +390,91 @@ If you want to allow all traffic from all pods in a namespace (even if policies
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
-->
### 默认拒绝所有入口和所有出站流量
### 默认拒绝所有入口和所有出口流量
您可以为命名空间创建 "default" 策略,以通过在该命名空间中创建以下 NetworkPolicy 来阻止所有入站和出站流量。
你可以为名字空间创建“默认”策略,以通过在该名字空间中创建以下 NetworkPolicy
来阻止所有入站和出站流量。
{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
<!--
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.
-->
这样可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许入或出流量。
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被
允许入或出流量。
<!--
## SCTP support
-->
## SCTP 支持
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
<!--
To use this feature, you (or your cluster administrator) will need to enable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=true,…`.
As a beta feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `-feature-gates=SCTPSupport=false,...`.
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
-->
要启用此特性,你(或你的集群管理员)需要通过为 API server 指定 `--feature-gates=SCTPSupport=true,…`
来启用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
启用该特性开关后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`
作为一个 Beta 特性SCTP 支持默认是被启用的。
要在集群层面禁用 SCTP或你的集群管理员需要为 API 服务器指定
`--feature-gates=SCTPSupport=false,...`
来禁用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
启用该特性门控后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`
<!--
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies.
-->
{{< note >}}
必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。
必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。
{{< /note >}}
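在特性门控启用且 CNI 插件支持的前提下,可以像下面这样在规则中使用 SCTP 协议(策略名称、标签与端口均为假设的示例值):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress    # 假设的策略名称
spec:
  podSelector:
    matchLabels:
      app: sctp-server        # 假设的标签
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP          # 将 protocol 字段设置为 SCTP
      port: 7777
```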
<!--
## What you can't do with network policies (at least, not yet)
As of Kubernetes 1.20, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. Some (but not all) of these user stories are actively being discussed for future releases of the NetworkPolicy API.
-->
## 你通过网络策略(至少目前还)无法完成的工作
到 Kubernetes v1.20 为止NetworkPolicy API 还不支持以下功能,不过
你可能可以使用操作系统组件(如 SELinux、OpenVSwitch、IPTables 等等)
或者第七层技术Ingress 控制器、服务网格实现)或准入控制器来实现一些
替代方案。
如果你对 Kubernetes 中的网络安全性还不太了解,了解使用 NetworkPolicy API
还无法实现下面的用户场景是很值得的。
对这些用户场景中的一部分(而非全部)的讨论仍在进行,或许将来 NetworkPolicy
API 中会给出一定支持。
<!--
- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
- Anything TLS related (use a service mesh or ingress controller for this).
- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically).
- Targeting of namespaces or services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
- Creation or management of "Policy requests" that are fulfilled by a third party.
-->
- 强制集群内部流量经过某公用网关(这种场景最好通过服务网格或其他代理来实现);
- 与 TLS 相关的场景(考虑使用服务网格或者 Ingress 控制器);
- 特定于节点的策略(你可以使用 CIDR 来表达这一需求不过你无法使用节点在
Kubernetes 中的其他标识信息来辨识目标节点);
- 基于名字来选择名字空间或者服务(不过,你可以使用 {{< glossary_tooltip text="标签" term_id="label" >}}
来选择目标 Pod 或名字空间,这也通常是一种可靠的替代方案);
- 创建或管理由第三方来实际完成的“策略请求”;
<!--
- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this).
- Advanced policy querying and reachability tooling.
- The ability to target ranges of Ports in a single policy declaration.
- The ability to log network security events (for example connections that are blocked or accepted).
- The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by default, with only the ability to add allow rules).
- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node).
-->
- 实现适用于所有名字空间或 Pods 的默认策略(某些第三方 Kubernetes 发行版本
或项目可以做到这点);
- 高级的策略查询或者可达性相关工具;
- 在同一策略声明中选择目标端口范围的能力;
- 生成网络安全事件日志的能力(例如,被阻塞或接收的连接请求);
- 显式地拒绝策略的能力目前NetworkPolicy 的模型默认采用拒绝操作,
其唯一的能力是添加允许策略);
- 禁止本地回路或指向宿主的网络流量Pod 目前无法阻塞 localhost 访问,
它们也无法禁止来自所在节点的访问请求)。
## {{% heading "whatsnext" %}}
<!--
@ -378,9 +482,8 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha
walkthrough for further examples.
- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
-->
- 查看 [声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
来进行更多的示例演练
- 有关 NetworkPolicy 资源启用的常见场景的更多信息,请参见
- 参阅[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
演练了解更多示例;
- 有关 NetworkPolicy 资源所支持的常见场景的更多信息,请参见
[此指南](https://github.com/ahmetb/kubernetes-network-policy-recipes)。

View File

@ -864,7 +864,7 @@ Kubernetes `ServiceTypes` 允许指定一个需要的类型的 Service默认
* [`NodePort`](#nodeport):通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。
`NodePort` 服务会路由到 `ClusterIP` 服务,这个 `ClusterIP` 服务会自动创建。
通过请求 `<NodeIP>:<NodePort>`,可以从集群的外部访问一个 `NodePort` 服务。
* [`LoadBalancer`](#loadbalancer):使用云提供商的负载衡器,可以向外部暴露服务。
* [`LoadBalancer`](#loadbalancer):使用云提供商的负载衡器,可以向外部暴露服务。
外部的负载均衡器可以路由到 `NodePort` 服务和 `ClusterIP` 服务。
* [`ExternalName`](#externalname):通过返回 `CNAME` 和它的值,可以将服务映射到 `externalName`
字段的内容(例如, `foo.bar.example.com`)。

View File

@ -3,3 +3,116 @@ title: "工作负载"
weight: 50
description: 理解 PodsKubernetes 中可部署的最小计算对象,以及辅助它们运行的高层抽象对象。
---
<!--
title: "Workloads"
weight: 50
description: >
Understand Pods, the smallest deployable compute object in Kubernetes, and the higher-level abstractions that help you to run them.
no_list: true
-->
{{< glossary_definition term_id="workload" length="short" >}}
<!--
Whether your workload is a single component or several that work together, on Kubernetes you run
it inside a set of [Pods](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}}
on your cluster.
A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then
a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that
Pod is running means that all the Pods on that node fail. Kubernetes treats that level
of failure as final: you would need to create a new Pod even if the node later recovers.
-->
无论你的负载是单一组件还是由多个一同工作的组件构成,在 Kubernetes 中你
可以在一组 [Pods](/zh/docs/concepts/workloads/pods) 中运行它。
在 Kubernetes 中Pod 代表的是集群上处于运行状态的一组
{{< glossary_tooltip text="容器" term_id="container" >}}。
Pod 有确定的生命周期。例如,一旦某 Pod 在你的集群中运行Pod 运行所在的
{{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时,
所有该节点上的 Pods 都会失败。Kubernetes 将这类失败视为最终状态:
即使节点后来恢复正常运行,你也需要创建新的 Pod。
<!--
However, to make life considerably easier, you don't need to manage each Pod directly.
Instead, you can use _workload resources_ that manage a set of Pods on your behalf.
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
that make sure the right number of the right kind of Pod are running, to match the state
you specified.
Those workload resources include:
-->
不过,为了让用户的日子略微好过一些,你并不需要直接管理每个 Pod。
相反,你可以使用 _工作负载资源_ 来替你管理一组 Pod。
这些资源配置 {{< glossary_tooltip term_id="controller" text="控制器" >}}
来确保正确类型、正确数量的 Pod 处于运行状态,与你所指定的状态相一致。
这些工作负载资源包括(列表之后给出一个最小的 Deployment 示意):
<!--
* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
(replacing the legacy resource {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}});
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/);
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for running Pods that provide
node-local facilities, such as a storage driver or network plugin;
* [Job](/docs/concepts/workloads/controllers/job/) and
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/)
for tasks that run to completion.
-->
* [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和
[ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/)
(替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}
* [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/);
* 用来运行提供节点本地支撑设施(如存储驱动或网络插件)的 Pods 的
[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/)
* 用来执行运行到结束为止的
[Job](/zh/docs/concepts/workloads/controllers/job/) 和
[CronJob](/zh/docs/concepts/workloads/controllers/cron-jobs/)。
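作为参考,下面是一个最小的 Deployment 示意,它替你管理两个运行相同容器的 Pod名称、标签与镜像均为假设的示例值
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment      # 假设的名称
spec:
  replicas: 2                 # 期望的 Pod 副本数
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.19     # 任意示例镜像
        ports:
        - containerPort: 80
```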
<!--
There are also two supporting concepts that you might find relevant:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
-->
你可能发现还有两种支撑概念很有用:
* [垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)机制负责在
对象的 _属主资源_ 被删除时在集群中清理这些对象。
* [_结束后存在时间_ 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)
会在 Job 结束之后的指定时间间隔之后删除它们。
## {{% heading "whatsnext" %}}
<!--
As well as reading about each resource, you can learn about specific tasks that relate to them:
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
* [Run Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
-->
除了阅读了解每类资源外,你还可以了解与这些资源相关的任务:
* [使用 Deployment 运行一个无状态的应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)
* 以[单实例](/zh/docs/tasks/run-application/run-single-instance-stateful-application/)
或者[多副本集合](/zh/docs/tasks/run-application/run-replicated-stateful-application/)
的形式运行有状态的应用;
* [使用 CronJob 运行自动化的任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/)
<!--
Once your application is running, you might want to make it available on the internet as
a [Service](/docs/concepts/services-networking/service/) or, for web application only,
using an [Ingress](/docs/concepts/services-networking/ingress).
You can also visit [Configuration](/docs/concepts/configuration/) to learn about Kubernetes'
mechanisms for separating code from configuration.
-->
一旦你的应用处于运行状态,你就可能想要以
[Service](/zh/docs/concepts/services-networking/service/) 的形式使之可在互联网上访问;
或者对于 Web 应用而言,使用
[Ingress](/zh/docs/concepts/services-networking/ingress/) 资源将其暴露到互联网上。
你还可以查阅[配置](/zh/docs/concepts/configuration/)页面,了解 Kubernetes 中将代码与配置分离的机制。

View File

@ -87,7 +87,7 @@ If you're still not sure which branch to choose, ask in `#sig-docs` on Slack.
场景 | 分支
:---------|:------------
针对当前发行版本的,对现有英文内容的修改或新的英文内容 | `master`
针对功能特性变更的内容 | 功能特性所对应的版本所对应的分支,分支名字模式为 `dev-release-<version>`。例如,如果某功能特性在 `{{< latest-version >}}` 版本发生变化,则对应的文档变化要添加到 `dev-{{< release-branch >}}` 分支。
针对功能特性变更的内容 | 功能特性所对应的版本所对应的分支,分支名字模式为 `dev-<version>`。例如,如果某功能特性在 `v{{< skew nextMinorVersion >}}` 版本发生变化,则对应的文档变化要添加到 ``dev-{{< skew nextMinorVersion >}}`` 分支。
其他语言的内容(本地化)| 基于本地化团队的约定。参见[本地化分支策略](/zh/docs/contribute/localization/#branching-strategy)了解更多信息。
如果你仍不能确定要选择哪个分支,请在 `#sig-docs` Slack 频道上提问。

View File

@ -0,0 +1,42 @@
---
title: 对象
id: object
date: 2020-10-12
full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects
short_description: >
Kubernetes 系统中的实体,代表了集群的部分状态。
aka:
tags:
- fundamental
---
<!--
---
title: Object
id: object
date: 2020-10-12
full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects
short_description: >
An entity in the Kubernetes system, representing part of the state of your cluster.
aka:
tags:
- fundamental
---
-->
<!--
An entity in the Kubernetes system. The Kubernetes API uses these entities to represent the state
of your cluster.
-->
Kubernetes 系统中的实体。Kubernetes API 用这些实体表示集群的状态。
<!--more-->
<!--
A Kubernetes object is typically a “record of intent”—once you create the object, the Kubernetes
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} works constantly to ensure
that the item it represents actually exists.
By creating an object, you're effectively telling the Kubernetes system what you want that part of
your cluster's workload to look like; this is your cluster's desired state.
-->
Kubernetes 对象通常是一个“目标记录”:一旦你创建了一个对象Kubernetes
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}
就会持续工作,以确保它所代表的事物确实存在。
创建对象相当于告知 Kubernetes 系统:你期望这部分集群负载看起来是什么样子;这也就是你集群的期望状态。

View File

@ -9,6 +9,7 @@ card:
---
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">
<!--
Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice "fast paths" for creating Kubernetes clusters.
-->
@ -24,7 +25,7 @@ kubeadm 通过执行必要的操作来启动和运行最小可用集群。按照
<!--
Instead, we expect higher-level and more tailored tooling to be built on top of kubeadm, and ideally, using kubeadm as the basis of all deployments will make it easier to create conformant clusters.
-->
相反,我们希望在 Kubeadm 之上构建更高级别以及更加合规的工具,理想情况下,使用 kubeadm 作为所有部署工作的基准将会更加易于创建一致性集群。
相反,我们希望在 kubeadm 之上构建更高级别以及更加合规的工具,理想情况下,使用 kubeadm 作为所有部署工作的基准将会更加易于创建一致性集群。
<!--
## How to install
@ -34,7 +35,7 @@ Instead, we expect higher-level and more tailored tooling to be built on top of
<!--
To install kubeadm, see the [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm).
-->
要安装 kubeadm, 请查阅[安装指南](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
要安装 kubeadm, 请查阅[安装指南](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
## {{% heading "whatsnext" %}}
@ -48,11 +49,11 @@ To install kubeadm, see the [installation guide](/docs/setup/production-environm
* [kubeadm version](/docs/reference/setup-tools/kubeadm/kubeadm-version) to print the kubeadm version
* [kubeadm alpha](/docs/reference/setup-tools/kubeadm/kubeadm-alpha) to preview a set of features made available for gathering feedback from the community
-->
* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init) 用于搭建控制平面节点
* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join) 用于搭建工作节点并将其加入到集群中
* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade) 用于升级 Kubernetes 集群到新版本
* [kubeadm config](/docs/reference/setup-tools/kubeadm/kubeadm-config) 如果你使用了 v1.7.x 或更低版本的 kubeadm 版本初始化你的集群,则使用 `kubeadm upgrade` 来配置你的集群
* [kubeadm token](/docs/reference/setup-tools/kubeadm/kubeadm-token) 用于管理 `kubeadm join` 使用的令牌
* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset) 用于恢复通过 `kubeadm init` 或者 `kubeadm join` 命令对节点进行的任何变更
* [kubeadm version](/docs/reference/setup-tools/kubeadm/kubeadm-version) 用于打印 kubeadm 的版本信息
* [kubeadm alpha](/docs/reference/setup-tools/kubeadm/kubeadm-alpha) 用于预览一组可用于收集社区反馈的特性
* [kubeadm init](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init) 用于搭建控制平面节点
* [kubeadm join](/zh/docs/reference/setup-tools/kubeadm/kubeadm-join) 用于搭建工作节点并将其加入到集群中
* [kubeadm upgrade](/zh/docs/reference/setup-tools/kubeadm/kubeadm-upgrade) 用于升级 Kubernetes 集群到新版本
* [kubeadm config](/zh/docs/reference/setup-tools/kubeadm/kubeadm-config) 如果你使用了 v1.7.x 或更低版本的 kubeadm 初始化你的集群,则使用此命令为 `kubeadm upgrade` 配置你的集群
* [kubeadm token](/zh/docs/reference/setup-tools/kubeadm/kubeadm-token) 用于管理 `kubeadm join` 使用的令牌
* [kubeadm reset](/zh/docs/reference/setup-tools/kubeadm/kubeadm-reset) 用于恢复通过 `kubeadm init` 或者 `kubeadm join` 命令对节点进行的任何变更
* [kubeadm version](/zh/docs/reference/setup-tools/kubeadm/kubeadm-version) 用于打印 kubeadm 的版本信息
* [kubeadm alpha](/zh/docs/reference/setup-tools/kubeadm/kubeadm-alpha) 用于预览一组可用于收集社区反馈的特性

View File

@ -0,0 +1,256 @@
---
title: 使用 Kubespray 安装 Kubernetes
content_type: concept
weight: 30
---
<!--
title: Installing Kubernetes with Kubespray
content_type: concept
weight: 30
-->
<!-- overview -->
<!--
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
-->
此快速入门有助于使用 [Kubespray](https://github.com/kubernetes-sigs/kubespray) 安装托管在 GCE、Azure、OpenStack、AWS、vSphere、Packet裸机、Oracle Cloud Infrastructure实验性或裸机Baremetal上的 Kubernetes 集群。
<!--
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:
-->
Kubespray 是由 [Ansible](https://docs.ansible.com/) playbook、[清单inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md)、制备工具以及通用 OS/Kubernetes 集群配置管理任务的领域知识组合而成的。Kubespray 提供:
<!--
* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* Ubuntu 16.04, 18.04, 20.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Fedora 31, 32
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
* continuous integration tests
-->
* 高可用性集群
* 可组合属性
* 支持大多数流行的 Linux 发行版
* Ubuntu 16.04、18.04、20.04
* CentOS / RHEL / Oracle Linux 7、8
* Debian BusterJessieStretchWheezy
* Fedora 31、32
* Fedora CoreOS
* openSUSE Leap 15
* Kinvolk 的 Flatcar Container Linux
* 持续集成测试
<!--
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
-->
要选择最适合你的用例的工具,请阅读 Kubespray 与
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/)
的[比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)。
<!-- body -->
<!--
## Creating a cluster
### (1/5) Meet the underlay requirements
-->
## 创建集群
### 1/5满足下层设施要求
<!--
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
-->
按以下[要求](https://github.com/kubernetes-sigs/kubespray#requirements)来配置服务器:
<!--
* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**
* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**
* **Your ssh key must be copied** to all the servers part of your inventory
* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall
* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified
-->
* **在将运行 Ansible 命令的计算机上安装 Ansible v2.9 和 python-netaddr**
* **运行 Ansible Playbook 需要 Jinja 2.11(或更高版本)**
* 目标服务器必须有权访问互联网才能拉取 Docker 镜像。否则,需要进行额外配置([请参见离线环境](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)
* 目标服务器配置为允许 **IPv4 转发**
* **你的 SSH 密钥必须复制**到清单中的所有服务器
* **防火墙不由 Kubespray 管理**,你需要像以前一样自行实现规则。为了避免部署过程中出现问题,应禁用防火墙
* 如果以非 root 用户帐户运行 kubespray则应在目标服务器中配置正确的特权提升方法并指定 `ansible_become` 标志或命令参数 `--become``-b`
<!--
Kubespray provides the following utilities to help provision your environment:
* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)
* [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)
* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)
-->
Kubespray 提供以下实用程序来帮助你设置环境:
* 为以下云驱动提供的 [Terraform](https://www.terraform.io/) 脚本:
* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)
* [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)
* [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet)
<!--
### (2/5) Compose an inventory file
After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
### (3/5) Plan your cluster deployment
Kubespray provides the ability to customize many aspects of the deployment:
-->
### 2/5编写清单文件
设置服务器后,请创建一个 [Ansible 的清单文件](https://docs.ansible.com/ansible/intro_inventory.html)。你可以手动执行此操作,也可以通过动态清单脚本执行此操作。有关更多信息,请参阅“[建立你自己的清单](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)”。
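下面是一个单节点清单文件(例如 `hosts.yaml`)的最小示意(组名与结构可能随 Kubespray 版本而变化,应以所用版本的示例清单为准;主机名与 IP 为假设的示例值):
```yaml
all:
  hosts:
    node1:
      ansible_host: 192.0.2.11   # SSH 连接地址(示例值)
      ip: 192.0.2.11             # 节点内部 IP示例值
  children:
    kube-master:                 # 控制平面节点(组名以所用版本为准)
      hosts:
        node1:
    kube-node:                   # 工作节点
      hosts:
        node1:
    etcd:                        # etcd 成员
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
```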
### 3/5规划集群部署
Kubespray 能够自定义部署的许多方面:
<!--
* Choice deployment mode: kubeadm or non-kubeadm
* CNI (networking) plugins
* DNS configuration
* Choice of control plane: native/binary or containerized
* Component versions
* Calico route reflectors
* Component runtime options
* {{< glossary_tooltip term_id="docker" >}}
* {{< glossary_tooltip term_id="containerd" >}}
* {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods
-->
* 选择部署模式: kubeadm 或非 kubeadm
* CNI网络插件
* DNS 配置
* 控制平面的选择:本机/可执行文件或容器化
* 组件版本
* Calico 路由反射器
* 组件运行时选项
* {{< glossary_tooltip term_id="docker" >}}
* {{< glossary_tooltip term_id="containerd" >}}
* {{< glossary_tooltip term_id="cri-o" >}}
* 证书生成方式
<!--
Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
-->
可以修改[变量文件](https://docs.ansible.com/ansible/playbooks_variables.html)以进行 Kubespray 定制。
如果你刚刚开始使用 Kubespray请考虑使用 Kubespray 默认设置来部署你的集群并探索 Kubernetes 。
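下面是变量文件中几个常见定制项的示意(文件路径、变量名与默认值以所用 Kubespray 版本的 `group_vars` 示例文件为准,这里的取值仅作说明):
```yaml
# 假设的路径inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml片段
kube_version: v1.19.3         # 要部署的 Kubernetes 版本(示例值)
kube_network_plugin: calico   # CNI 网络插件
container_manager: docker     # 容器运行时docker、containerd 或 crio
dns_mode: coredns             # 集群 DNS 方案
```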
<!--
### (4/5) Deploy a Cluster
Next, deploy your cluster:
Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).
-->
### 4/5部署集群
接下来,部署你的集群:
使用 [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment) 进行集群部署。
```shell
ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \
--private-key=~/.ssh/private_key
```
<!--
Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.
-->
大型部署(超过 100 个节点)可能需要[特定的调整](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md),以获得最佳效果。
<!--
### (5/5) Verify the deployment
Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators.
-->
### 5/5验证部署
Kubespray 提供了一种使用 [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md)
验证 Pod 间连接和 DNS 解析的方法。
Netchecker 确保 netchecker-agents Pod 可以解析 DNS 请求,
并在默认名字空间内相互执行 ping 操作。
这些 Pod 模仿其余工作负载的类似行为,可用作集群运行状况的指示器。
<!--
## Cluster operations
Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.
-->
## 集群操作
Kubespray 提供了其他 Playbooks 来管理集群: _scale__upgrade_
<!--
### Scale your cluster
You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".
-->
### 扩展集群
你可以通过运行 scale playbook 向集群中添加工作节点。有关更多信息,
请参见 “[添加节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)”。
你可以通过运行 remove-node playbook 来从集群中删除工作节点。有关更多信息,
请参见 “[删除节点](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)”。
<!--
### Upgrade your cluster
You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".
-->
### 升级集群
你可以通过运行 upgrade-cluster Playbook 来升级集群。有关更多信息,请参见
“[升级](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)”。
<!--
## Cleanup
You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml).
{{< caution >}}
When running the reset playbook, be sure not to accidentally target your production cluster!
{{< /caution >}}
-->
## 清理
你可以通过 [reset](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml) Playbook
重置节点并清除所有与 Kubespray 一起安装的组件。
{{< caution >}}
运行 reset playbook 时,请确保不要意外地将生产集群作为目标!
{{< /caution >}}
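<!--
An illustrative reset command; double-check which inventory file it points at before running it:
-->
下面是一个示意性的 reset 命令,运行前请再次确认清单文件指向的不是生产集群:

```shell
ansible-playbook -i your/inventory/inventory.ini reset.yml -b -v \
  --private-key=~/.ssh/private_key
```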
<!--
## Feedback
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/))
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues)
-->
## 反馈
* Slack 频道:[#kubespray](https://kubernetes.slack.com/messages/kubespray/)(你可以在[此处](https://slack.k8s.io/)获得邀请)
* [GitHub 问题](https://github.com/kubernetes-sigs/kubespray/issues)
<!--
## {{% heading "whatsnext" %}}
Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).
-->
## {{% heading "whatsnext" %}}
查看有关 Kubespray 的[路线图](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md)的计划工作。

View File

@ -0,0 +1,25 @@
---
title: Turnkey 云解决方案
content_type: concept
weight: 30
---
<!--
---
title: Turnkey Cloud Solutions
content_type: concept
weight: 30
---
-->
<!-- overview -->
<!--
This page provides a list of Kubernetes certified solution providers. From each
provider page, you can learn how to install and setup production
ready clusters.
-->
本页列示 Kubernetes 认证解决方案供应商。
在每一个供应商分页,你可以学习如何安装和设置生产就绪的集群。
<!-- body -->
{{< cncf-landscape helpers=true category="certified-kubernetes-hosted" >}}

View File

@ -1,4 +0,0 @@
---
title: Turnkey 云解决方案
weight: 30
---

View File

@ -1,44 +0,0 @@
---
reviewers:
- colemickens
- brendandburns
title: 在阿里云上运行 Kubernetes
---
<!--
---
reviewers:
- colemickens
- brendandburns
title: Running Kubernetes on Alibaba Cloud
---
-->
<!--
## Alibaba Cloud Container Service
The [Alibaba Cloud Container Service](https://www.alibabacloud.com/product/container-service) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.alibabacloud.com/product/kubernetes). You can get started quickly by following the [Kubernetes walk-through](https://www.alibabacloud.com/help/doc-detail/86737.htm), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
To use custom binaries or open source Kubernetes, follow the instructions below.
-->
## 阿里云容器服务
[阿里云容器服务](https://www.alibabacloud.com/product/container-service)使您可以在阿里云 ECS 实例集群上运行和管理 Docker 应用程序。它支持流行的开源容器编排引擎Docker Swarm 和 Kubernetes。
为了简化集群的部署和管理,请使用 [容器服务 Kubernetes 版](https://www.alibabacloud.com/product/kubernetes)。您可以按照 [Kubernetes 演练](https://www.alibabacloud.com/help/doc-detail/86737.htm)快速入门,其中有一些使用中文书写的[容器服务 Kubernetes 版教程](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1)。
要使用自定义二进制文件或开源版本的 Kubernetes请按照以下说明进行操作。
<!--
## Custom Deployments
The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.
For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474).
-->
## 自定义部署
[阿里云 Kubernetes Cloud Provider 实现](https://github.com/AliyunContainerService/kubernetes) 的源代码是开源的,可在 GitHub 上获得。
有关更多信息,请参阅中文版本[快速部署 Kubernetes - 阿里云上的VPC环境](https://yq.aliyun.com/articles/66474)和[英文版本](https://www.alibabacloud.com/forum/read-830)。

View File

@ -1,166 +0,0 @@
---
title: 在 AWS EC2 上运行 Kubernetes
content_type: task
---
<!--
reviewers:
- justinsb
- clove
title: Running Kubernetes on AWS EC2
content_type: task
-->
<!-- overview -->
<!--
This page describes how to install a Kubernetes cluster on AWS.
-->
本页面介绍了如何在 AWS 上安装 Kubernetes 集群。
## {{% heading "prerequisites" %}}
<!--
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
-->
在 AWS 上创建 Kubernetes 集群,你将需要 AWS 的 Access Key ID 和 Secret Access Key。
<!--
### Supported Production Grade Tools
-->
### 支持的生产级别工具
<!--
* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
-->
* [conjure-up](/zh/docs/setup/) 是 Kubernetes 的开源安装程序,可在 Ubuntu 上创建与原生 AWS 集成的 Kubernetes 集群。
<!--
* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
-->
* [Kubernetes Operations](https://github.com/kubernetes/kops) - 生产级 K8s 的安装、升级和管理。支持在 AWS 运行 Debian、Ubuntu、CentOS 和 RHEL。
<!--
* [kube-aws](https://github.com/kubernetes-incubator/kube-aws), creates and manages Kubernetes clusters with [Flatcar Linux](https://www.flatcar-linux.org/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
-->
* [kube-aws](https://github.com/kubernetes-incubator/kube-aws) 使用 [Flatcar Linux](https://www.flatcar-linux.org/) 节点创建和管理 Kubernetes 集群,它使用了 AWS 工具EC2、CloudFormation 和 Autoscaling。
<!--
* [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades and manages Kubernetes Highly-Available clusters.
-->
* [KubeOne](https://github.com/kubermatic/kubeone) 是一个开源集群生命周期管理工具,它可用于创建,升级和管理高可用 Kubernetes 集群。
<!-- steps -->
<!--
## Getting started with your cluster
-->
## 集群入门
<!--
### Command line administration tool: kubectl
-->
### 命令行管理工具kubectl
<!--
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
Next, add the appropriate binary folder to your `PATH` to access kubectl:
-->
集群启动脚本将在你的工作站上为你提供一个 `kubernetes` 目录。
或者,你可以从[此页面](https://github.com/kubernetes/kubernetes/releases)下载最新的 Kubernetes 版本。
接下来,将适当的二进制文件夹添加到你的 `PATH` 以访问 kubectl
```shell
# macOS
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
<!--
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
-->
此工具的最新文档页面位于此处:[kubectl 手册](/zh/docs/reference/kubectl/kubectl/)
默认情况下,`kubectl` 将使用在集群启动期间生成的 `kubeconfig` 文件对 API 进行身份验证。
有关更多信息,请阅读 [kubeconfig 文件](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)。
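<!--
For example, you can quickly confirm that `kubectl` is talking to the new cluster with:
-->
例如,你可以通过以下命令快速确认 `kubectl` 已连接到新集群(仅作示例):

```shell
# 查看当前上下文与集群控制平面端点
kubectl config current-context
kubectl cluster-info
```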
<!--
### Examples
See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
-->
### 示例
请参阅[一个简单的 nginx 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/)试用你的新集群。
“Guestbook” 应用程序是另一个入门 Kubernetes 的流行示例:[guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)。
有关更完整的应用程序,请查看[示例目录](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)。
<!--
## Scaling the cluster
Adding and removing nodes through `kubectl` is not supported. You can still scale the amount of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
-->
## 集群伸缩
不支持通过 `kubectl` 添加和删除节点。你仍然可以通过调整在安装过程中创建的
[Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html)
中的 “Desired” 和 “Max” 属性来手动伸缩节点数量。
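<!--
For example (the Auto Scaling group name below is illustrative), you can adjust the desired and maximum node count with the AWS CLI:
-->
例如,可以使用 AWS CLI 调整期望节点数和最大节点数(其中的 Auto Scaling 组名称仅为占位示例):

```shell
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name kubernetes-minion-group \
  --desired-capacity 5 \
  --max-size 5
```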
<!--
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
-->
## 集群拆除
确保用于配置集群的环境变量仍处于导出状态,然后在 `kubernetes` 目录中运行以下脚本:
```shell
cluster/kube-down.sh
```
<!--
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
-->
## 支持等级
IaaS 提供商 | 配置管理 | 操作系统 | 网络 | 文档 | 符合率 | 支持等级
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | [docs](/zh/docs/setup/) | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/zh/docs/setup/) | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
<!--
## Further reading
Please see the [Kubernetes docs](/docs/) for more details on administering
and using a Kubernetes cluster.
-->
## 进一步阅读
请参阅 [Kubernetes 文档](/zh/docs/)了解有关管理和使用 Kubernetes 集群的更多详细信息。

View File

@ -1,76 +0,0 @@
---
reviewers:
- colemickens
- brendandburns
title: 在 Azure 上运行 Kubernetes
---
<!--
---
reviewers:
- colemickens
- brendandburns
title: Running Kubernetes on Azure
---
-->
<!--
## Azure Kubernetes Service (AKS)
The [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) offers simple
deployments for Kubernetes clusters.
For an example of deploying a Kubernetes cluster onto Azure via the Azure Kubernetes Service:
**[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/zh-cn/azure/aks/intro-kubernetes)**
-->
## Azure Kubernetes 服务 (AKS)
[Azure Kubernetes 服务](https://azure.microsoft.com/zh-cn/services/kubernetes-service/)提供了简单的
Kubernetes 集群部署方式。
有关通过 Azure Kubernetes 服务将 Kubernetes 集群部署到 Azure 的示例:
**[微软 Azure Kubernetes 服务](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)**
<!--
## Custom Deployments: AKS-Engine
The core of the Azure Kubernetes Service is **open source** and available on GitHub for the community
to use and contribute to: **[AKS-Engine](https://github.com/Azure/aks-engine)**. The legacy [ACS-Engine](https://github.com/Azure/acs-engine) codebase has been deprecated in favor of AKS-engine.
AKS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Kubernetes
Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
agent pools, and more. Some community contributions to AKS-Engine may even become features of the Azure Kubernetes Service.
The input to AKS-Engine is an apimodel JSON file describing the Kubernetes cluster. It is similar to the Azure Resource Manager (ARM) template syntax used to deploy a cluster directly with the Azure Kubernetes Service. The resulting output is an ARM template that can be checked into source control and used to deploy Kubernetes clusters to Azure.
You can get started by following the **[AKS-Engine Kubernetes Tutorial](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**.
-->
## 定制部署AKS 引擎
Azure Kubernetes 服务的核心是**开源的**,可供社区在 GitHub 上使用和参与贡献:**[AKS 引擎](https://github.com/Azure/aks-engine)**。旧版 [ACS 引擎](https://github.com/Azure/acs-engine) 代码库已被弃用,取而代之的是 AKS 引擎。
如果您需要在 Azure Kubernetes 服务正式支持的范围之外对部署进行自定义,则 AKS 引擎是一个不错的选择。这些自定义包括部署到现有虚拟网络中,利用多个代理程序池等。一些社区对 AKS 引擎的贡献甚至可能成为 Azure Kubernetes 服务的特性。
AKS 引擎的输入是一个描述 Kubernetes 集群的 apimodel JSON 文件。它和用于直接通过 Azure Kubernetes 服务部署集群的 Azure 资源管理器ARM模板语法相似。产生的输出是一个 ARM 模板,可以将其签入源代码管理,并使用它将 Kubernetes 集群部署到 Azure。
您可以按照 **[AKS 引擎 Kubernetes 教程](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**开始使用。
<!--
## CoreOS Tectonic for Azure
The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
-->
## 适用于 Azure 的 CoreOS Tectonic
适用于 Azure 的 CoreOS Tectonic Installer 是**开源的**,它可以让社区在 GitHub 上使用和参与贡献:**[Tectonic Installer](https://github.com/coreos/tectonic-installer)**。
当您需要对集群进行自定义时,Tectonic Installer 是一个不错的选择,因为它是基于 [Hashicorp 的 Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure 资源管理器(ARM)提供程序构建的。这使用户可以使用熟悉的 Terraform 工具进行自定义或集成。
您可以按照[在 Azure 上安装 Tectonic 指南](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html)开始使用。

View File

@ -1,398 +0,0 @@
---
title: 在谷歌计算引擎上运行 Kubernetes
content_type: task
---
<!--
---
reviewers:
- brendandburns
- jbeda
- mikedanese
- thockin
title: Running Kubernetes on Google Compute Engine
content_type: task
---
-->
<!-- overview -->
<!--
The example below creates a Kubernetes cluster with 3 worker node Virtual Machines and a master Virtual Machine (i.e. 4 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
-->
下面的示例创建了一个 Kubernetes 集群,其中包含 3 个工作节点虚拟机和 1 个主虚拟机(即集群中有 4 个虚拟机)。
这个集群是在你的工作站(或你认为方便的任何地方)设置和控制的。
## {{% heading "prerequisites" %}}
<!--
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
-->
如果你想要一个简化的入门体验和 GUI 来管理集群,
请考虑尝试[谷歌 Kubernetes 引擎](https://cloud.google.com/kubernetes-engine/)来安装和管理托管集群。
<!--
For an easy way to experiment with the Kubernetes development environment, click the button below
to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
-->
有一个简单的方式可以使用 Kubernetes 开发环境进行实验,
就是点击下面的按钮,打开 Google Cloud Shell其中包含了 Kubernetes 源仓库自动克隆的副本。
<!--
[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
-->
[![在 Cloud Shell 中打开](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
<!--
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
-->
如果你想要使用定制的二进制或者纯开源的 Kubernetes请继续阅读下面的指导。
<!-- ### Prerequisites -->
### 前提条件 {#prerequisites}
<!--
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
-->
1. 你需要一个启用了计费的谷歌云平台账号。
更多细节请访问[谷歌开发者控制台](https://console.cloud.google.com)。
1. 根据需要安装 `gcloud`
`gcloud` 可作为[谷歌云 SDK](https://cloud.google.com/sdk/) 的一部分安装。
1. 在[谷歌云开发者控制台](https://console.developers.google.com/apis/library)
启用[计算引擎实例组管理器 API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview)
1. 确保将 gcloud 设置成使用你想要的谷歌云平台项目。
你可以使用 `gcloud config list project` 检查当前项目,
并通过 `gcloud config set project <project-id>` 修改它。
1. 通过运行 `gcloud auth login`,确保你拥有 GCloud 的凭据。
1. (可选)如果需要调用 GCE 的 API你也必须运行 `gcloud auth application-default login`
1. 确保你能通过命令行启动 GCE 虚拟机。
至少确保你可以完成 GCE 快速入门的[创建实例](https://cloud.google.com/compute/docs/instances/#startinstancegcloud)部分。
1. 确保你在没有交互式提示的情况下 SSH 到虚拟机。
查看 GCE 快速入门的[登录实例](https://cloud.google.com/compute/docs/instances/#sshing)部分。
<!-- steps -->
<!-- ## Starting a cluster -->
## 启动集群
<!--
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
-->
你可以安装一个客户端,并使用下面任一命令来启动集群(两条命令都列出来,是因为你的机器上可能只安装了其中之一):
```shell
curl -sS https://get.k8s.io | bash
```
```shell
wget -q -O - https://get.k8s.io | bash
```
<!--
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
-->
这条命令完成后,你将拥有 1 个主虚拟机和 4 个工作虚拟机,它们一起作为 Kubernetes 集群运行。
<!--
By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
-->
默认情况下,有一些容器已经在你的集群上运行。
`fluentd` 这样的容器提供[日志记录](/zh/docs/concepts/cluster-administration/logging/)
`heapster` 提供[监控](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md)服务。
<!--
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
-->
由上述命令运行的脚本创建了一个名称/前缀为“kubernetes”的集群。
它定义了一个特定的集群配置,所以此脚本只能运行一次。
<!--
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
-->
或者,你可以通过[这个页面](https://github.com/kubernetes/kubernetes/releases)下载和安装最新版本的 Kubernetes
然后运行 `<kubernetes>/cluster/kube-up.sh` 脚本启动集群:
```shell
cd kubernetes
cluster/kube-up.sh
```
<!--
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
-->
如果你希望在项目中运行多个集群,希望使用一个不同名称,或者不同数量工作节点的集群,
请查看 `<kubernetes>/cluster/gce/config-default.sh` 文件,以便在启动集群之前进行更细粒度的配置。
<!--
If you run into trouble, please see the section on [troubleshooting](/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on `#gke` Slack channel.
-->
如果你遇到了问题,请参阅[错误排查](#troubleshooting)一节,
发布到 [Kubernetes 论坛](https://discuss.kubernetes.io),或者来 `#gke` Slack 频道中提问。
<!-- The next few steps will show you: -->
接下来的几个步骤会告诉你:
<!--
1. How to set up the command line client on your workstation to manage the cluster
2. Examples of how to use the cluster
3. How to delete the cluster
4. How to start clusters with non-default options (like larger clusters)
-->
1. 如何在你的工作站设置命令行客户端来管理集群
2. 如何使用集群的示例
3. 如何删除集群
4. 如何以非默认选项启动集群(如规模较大的集群)
<!-- ## Installing the Kubernetes command line tools on your workstation -->
## 在你的工作站安装 Kubernetes 命令行工具
<!--
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
-->
集群启动脚本将在你的工作站上留下一个正在运行的集群和一个 `kubernetes` 目录。
<!--
The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
manager. It lets you inspect your cluster resources, create, delete, and update
components, and much more. You will use it to look at your new cluster and bring
up example apps.
-->
[kubectl](/zh/docs/reference/kubectl/kubectl/) 工具控制 Kubernetes 集群管理器。
它允许你检查集群资源,创建、删除和更新组件等等。
你将使用它来查看新集群并启动示例应用程序。
<!--
You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
-->
你可以使用 `gcloud` 在工作站上安装 `kubectl` 命令行工具:
```shell
gcloud components install kubectl
```
{{< note >}}
<!--
The kubectl version bundled with `gcloud` may be older than the one
downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/tools/install-kubectl/)
document to see how you can set up the latest `kubectl` on your workstation.
-->
`gcloud` 绑定的 kubectl 版本可能比 get.k8s.io 安装脚本所下载的版本更老。
查看[安装 kubectl](/zh/docs/tasks/tools/install-kubectl/) 文档,了解如何在工作站上设置最新的 `kubectl`
{{< /note >}}
<!-- ## Getting started with your cluster -->
## 开始使用你的集群
<!-- ### Inspect your cluster -->
### 检查你的集群
<!--
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
-->
一旦 `kubectl` 存在于你的路径中,你就可以使用它来查看集群,例如,运行:
```
kubectl get --all-namespaces services
```
<!--
should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:
-->
应该显示 [services](/zh/docs/concepts/services-networking/service/) 集合,看起来像这样:
```
NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
kube-system kube-dns ClusterIP 10.0.0.2 <none> 53/TCP,53/UDP 1d
kube-system kube-ui ClusterIP 10.0.0.3 <none> 80/TCP 1d
...
```
<!--
Similarly, you can take a look at the set of [pods](/docs/concepts/workloads/pods/) that were created during cluster startup.
You can do this via the
-->
类似的,你可以查看在集群启动时创建的 [pods](/zh/docs/concepts/workloads/pods/) 的集合。
你可以通过命令:
```
kubectl get --all-namespaces pods
```
<!--
You'll see a list of pods that looks something like this (the name specifics will be different):
-->
你将会看到 Pod 的列表,看起来像这样(名称和细节会有所不同):
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5f4fbb68df-mc8z8 1/1 Running 0 15m
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
```
<!--
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
-->
一些 Pod 启动可能需要几秒钟(在此期间它们会显示 `Pending`
但是在短时间后请检查它们是否都显示为 `Running`
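<!--
For example, you can watch the Pods until they all report `Running`:
-->
例如,你可以持续观察这些 Pod,直到它们全部显示为 `Running`:

```shell
kubectl get pods --all-namespaces --watch
```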
<!-- ### Run some examples -->
### 运行示例
<!--
Then, see [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
-->
接下来,请参阅[一个简单的 nginx 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/)来试用你的新集群。
<!--
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
-->
要获得完整的应用,请查看 [examples 目录](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)。
[guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
是一个很好的“入门”演练。
<!-- ## Tearing down the cluster -->
## 拆除集群
<!-- To remove/delete/teardown the cluster, use the `kube-down.sh` script. -->
要移除/删除/拆除集群,请使用 `kube-down.sh` 脚本。
```shell
cd kubernetes
cluster/kube-down.sh
```
<!--
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation.
-->
同样地,同一目录下的 `kube-up.sh` 脚本会让集群重新运行起来。
你不需要再次运行 `curl``wget` 命令:现在 Kubernetes 集群所需的一切都在你的工作站上。
<!-- ## Customizing -->
## 定制
<!--
The script above relies on Google Storage to stage the Kubernetes release. It
then will start (by default) a single master VM along with 3 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
-->
上面的脚本依赖谷歌存储来暂存 Kubernetes 发行版本。
随后,该脚本(默认情况下)会启动 1 个主虚拟机和 3 个工作虚拟机。
你可以通过编辑 `kubernetes/cluster/gce/config-default.sh` 来调整这些参数。
你可以在[这里](https://gist.github.com/satnam6502/fc689d1b46db9772adea)查看成功创建集群的记录。
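<!--
As an illustration, some settings can also be overridden via environment variables before running `kube-up.sh` (variable names follow `config-default.sh` for your release; values are examples only):
-->
作为示例,部分设置也可以在运行 `kube-up.sh` 之前通过环境变量覆盖(变量名取自你所用版本的 `config-default.sh`,取值仅作演示):

```shell
export NUM_NODES=5
export KUBE_GCE_ZONE=us-central1-b
cluster/kube-up.sh
```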
<!-- ## Troubleshooting -->
## 故障排除 {#troubleshooting}
<!-- ### Project settings -->
### 项目设置
<!--
You need to have the Google Cloud Storage API, and the Google Cloud Storage
JSON API enabled. It is activated by default for new projects. Otherwise, it
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
details.
-->
你需要启用 Google Cloud Storage API 和 Google Cloud Storage JSON API。
默认情况下,对新项目都是激活的。
如果未激活,可以在谷歌云控制台设置。
更多细节,请查看[谷歌云存储 JSON API 概览](https://cloud.google.com/storage/docs/json_api/)。
<!--
Also ensure that-- as listed in the [Prerequisites section](#prerequisites)-- you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
-->
也要确保——正如在[前提条件](#prerequisites)中列出的那样——
你已经启用了 `Compute Engine Instance Group Manager API`
并且可以像 [GCE 快速入门](https://cloud.google.com/compute/docs/quickstart)指导那样从命令行启动 GCE 虚拟机。
<!-- ### Cluster initialization hang -->
### 集群初始化过程停滞
<!--
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
-->
如果 Kubernetes 启动脚本停滞,等待 API 可达,
你可以 SSH 登录到主虚拟机和工作虚拟机,
通过查看 `/var/log/startupscript.log` 日志来排除故障。
<!--
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
-->
**一旦解决了这个问题,你应该在部分集群创建之后运行 `kube-down.sh` 来进行清理**,然后再运行 `kube-up.sh` 重试。
### SSH
<!--
If you're having trouble SSHing into your instances, ensure the GCE firewall
isn't blocking port 22 to your VMs. By default, this should work but if you
have edited firewall rules or created a new non-default network, you'll need to
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
--description "SSH allowed from anywhere" --allow tcp:22`
-->
如果在 SSH 登录实例时遇到困难,确保 GCE 防火墙没有阻塞你虚拟机的 22 端口。
默认情况下应该可用,但是如果你编辑了防火墙规则或者创建了一个新的非默认网络,
你需要公开它:`gcloud compute firewall-rules create default-ssh --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22`
<!--
Additionally, your GCE SSH key must either have no passcode or you need to be
using `ssh-agent`.
-->
此外,你的 GCE SSH 密钥不能有密码,否则你需要使用 `ssh-agent`
<!-- ### Networking -->
### 网络
<!--
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network or if you change the network
being used in `cluster/config-default.sh` create a new rule with the following
field values:
-->
虚拟机实例必须能够使用它们的私有 IP 彼此连接。
该脚本使用 "default" 网络,此网络应该有一个名为 "default-allow-internal" 的防火墙规则,
此规则允许通过私有 IP 上的任何端口进行通信。
如果默认网络中缺少此规则,或者更改了 `cluster/config-default.sh` 中使用的网络,
请用以下字段值创建一个新规则(可参考列表后面的示例命令):
<!--
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
-->
* 源范围:`10.0.0.0/8`
* 允许的协议和端口:`tcp:1-65535;udp:1-65535;icmp`
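<!--
A hedged `gcloud` sketch of creating such a rule (the rule name and network name are placeholders):
-->
下面是创建这类规则的一个 `gcloud` 示意命令(规则名和网络名仅为占位示例):

```shell
gcloud compute firewall-rules create default-allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```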
<!-- ## Support Level -->
## 支持等级
<!--
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/production-environment/turnkey/gce/) | | Project
-->
IaaS 提供商 | 配置管理 | 操作系统 | 网络 | 文档 | 符合率 | 支持等级
---------- | --------- | ------ | ---- | --------------------------------------------------------- | ----- | -------
GCE | Saltstack | Debian | GCE | [docs](/zh/docs/setup/production-environment/turnkey/gce/) | | Project

View File

@ -1,162 +0,0 @@
---
reviewers:
- bradtopol
title: 使用 IBM Cloud Private 在多个云上运行 Kubernetes
---
<!--
---
reviewers:
- bradtopol
title: Running Kubernetes on Multiple Clouds with IBM Cloud Private
---
-->
<!--
IBM® Cloud Private is a turnkey cloud solution and an on-premises turnkey cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform.
-->
IBM® Cloud Private 是一个一站式云解决方案,也是一个本地部署的一站式云解决方案。IBM Cloud Private 提供纯上游 Kubernetes,以及运行实际企业工作负载所需的典型管理组件。这些组件包括健康管理、日志管理、审计跟踪以及用于跟踪平台上工作负载使用情况的计量。
<!--
IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started).
-->
IBM Cloud Private 提供了社区版和全支持的企业版。可从 [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/) 免费获得社区版本。企业版支持高可用性拓扑,并包括 IBM 对 Kubernetes 和 IBM Cloud Private 管理平台的商业支持。如果您想尝试 IBM Cloud Private您可以使用托管试用版、教程或自我指导演示。您也可以尝试免费的社区版。有关详细信息请参阅 [IBM Cloud Private 入门](https://www.ibm.com/cloud/private/get-started)。
<!--
For more information, explore the following resources:
* [IBM Cloud Private](https://www.ibm.com/cloud/private)
* [Reference architecture for IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
* [IBM Cloud Private documentation](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)
-->
有关更多信息,请浏览以下资源:
* [IBM Cloud Private](https://www.ibm.com/cloud/private)
* [IBM Cloud Private 参考架构](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
* [IBM Cloud Private 文档](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)
<!--
## IBM Cloud Private and Terraform
The following modules are available where you can deploy IBM Cloud Private by using Terraform:
* AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
* Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
-->
## IBM Cloud Private 和 Terraform
您可以利用以下模块,使用 Terraform 部署 IBM Cloud Private:
* AWS[将 IBM Cloud Private 部署到 AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
* Azure[将 IBM Cloud Private 部署到 Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
* IBM Cloud[将 IBM Cloud Private 集群部署到 IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
* OpenStack[将IBM Cloud Private 部署到 OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Terraform 模块:[在任何支持的基础架构供应商上部署 IBM Cloud Private](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
* VMware[将 IBM Cloud Private 部署到 VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
<!--
## IBM Cloud Private on AWS
-->
## AWS 上的 IBM Cloud Private
<!--
You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform.
-->
您可以使用 AWS CloudFormation 或 Terraform 在 Amazon Web ServicesAWS上部署 IBM Cloud Private 集群。
<!--
IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide.
-->
IBM Cloud Private 快速入门可以自动将 IBM Cloud Private 部署到 AWS Cloud 上的新虚拟私有云(VPC)中。常规部署大约需要 60 分钟,而高可用性(HA)部署大约需要 75 分钟。快速入门包括 AWS CloudFormation 模板和部署指南。
<!--
This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/).
-->
这个快速入门适用于希望探索应用程序现代化并希望通过使用 IBM Cloud Private 和 IBM 工具加速实现其数字化转换目标的用户。快速入门可帮助用户在 AWS 上快速部署高可用性HA、生产级的 IBM Cloud Private 参考架构。有关所有详细信息和部署指南,请参阅 [IBM Cloud Private 在 AWS 上的快速入门 ](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/)。
<!--
IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md).
-->
IBM Cloud Private 也可以通过使用 Terraform 在 AWS 云平台上运行。要在 AWS EC2 环境中部署 IBM Cloud Private请参阅[在 AWS 上安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md)。
<!--
## IBM Cloud Private on Azure
You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html).
-->
## Azure 上的 IBM Cloud Private
您可以启用 Microsoft Azure 作为 IBM Cloud Private 部署的云提供者,并利用 Azure 公共云上的所有 IBM Cloud Private 功能。有关更多信息,请参阅 [Azure 上的 IBM Cloud Private](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html)。
<!--
## IBM Cloud Private with Red Hat OpenShift
-->
## 带有 Red Hat OpenShift 的 IBM Cloud Private
<!--
You can deploy IBM certified software containers that are running on IBM Cloud Private onto Red Hat OpenShift.
-->
您可以将在 IBM Cloud Private 上运行的 IBM 认证的软件容器部署到 Red Hat OpenShift 上。
<!--
Integration capabilities:
* Supports Linux® 64-bit platform in offline-only installation mode
* Single-master configuration
* Integrated IBM Cloud Private cluster management console and catalog
* Integrated core platform services, such as monitoring, metering, and logging
* IBM Cloud Private uses the OpenShift image registry
-->
整合能力:
* 仅在离线安装模式下支持 Linux® 64 位平台
* 单主控节点配置
* 集成的 IBM Cloud Private 集群管理控制台和目录
* 集成的核心平台服务,例如监控、计量和日志
* IBM Cloud Private 使用 OpenShift 镜像仓库
<!--
For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html).
-->
有关更多信息,请参阅 [OpenShift 上的 IBM Cloud Private](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html)。
<!--
## IBM Cloud Private on VirtualBox
To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).
-->
## VirtualBox 上的 IBM Cloud Private
要将 IBM Cloud Private 安装到 VirtualBox 环境,请参阅[在 VirtualBox 上安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox)。
<!--
## IBM Cloud Private on VMware
-->
## VMware 上的 IBM Cloud Private
<!--
You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:
-->
您可以使用 Ubuntu 或 RHEL 镜像在 VMware 上安装 IBM Cloud Private。有关详细信息请参见以下项目
<!--
* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
-->
* [使用 Ubuntu 安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
* [使用 Red Hat Enterprise 安装 IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
<!--
The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.
-->
IBM Cloud Private Hosted 服务会自动在您的 VMware vCenter Server 实例上部署 IBM Cloud Private Hosted。此服务将微服务和容器的功能带到 IBM Cloud 上的 VMware 环境中。使用此服务,您可以将同样熟悉的 VMware 和 IBM Cloud Private 操作模型和工具从本地扩展到 IBM Cloud。
<!--
For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview).
-->
有关更多信息,请参阅 [IBM Cloud Private Hosted 服务](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview)。

View File

@ -1,44 +0,0 @@
---
title: 在腾讯云容器服务上运行 Kubernetes
---
<!--
---
title: Running Kubernetes on Tencent Kubernetes Engine
---
-->
<!--
## Tencent Kubernetes Engine
[Tencent Cloud Tencent Kubernetes Engine (TKE)](https://intl.cloud.tencent.com/product/tke) provides native Kubernetes container management services. You can deploy and manage a Kubernetes cluster with TKE in just a few steps. For detailed directions, see [Deploy Tencent Kubernetes Engine](https://intl.cloud.tencent.com/document/product/457/11741).
TKE is a [Certified Kubernetes product](https://www.cncf.io/certification/software-conformance/).It is fully compatible with the native Kubernetes API.
-->
## 腾讯云容器服务
[腾讯云容器服务TKE](https://intl.cloud.tencent.com/product/tke)提供本地 Kubernetes 容器管理服务。您只需几个步骤即可使用 TKE 部署和管理 Kubernetes 集群。有关详细说明,请参阅[部署腾讯云容器服务](https://intl.cloud.tencent.com/document/product/457/11741)。
TKE 是[认证的 Kubernetes 产品](https://www.cncf.io/certification/software-conformance/)。它与原生 Kubernetes API 完全兼容。
<!--
## Custom Deployment
The core of Tencent Kubernetes Engine is open source and available [on GitHub](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager/).
When using TKE to create a Kubernetes cluster, you can choose managed mode or independent deployment mode. In addition, you can customize the deployment as needed; for example, you can choose an existing Cloud Virtual Machine instance for cluster creation or enable Kube-proxy in IPVS mode.
-->
## 定制部署
腾讯 Kubernetes Engine 的核心是开源的,并且可以在 [GitHub](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager/) 上使用。
使用 TKE 创建 Kubernetes 集群时,可以选择托管模式或独立部署模式。另外,您可以根据需要自定义部署。例如,您可以选择现有的 Cloud Virtual Machine 实例来创建集群,也可以在 IPVS 模式下启用 Kube-proxy。
<!--
## What's Next
To learn more, see the [TKE documentation](https://intl.cloud.tencent.com/document/product/457).
-->
## 下一步
要了解更多信息,请参阅 [TKE 文档](https://intl.cloud.tencent.com/document/product/457)。

View File

@ -59,8 +59,8 @@ kubectl create namespace qos-example
For a Pod to be given a QoS class of Guaranteed:
* Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
* Every Container in the Pod must have a CPU limit and a CPU request, and they must be the same.
* Every Container, including init containers, in the Pod must have a memory limit and a memory request, and they must be the same.
* Every Container, including init containers, in the Pod must have a CPU limit and a CPU request, and they must be the same.
Here is the configuration file for a Pod that has one Container. The Container has a memory limit and a
memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU request, both equal to 700 milliCPU:
@ -69,8 +69,8 @@ memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU r
对于 QoS 类为 Guaranteed 的 Pod
* Pod 中的每个容器必须指定内存请求和内存限制,并且两者要相等。
* Pod 中的每个容器必须指定 CPU 请求和 CPU 限制,并且两者要相等。
* Pod 中的每个容器,包含初始化容器,必须指定内存请求和内存限制,并且两者要相等。
* Pod 中的每个容器,包含初始化容器,必须指定 CPU 请求和 CPU 限制,并且两者要相等。
下面是包含一个容器的 Pod 配置文件。
容器设置了内存请求和内存限制,值都是 200 MiB。

View File

@ -201,7 +201,7 @@ JSON/YAML 格式的 Pod 定义文件。
<!--
1. Create a YAML file and store it on a web server so that you can pass the URL of that file to the kubelet.
-->
1. 创建一个 YAML 文件,并保存在保存在 web 服务上,为 kubelet 生成一个 URL。
1. 创建一个 YAML 文件,并保存在 web 服务上,为 kubelet 生成一个 URL。
```yaml
apiVersion: v1

View File

@ -3,7 +3,7 @@ title: Pod 水平自动扩缩
feature:
title: 水平扩缩
description: >
使用一个简单的命令、一个UI或基于CPU使用情况自动对应用程序进行扩缩。
使用一个简单的命令、一个 UI 或基于 CPU 使用情况自动对应用程序进行扩缩。
content_type: concept
weight: 90
@ -12,14 +12,14 @@ weight: 90
<!-- overview -->
<!--
The Horizontal Pod Autoscaler automatically scales the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization (or, with
The Horizontal Pod Autoscaler automatically scales the number of Pods
in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with
[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
support, on some other application-provided metrics). Note that Horizontal
Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
-->
Pod 水平自动扩缩Horizontal Pod Autoscaler
可以基于 CPU 利用率自动扩缩 ReplicationController、Deployment 和 ReplicaSet 中的 Pod 数量。
可以基于 CPU 利用率自动扩缩 ReplicationController、Deployment、ReplicaSet 和 StatefulSet 中的 Pod 数量。
除了 CPU 利用率,也可以基于其他应程序提供的[自定义度量指标](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
来执行自动扩缩。
Pod 自动扩缩不适用于无法扩缩的对象,比如 DaemonSet。
@ -61,12 +61,12 @@ or the custom metrics API (for all other metrics).
<!--
* For per-pod resource metrics (like CPU), the controller fetches the metrics
from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler.
from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler.
Then, if a target utilization value is set, the controller calculates the utilization
value as a percentage of the equivalent resource request on the containers in
each pod. If a target raw value is set, the raw metric values are used directly.
each Pod. If a target raw value is set, the raw metric values are used directly.
The controller then takes the mean of the utilization or the raw value (depending on the type
of target specified) across all targeted pods, and produces a ratio used to scale
of target specified) across all targeted Pods, and produces a ratio used to scale
the number of desired replicas.
-->
* 对于按 Pod 统计的资源指标(如 CPU控制器从资源指标 API 中获取每一个
@ -76,8 +76,8 @@ or the custom metrics API (for all other metrics).
接下来,控制器根据平均的资源使用率或原始值计算出扩缩的比例,进而计算出目标副本数。
<!--
Please note that if some of the pod's containers do not have the relevant resource request set,
CPU utilization for the pod will not be defined and the autoscaler will
Please note that if some of the Pod's containers do not have the relevant resource request set,
CPU utilization for the Pod will not be defined and the autoscaler will
not take any action for that metric. See the [algorithm
details](#algorithm-details) section below for more information about
how the autoscaling algorithm works.
@ -96,12 +96,12 @@ or the custom metrics API (for all other metrics).
* For object metrics and external metrics, a single metric is fetched, which describes
the object in question. This metric is compared to the target
value, to produce a ratio as above. In the `autoscaling/v2beta2` API
version, this value can optionally be divided by the number of pods before the
version, this value can optionally be divided by the number of Pods before the
comparison is made.
-->
* 如果pod 使用对象指标和外部指标(每个指标描述一个对象信息)。
* 如果 Pod 使用对象指标和外部指标(每个指标描述一个对象信息)。
这个指标将直接根据目标设定值相比较,并生成一个上面提到的扩缩比例。
`autoscaling/v2beta2` 版本API中这个指标也可以根据 Pod 数量平分后再计算。
`autoscaling/v2beta2` 版本 API 中,这个指标也可以根据 Pod 数量平分后再计算。
<!--
The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
@ -136,7 +136,7 @@ each of their current states. More details on scale sub-resource can be found
[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
-->
自动扩缩控制器使用 scale 子资源访问相应可支持扩缩的控制器(如副本控制器、
Deployments 和 ReplicaSet
Deployment 和 ReplicaSet
`scale` 是一个可以动态设定副本数量和检查当前状态的接口。
关于 scale 子资源的更多信息,请参考[这里](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
@ -182,7 +182,7 @@ metric across all Pods in the HorizontalPodAutoscaler's scale target.
Before checking the tolerance and deciding on the final values, we take
pod readiness and missing metrics into consideration, however.
-->
如果 HorizontalPodAutoscaler 指定的是`targetAverageValue` 或 `targetAverageUtilization`
如果 HorizontalPodAutoscaler 指定的是 `targetAverageValue``targetAverageUtilization`
那么将会把指定 Pod 度量值的平均值做为 `currentMetricValue`
然而,在检查容忍度和决定最终扩缩值前,我们仍然会把那些无法获取指标的 Pod 统计进去。
@ -193,7 +193,7 @@ shut down) and all failed Pods are discarded.
If a particular Pod is missing metrics, it is set aside for later; Pods
with missing metrics will be used to adjust the final scaling amount.
-->
所有被标记了删除时间戳Pod 正在关闭过程中)的 Pod 和 失败的 Pod 都会被忽略。
所有被标记了删除时间戳Pod 正在关闭过程中)的 Pod 和失败的 Pod 都会被忽略。
如果某个 Pod 缺失度量值,它将会被搁置,只在最终确定扩缩数量时再考虑。
@ -229,7 +229,7 @@ default is 5 minutes.
The `currentMetricValue / desiredMetricValue` base scale ratio is then
calculated using the remaining pods not set aside or discarded from above.
-->
在排除掉被搁置的 Pod 后,扩缩比例就会根据`currentMetricValue/desiredMetricValue`
在排除掉被搁置的 Pod 后,扩缩比例就会根据 `currentMetricValue/desiredMetricValue`
计算出来。
<!--
@ -274,20 +274,23 @@ used.
<!--
If multiple metrics are specified in a HorizontalPodAutoscaler, this
calculation is done for each metric, and then the largest of the desired
replica counts is chosen. If any of those metrics cannot be converted
replica counts is chosen. If any of these metrics cannot be converted
into a desired replica count (e.g. due to an error fetching the metrics
from the metrics APIs), scaling is skipped.
from the metrics APIs) and a scale down is suggested by the metrics which
can be fetched, scaling is skipped. This means that the HPA is still capable
of scaling up if one or more metrics give a `desiredReplicas` greater than
the current value.
-->
如果创建 HorizontalPodAutoscaler 时指定了多个指标,
那么会按照每个指标分别计算扩缩副本数,取最大的进行扩缩。
如果任何一个指标无法顺利的计算出扩缩副本数(比如,通过 API 获取指标时出错),
那么本次扩缩会被跳过。
那么会按照每个指标分别计算扩缩副本数,取最大值进行扩缩。
如果任何一个指标无法顺利地计算出扩缩副本数(比如,通过 API 获取指标时出错),
并且可获取的指标建议缩容,那么本次扩缩会被跳过。
这表示,如果一个或多个指标给出的 `desiredReplicas` 值大于当前值HPA 仍然能实现扩容。
<!--
Finally, just before HPA scales the target, the scale recommendation is recorded. The
controller considers all recommendations within a configurable window choosing the
highest recommendation from within that window.
This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly
fluctuating metric values.
-->
@ -321,13 +324,13 @@ API 的 beta 版本(`autoscaling/v2beta2`)引入了基于内存和自定义
<!--
When you create a HorizontalPodAutoscaler API object, make sure the name specified is a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
[HorizontalPodAutoscaler Object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#horizontalpodautoscaler-v1-autoscaling).
-->
创建 HorizontalPodAutoscaler 对象时,需要确保所给的名称是一个合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
有关 API 对象的更多信息,请查阅[HorizontalPodAutoscaler 对象设计文档](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object)。
有关 API 对象的更多信息,请查阅
[HorizontalPodAutoscaler 对象设计文档](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#horizontalpodautoscaler-v1-autoscaling)。
<!--
## Support for Horizontal Pod Autoscaler in kubectl
@ -360,13 +363,12 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
<!--
## Autoscaling during rolling update
Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
by managing replication controllers directly,
or by using the deployment object, which manages the underlying replica sets for you.
Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object,
which manages the underlying replica sets for you.
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
-->
## 滚动升级时扩缩 {#autoscaling-during-roling-update}
## 滚动升级时扩缩 {#autoscaling-during-rolling-update}
目前在 Kubernetes 中,可以针对 ReplicationController 或 Deployment 执行
滚动更新,它们会为你管理底层副本数。
@ -375,13 +377,12 @@ HPA 设置副本数量时Deployment 会设置底层副本数。
<!--
Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
The reason this doesn't work is that when rolling update creates a new replication controller,
the Horizontal Pod Autoscaler will not be bound to the new replication controller.
-->
通过直接操控副本控制器执行滚动升级时HPA 不能工作,
也就是说你不能将 HPA 绑定到某个 RC 再执行滚动升级
(例如使用 `kubectl rolling-update` 命令)。
也就是说你不能将 HPA 绑定到某个 RC 再执行滚动升级。
HPA 不能工作的原因是它无法绑定到滚动更新时所新创建的副本控制器。
<!--
@ -492,25 +493,35 @@ APIs, cluster administrators must ensure that:
集群管理员需要确保下述条件,以保证 HPA 控制器能够访问这些 API(可参考列表后面的示例检查命令):
<!--
* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled.
* The [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) is enabled.
* The corresponding APIs are registered:
* For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server).
* For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-sigs/metrics-server).
It can be launched as a cluster addon.
* For custom metrics, this is the `custom.metrics.k8s.io` API. It's provided by "adapter" API servers provided by metrics solution vendors.
Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api).
If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started.
If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-sigs/custom-metrics-apiserver) to get started.
* For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above.
* The `--horizontal-pod-autoscaler-use-rest-clients` is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated.
-->
* 启用了 [API 聚合层](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
* 相应的 API 已注册:
* 对于资源指标,将使用 `metrics.k8s.io` API,一般由 [metrics-server](https://github.com/kubernetes-sigs/metrics-server) 提供。
它可以作为集群插件启动。
* 对于自定义指标,将使用 `custom.metrics.k8s.io` API。
* 对于自定义指标,将使用 `custom.metrics.k8s.io` API。
它由其他度量指标方案厂商的“适配器Adapter” API 服务器提供。
确认你的指标流水线,或者查看[已知方案列表](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api)。
如果你想自己编写,请从 [boilerplate](https://github.com/kubernetes-sigs/custom-metrics-apiserver) 开始。
* 对于外部指标,将使用 `external.metrics.k8s.io` API。可能由上面的自定义指标适配器提供。
* `--horizontal-pod-autoscaler-use-rest-clients` 参数设置为 `true` 或者不设置。
如果设置为 false则会切换到基于 Heapster 的自动扩缩,这个特性已经被弃用了。
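<!--
A quick, illustrative way to check which of these aggregated metrics APIs are registered in your cluster:
-->
下面是一个简单的示例命令,用于检查集群中已注册了哪些此类聚合指标 API:

```shell
# 列出已注册的聚合 API,过滤出各类指标 API
kubectl get apiservices | grep metrics
```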
@ -533,6 +544,236 @@ and [the walkthrough for using external metrics](/docs/tasks/run-application/hor
[使用自定义指标的教程](/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
和[使用外部指标的教程](/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects)。
<!--
## Support for configurable scaling behavior
Starting from
[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
the `v2beta2` API allows scaling behavior to be configured through the HPA
`behavior` field. Behaviors are specified separately for scaling up and down in
`scaleUp` or `scaleDown` section under the `behavior` field. A stabilization
window can be specified for both directions which prevents the flapping of the
number of the replicas in the scaling target. Similarly specifying scaling
policies controls the rate of change of replicas while scaling.
-->
## 支持可配置的扩缩行为 {#support-for-configurable-scaling-behaviour}
从 [v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
开始,`v2beta2` API 允许通过 HPA 的 `behavior` 字段配置扩缩行为。
`behavior` 字段中的 `scaleUp``scaleDown` 分别指定扩容和缩容行为。
可以为这两个方向分别指定稳定窗口,以防止扩缩目标中副本数量的波动。
类似地,指定扩缩策略可以控制扩缩时副本数的变化率。
<!--
### Scaling Policies
One or more scaling policies can be specified in the `behavior` section of the spec.
When multiple policies are specified the policy which allows the highest amount of
change is the policy which is selected by default. The following example shows this behavior
while scaling down:
-->
### 扩缩策略 {#scaling-policies}
在 spec 字段的 `behavior` 部分可以指定一个或多个扩缩策略。
当指定多个策略时,默认选择允许更改最多的策略。
下面的例子展示了缩容时的行为:
```yaml
behavior:
scaleDown:
policies:
- type: Pods
value: 4
periodSeconds: 60
- type: Percent
value: 10
periodSeconds: 60
```
<!--
When the number of pods is more than 40 the second policy will be used for scaling down.
For instance if there are 80 replicas and the target has to be scaled down to 10 replicas
then during the first step 8 replicas will be reduced. In the next iteration when the number
of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
the autoscaler controller the number of pods to be changed is re-calculated based on the number
of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
and 4 replicas will be reduced at a time.
-->
当 Pod 数量超过 40 个时,第二个策略将用于缩容。
例如,如果有 80 个副本,并且目标必须缩小到 10 个副本,那么在第一步中将减少 8 个副本。
在下一轮迭代中,当副本的数量为 72 时10% 的 Pod 数为 7.2,但是这个数字向上取整为 8。
在 autoscaler 控制器的每个循环中,将根据当前副本的数量重新计算要更改的 Pod 数量。
当副本数量低于 40 时,应用第一个策略 _Pods_ ,一次减少 4 个副本。
<!--
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
allows at most 10% of the current replicas to be scaled down in one minute.
-->
`periodSeconds` 表示策略在过去多长时间内必须保持有效。
第一个策略允许在一分钟内最多缩小 4 个副本。
第二个策略最多允许在一分钟内缩小当前副本的 10%。
<!--
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
direction. Setting the value to `Min` selects the policy that allows the
smallest change in the replica count. Setting the value to `Disabled` completely disables
scaling in that direction.
-->
可以通过为某个扩缩方向指定 `selectPolicy` 字段来更改策略选择。
将其值设置为 `Min` 时,将选择允许副本数变化最小的策略。
将该值设置为 `Disabled` 将完全禁用该方向的扩缩。
<!--
### Stabilization Window
The stabilization window is used to restrict the flapping of replicas when the metrics
used for scaling keep fluctuating. The stabilization window is used by the autoscaling
algorithm to consider the computed desired state from the past to prevent scaling. In
the following example the stabilization window is specified for `scaleDown`.
-->
### 稳定窗口 {#stabilization-window}
当用于扩缩的指标持续抖动时,使用稳定窗口来限制副本数上下振动。
自动扩缩算法使用稳定窗口来考虑过去计算的期望状态,以防止扩缩。
在下面的例子中,为 `scaleDown` 指定了稳定窗口:
```yaml
scaleDown:
stabilizationWindowSeconds: 300
```
<!--
When the metrics indicate that the target should be scaled down the algorithm looks
into previously computed desired states and uses the highest value from the specified
interval. In above example all desired states from the past 5 minutes will be considered.
-->
当指标显示目标应该缩容时,自动扩缩算法查看之前计算的期望状态,并使用指定时间间隔内的最大值。
在上面的例子中,过去 5 分钟的所有期望状态都会被考虑。
<!--
### Default Behavior
To use the custom scaling not all fields have to be specified. Only values which need to be
customized can be specified. These custom values are merged with default values. The default values
match the existing behavior in the HPA algorithm.
-->
### 默认行为 {#default-behavior}
要使用自定义扩缩,不必指定所有字段。
只有需要自定义的字段才需要指定。
这些自定义值与默认值合并。
默认值与 HPA 算法中的现有行为匹配。
```yaml
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 100
periodSeconds: 15
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
```
<!--
For scaling down the stabilization window is _300_ seconds(or the value of the
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
for scaling down, which allows 100% of the currently running replicas to be removed, which
means the scaling target can be scaled down to the minimum allowed replicas.
For scaling up there is no stabilization window. When the metrics indicate that the target should be
scaled up the target is scaled up immediately. There are 2 policies where 4 pods or 100% of the currently
running replicas will be added every 15 seconds till the HPA reaches its steady state.
-->
缩容时,稳定窗口的时间为 _300_ 秒(或是 `--horizontal-pod-autoscaler-downscale-stabilization` 参数设定的值)。
缩容只有一条策略,允许删除当前运行副本数的 100%,这意味着扩缩目标可以被缩小到允许的最小副本数。
对于扩容,没有稳定窗口。当指标显示目标应该扩容时,目标会立即扩容。
扩容有两条策略,每 15 秒添加 4 个 Pod 或当前运行副本数的 100%,直到 HPA 达到稳定状态。
<!--
### Example: change downscale stabilization window
To provide a custom downscale stabilization window of 1 minute, the following
behavior would be added to the HPA:
-->
### 示例:更改缩容稳定窗口
将下面的 behavior 配置添加到 HPA 中,可提供一个 1 分钟的自定义缩容稳定窗口:
```yaml
behavior:
scaleDown:
stabilizationWindowSeconds: 60
```
<!--
### Example: limit scale down rate
To limit the rate at which pods are removed by the HPA to 10% per minute, the
following behavior would be added to the HPA:
-->
### 示例:限制缩容速率
将下面的 behavior 配置添加到 HPA 中,可将 Pod 被 HPA 删除的速率限制为每分钟 10%
```yaml
behavior:
scaleDown:
policies:
- type: Percent
value: 10
periodSeconds: 60
```
<!--
To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down
policy with a fixed size of 5, and set `selectPolicy` to minimum. Setting `selectPolicy` to `Min` means
that the autoscaler chooses the policy that affects the smallest number of Pods:
-->
为了确保每分钟删除的 Pod 数不超过 5 个,可以添加第二个缩容策略,大小固定为 5并将 `selectPolicy` 设置为最小值。
`selectPolicy` 设置为 `Min` 意味着 autoscaler 会选择影响 Pod 数量最小的策略:
```yaml
behavior:
scaleDown:
policies:
- type: Percent
value: 10
periodSeconds: 60
- type: Pods
value: 5
periodSeconds: 60
selectPolicy: Min
```
<!--
### Example: disable scale down
The `selectPolicy` value of `Disabled` turns off scaling in the given direction.
So to prevent downscaling, the following policy would be used:
-->
### 示例:禁用缩容
将 `selectPolicy` 的值设置为 `Disabled` 将关闭给定方向的扩缩。
因此,要阻止缩容,可以使用以下策略:
```yaml
behavior:
scaleDown:
selectPolicy: Disabled
```
## {{% heading "whatsnext" %}}
<!--
898
content/zh/docs/test.md Normal file
View File
@ -0,0 +1,898 @@
---
title: 测试页面(中文版)
main_menu: false
---
<!--
title: Docs smoke test page
main_menu: false
-->
<!--
This page serves two purposes:
- Demonstrate how the Kubernetes documentation uses Markdown
- Provide a "smoke test" document we can use to test HTML, CSS, and template
changes that affect the overall documentation.
-->
本页面服务于两个目的:
- 展示 Kubernetes 中文版文档中应如何使用 Markdown
- 提供一个测试用文档,用来测试可能影响所有文档的 HTML、CSS 和模板变更
<!--
## Heading levels
The above heading is an H2. The page title renders as an H1. The following
sections show H3-H6.
### H3
This is in an H3 section.
#### H4
This is in an H4 section.
##### H5
This is in an H5 section.
###### H6
This is in an H6 section.
-->
## 标题级别
上面的标题是 H2 级别。页面标题Title会渲染为 H1。以下各节分别展示 H3-H6
的渲染结果。
### H3
此处为 H3 节内容。
#### H4
此处为 H4 节内容。
##### H5
此处为 H5 节内容。
###### H6
此处为 H6 节内容。
<!--
## Inline elements
Inline elements show up within the text of paragraph, list item, admonition, or
other block-level element.
-->
## 内联元素Inline elements
内联元素显示在段落文字、列表条目、提醒信息或者块级别元素之内。
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id est laborum.
### 内联文本风格
<!--
- **bold**
- _italic_
- ***bold italic***
- ~~strikethrough~~
- <u>underline</u>
- _<u>underline italic</u>_
- **<u>underline bold</u>**
- ***<u>underline bold italic</u>***
- `monospace text`
- **`monospace bold`**
-->
- **粗体字**
- _斜体字_
- ***粗斜体字***
- ~~删除线~~
- <u>下划线</u>
- _<u>带下划线的斜体</u>_
- ***<u>带下划线的粗斜体</u>***
- `monospace text` <- 等宽字体
- **`monospace bold`** <- 粗等宽字体
## 列表
<!--
Markdown doesn't have strict rules about how to process lists. When we moved
from Jekyll to Hugo, we broke some lists. To fix them, keep the following in
mind:
- Make sure you indent sub-list items **4 spaces** rather than the 2 that you
may be used to. Counter-intuitively, you need to indent block-level content
within a list item an extra 4 spaces too.
- To end a list and start another, you need a HTML comment block on a new line
between the lists, flush with the left-hand border. The first list won't end
otherwise, no matter how many blank lines you put between it and the second.
-->
Markdown 在如何处理列表方面没有严格的规则。在我们从 Jekyll 迁移到 Hugo 时,
我们遇到了一些问题。为了处理这些问题,请注意以下几点:
- 确保你将子列表的条目缩进**四个空格**而不是你可能熟悉的两个空格。
有一点是不那么直观的,你需要将列表中的块级别内容多缩进四个空格。
- 要结束一个列表并开始一个新的列表,你需要在两个列表之间添加一个 HTML 注释块,
并将其置于独立的一行,左边顶边对齐。否则前一个列表不会结束,无论你在它与
第二个列表之间放多少个空行。
<!--
### Bullet lists
- This is a list item
* This is another list item in the same list
- You can mix `-` and `*`
- To make a sub-item, indent two tabstops (4 spaces). **This is different
from Jekyll and Kramdown.**
- This is a sub-sub-item. Indent two more tabstops (4 more spaces).
- Another sub-item.
-->
### 项目符号列表
- 此为列表条目
* 此为另一列表条目,位于同一列表中
- 你可以将 `-``*` 混合使用
- 要开始子列表,缩进两个 TAB四个空格。**这一点与 Jekyll 和 Kramdown 不同。**
- 这是另一个子子条目。进一步多缩进两个空格。
- 另一个子条目
<!-- separate lists -->
<!--
- This is a new list. With Hugo, you need to use a HTML comment to separate two
consecutive lists. **The HTML comment needs to be at the left margin.**
- Bullet lists can have paragraphs or block elements within them.
Indent the content to be one tab stop beyond the text of the bullet
point. **This paragraph and the code block line up with the second `l` in
`Bullet` above.**
```bash
ls -l
```
- And a sub-list after some block-level content
-->
- 这是一个新的列表。使用 Hugo 时,你需要用一行 HTML 注释将两个紧挨着的列表分开。
**这里的 HTML 注释需要按左侧顶边对齐。**
- 项目符号列表可以中包含文字段落或块元素。
段落内容与第一行文字左侧对齐。
**此段文字和下面的代码段都与前一行中的“项”字对齐。**
```bash
ls -l
```
- 在块级内容之后还可以有子列表内容。
<!--
- A bullet list item can contain a numbered list.
1. Numbered sub-list item 1
2. Numbered sub-list item 2
-->
- 项目符号列表条目中还可以包含编号列表。
1. 编号子列表条目一
1. 编号子列表条目二
- 项目符号列表条目中包含编号列表的另一种形式(推荐形式)。让子列表的编号数字
与项目符号列表文字左对齐。
1. 编号子列表条目一,左侧编号与前一行的“项”字左对齐。
1. 编号子列表条目二,条目文字与数字之间多了一个空格。
<!--
### Numbered lists
1. This is a list item
2. This is another list item in the same list. The number you use in Markdown
does not necessarily correlate to the number in the final output. By
convention, we keep them in sync.
3. {{<note>}}
For single-digit numbered lists, using two spaces after the period makes
interior block-level content line up better along tab-stops.
{{</note>}}
-->
### 编号列表
1. 此为列表条目
1. 此为列表中的第二个条目。在 Markdown 源码中所给的编号数字与最终输出的数字
可能不同。建议在紧凑列表中编号都使用 1。如果条目之间有其他内容比如注释
掉的英文)存在,则需要显式给出编号。
2. {{<note>}}
对于单个数字的编号列表,在句点(`.`)后面加两个空格。这样有助于将列表的
内容更好地对齐。
{{</note>}}
<!-- separate lists -->
<!--
1. This is a new list. With Hugo, you need to use a HTML comment to separate
two consecutive lists. **The HTML comment needs to be at the left margin.**
2. Numbered lists can have paragraphs or block elements within them.
Just indent the content to be one tab stop beyond the text of the bullet
point. **This paragraph and the code block line up with the `m` in
`Numbered` above.**
```bash
ls -l
```
- And a sub-list after some block-level content. This is at the same
"level" as the paragraph and code block above, despite being indented
more.
-->
1. 这是一个新的列表。 使用 Hugo 时,你需要用 HTML 注释将两个紧挨着的列表分开。
**HTML 注释需要按左边顶边对齐。**
2. 编号列表条目中也可以包含额外的段落或者块元素。
后续段落应该按编号列表文字的第一行左侧对齐。
**此段落及下面的代码段都与本条目中的第一个字“编”对齐。**
```bash
ls -l
```
- 编号列表条目中可以在块级内容之后有子列表。子列表的符号项要与上层列表条目
文字左侧对齐。
### 中文译文的编号列表格式 1
<!--
1. English item 1
-->
1. 译文条目一
<!--
1. English item 2
-->
2. 译文条目二,由于前述原因,条目 2 与 1 之间存在注释行,如果此条目不显式给出
起始编号,会被 Hugo 当做两个独立的列表。
### 中文译文的编号列表格式 2
<!--
1. English item 1
-->
1. 译文条目一
<!-- chunk of English text -->
中文译文段落。
<!--
```shell
# list services
kubectl get svc
```
-->
带注释的代码段(**注意以上英文注释 `<!--``-->` 的缩进空格数**)。
```shell
# 列举服务
kubectl get svc
```
<!--
1. English item 2
-->
2. 译文条目二,由于前述原因,条目 2 与 1 之间存在注释行,如果此条目不显式给出
起始编号,会被 Hugo 当做两个独立的列表。
<!--
### Tab lists
Tab lists can be used to conditionally display content, e.g., when multiple
options must be documented that require distinct instructions or context.
-->
### 标签列表
标签列表可以用来有条件地显示内容。例如,当有多种选项可供选择时,每个选项
可能需要完全不同的指令或者上下文。
<!--
{{</* tabs name="tab_lists_example" */>}}
{{%/* tab name="Choose one..." */%}}
请注意这里对英文原文短代码的处理。目的是确保其中的 tabs 短代码失效。
由于 Hugo 的局限性,如果不作类似处理,这里的 tabs 尽管已经被包含在
HTML 注释块中,仍然会生效!
Please select an option.
{{%/* /tab */%}}
-->
{{< tabs name="tab_lists_example" >}}
{{% tab name="请选择..." %}}
请选择一个选项。
{{% /tab %}}
<!--
{{%/* tab name="Formatting tab lists" */%}}
-->
{{% tab name="在标签页中格式化列表" %}}
<!--
Tabs may also nest formatting styles.
1. Ordered
1. (Or unordered)
1. Lists
```bash
echo 'Tab lists may contain code blocks!'
```
-->
标签页中也可以包含嵌套的排版风格,其中的英文注释处理也同正文中
的处理基本一致。
1. 编号列表
1. (或者没有编号的)
1. 列表
```bash
echo '标签页里面也可以有代码段!'
```
{{% /tab %}}
<!--
{{%/* tab name="Nested headers" */%}}
-->
{{% tab name="嵌套的子标题" %}}
<!--
### Headers in Tab list
Nested header tags may also be included.
-->
### 在标签页中的子标题
标签页中也可以包含嵌套的子标题。
<!--
{{</* warning */>}}
Headers within tab lists will not appear in the Table of Contents.
{{</* /warning */>}}
-->
{{< warning >}}
标签页中的子标题不会在目录中出现。
{{< /warning >}}
{{% /tab %}}
{{< /tabs >}}
<!--
### Checklists
Checklists are technically bullet lists, but the bullets are suppressed by CSS.
- [ ] This is a checklist item
- [x] This is a selected checklist item
-->
### 检查项列表 Checklists
检查项列表本质上也是一种项目符号列表,只是这里的项目符号部分被 CSS 压制了。
- [ ] 此为第一个检查项
- [x] 此为被选中的检查项
<!--
## Code blocks
You can create code blocks two different ways by surrounding the code block with
three back-tick characters on lines before and after the code block. **Only use
back-ticks (code fences) for code blocks.** This allows you to specify the
language of the enclosed code, which enables syntax highlighting. It is also more
predictable than using indentation.
-->
## 代码段
你可以用两种方式来创建代码块。一种方式是在代码块之前和之后分别加上包含三个
反引号的独立行。**反引号应该仅用于代码段。**
用这种方式标记代码段时,你还可以指定所包含的代码的编程语言,从而启用语法加亮。
这种方式也比使用空格缩进的方式可预测性更好。
<!--
```
this is a code block created by back-ticks
```
-->
```
这是用反引号创建的代码段
```
<!--
The back-tick method has some advantages.
- It works nearly every time
- It is more compact when viewing the source code.
- It allows you to specify what language the code block is in, for syntax
highlighting.
- It has a definite ending. Sometimes, the indentation method breaks with
languages where spacing is significant, like Python or YAML.
-->
反引号标记代码段的方式有以下优点:
- 这种方式几乎总是能正确工作
- 在查看源代码时,内容相对紧凑
- 允许你指定代码块的编程语言,以便启用语法加亮
- 代码段的结束位置有明确标记。有时候,采用缩进空格的方式会使得一些对空格
很敏感的语言(如 Python、YAML很难处理。
<!--
To specify the language for the code block, put it directly after the first
grouping of back-ticks:
-->
要为代码段指定编程语言,可以在第一组反引号之后加上编程语言名称:
```bash
ls -l
```
<!--
Common languages used in Kubernetes documentation code blocks include:
- `bash` / `shell` (both work the same)
- `go`
- `json`
- `yaml`
- `xml`
- `none` (disables syntax highlighting for the block)
-->
Kubernetes 文档中代码块常用语言包括:
- `bash` / `shell` (二者几乎完全相同)
- `go`
- `json`
- `yaml`
- `xml`
- `none` (禁止对代码块执行语法加亮)
<!--
### Code blocks containing Hugo shortcodes
To show raw Hugo shortcodes as in the above example and prevent Hugo
from interpreting them, use C-style comments directly after the `<` and before
the `>` characters. The following example illustrates this (view the Markdown
source for this page).
-->
### 包含 Hugo 短代码的代码块
如果要像上面的例子一样显示 Hugo 短代码Shortcode不希望 Hugo 将其当做短代码来处理,
可以在 `<``>` 之间使用 C 语言风格的注释。
下面的示例展示如何实现这点(查看本页的 Markdown 源码):
```none
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
```
<!--
## Links
To format a link, put the link text inside square brackets, followed by the
link target in parentheses. [Link to Kubernetes.io](https://kubernetes.io/) or
[Relative link to Kubernetes.io](/)
You can also use HTML, but it is not preferred.
<a href="https://kubernetes.io/">Link to Kubernetes.io</a>
-->
## 链接
要格式化链接,将链接显示文本放在方括号中,后接用圆括号括起来的链接目标。
[指向 Kubernetes.io 的连接](https://kubernetes.io/) 或
[到 Kubernetes.io 的相对链接](/)。
你也可以使用 HTML但这种方式不是推荐的方式。
<a href="https://kubernetes.io/">到 Kubernetes.io 的链接</a>
### 中文链接
中文版本文档中的链接要注意以下两点:
- 指向 Kubernetes 文档的站内链接,需要在英文链接之前添加前缀 `/zh`
例如,原链接目标为 `/docs/foo/bar` 时,译文中的链接目标应为
`/zh/docs/foo/bar`。例如:
- 英文版本链接 [Kubernetes Components](/docs/concepts/overview/components/)
- 对应中文链接 [Kubernetes 组件](/zh/docs/concepts/overview/components/)
- 英文页面子标题会生成对应锚点Anchor例如子标题 `## Using objects` 会生成
对应标签 `#using-objects`。在翻译为中文之后,对应锚点可能会失效。对此,有
两种方法处理。假定译文中存在以下子标题:
```
<!--
## Clean up
You can do this ...
-->
## 清理现场
你可以这样 ...
```
并且在本页或其他页面有指向 `#clean-up` 的链接如下:
```
..., please refer to the [clean up](#clean-up) section.
```
第一种处理方法是将链接改为中文锚点,即将引用该子标题的文字全部改为中文锚点。
例如:
```
..., 请参考[清理工作](#清理现场)一节。
```
第二种方式(也是推荐的方式)是将原来可能生成的锚点(尽管在英文原文中未明确
给出)显式标记在译文的子标题上。
```
<!--
## Clean up
You can do this ...
-->
## 清理现场 {#clean-up}
你可以这样 ...
```
之所以优选第二种方式是因为可以避免文档站点中其他引用此子标题的链接失效。
<!--
## Images
To format an image, use similar syntax to [links](#links), but add a leading `!`
character. The square brackets contain the image's alt text. Try to always use
alt text so that people using screen readers can get some benefit from the
image.
-->
## 图片
要显示图片,可以使用与链接类似的语法(`[links](#links)`),不过要在整个链接
之前添加一个感叹号(`!`)。方括号中给出的是图片的替代文本。
请坚持为图片设定替代文本,这样使用屏幕阅读器的人也能够了解图片中包含的是什么。
![pencil icon](/images/pencil.png)
<!--
To specify extended attributes, such as width, title, caption, etc, use the
<a href="https://gohugo.io/content-management/shortcodes/#figure">figure shortcode</a>,
which is preferred to using a HTML `<img>` tag. Also, if you need the image to
also be a hyperlink, use the `link` attribute, rather than wrapping the whole
figure in Markdown link syntax as shown below.
-->
要设置扩展的属性,例如 width、title、caption 等等,可以使用
<a href="https://gohugo.io/content-management/shortcodes/#figure">figure</a>
短代码,而不是使用 HTML 的 `<img>` 标签。
此外,如果你需要让图片本身变成超链接,可以使用短代码的 `link` 属性,而不是
将整个图片放到 Markdown 的链接语法之内。下面是一个例子:
<!--
{{</* figure src="/static/images/pencil.png" title="Pencil icon" caption="Image used to illustrate the figure shortcode" width="200px" */>}}
-->
{{< figure src="/images/pencil.png" title="铅笔图标" caption="用来展示 figure 短代码的图片" width="200px" >}}
<!--
Even if you choose not to use the figure shortcode, an image can also be a link. This
time the pencil icon links to the Kubernetes website. Outer square brackets enclose
the entire image tag, and the link target is in the parentheses at the end.
[![pencil icon](/images/pencil.png)](https://kubernetes.io)
You can also use HTML for images, but it is not preferred.
<img src="/images/pencil.png" alt="pencil icon" />
-->
即使你不想使用 figure 短代码,图片也可以展示为链接。这里,铅笔图标指向
Kubernetes 网站。外层的方括号将整个 image 标签封装起来,链接目标在
末尾的圆括号之间给出。
[![pencil icon](/images/pencil.png)](https://kubernetes.io)
你也可以使用 HTML 来嵌入图片,不过这种方式是不推荐的。
<img src="/images/pencil.png" alt="铅笔图标" />
<!--
## Tables
Simple tables have one row per line, and columns are separated by `|`
characters. The header is separated from the body by cells containing nothing
but at least three `-` characters. For ease of maintenance, try to keep all the
cell separators even, even if you need to use extra space.
-->
## 表格
简单的表格中,每行数据占一行,各个列之间用 `|` 隔开。
表格的标题行与表格内容之间用独立的一行隔开,在这一行中每个单元格的内容
只有 `-` 字符,且至少三个。出于方便维护考虑,请尝试将各个单元格间的
分割线对齐,尽管这样意味着你需要多输入几个空格。
<!--
| Heading cell 1 | Heading cell 2 |
|----------------|----------------|
| Body cell 1 | Body cell 2 |
-->
| 标题单元格 1 | 标题单元格 2 |
|----------------|----------------|
| 内容单元格 1 | 内容单元格 2 |
<!--
The header is optional. Any text separated by `|` will render as a table.
-->
标题行是可选的。所有用 `|` 隔开的内容都会被渲染成表格。
<!--
Markdown tables have a hard time with block-level elements within cells, such as
list items, code blocks, or multiple paragraphs. For complex or very wide
tables, use HTML instead.
-->
Markdown 表格在处理块级元素方面还很笨拙。例如在单元格中嵌入列表条目、代码段、
或者在其中划分多个段落方面的能力都比较差。对于复杂的或者很宽的表格,可以使用
HTML。
<table>
<thead>
<tr>
<!-- th>Heading cell 1</th -->
<th>标题单元格 1</th>
<!-- th>Heading cell 2</th -->
<th>标题单元格 2</th>
</tr>
</thead>
<tbody>
<tr>
<!-- td>Body cell 1</td -->
<td>内容单元格 1</td>
<!-- td>Body cell 2</td -->
<td>内容单元格 2</td>
</tr>
</tbody>
</table>
<!--
## Visualizations with Mermaid
You can use [Mermaid JS](https://mermaidjs.github.io) visualizations.
The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
-->
## 使用 Mermaid 来可视化
你可以使用 [Mermaid JS](https://mermaidjs.github.io) 来进行可视化展示。
Mermaid JS 版本在 [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
中设置。
<!--
{{</* mermaid */>}}
graph TD;
A->B;
A->C;
B->D;
C->D;
{{</* mermaid */>}}
-->
```
{{</* mermaid */>}}
graph TD;
甲-->乙;
甲-->丙;
乙-->丁;
丙-->丁;
{{</*/ mermaid */>}}
```
<!--
Produces:
-->
会产生:
<!--
{{</* mermaid */>}}
graph TD;
A->B;
A->C;
B->D;
C->D;
{{</*/ mermaid */>}}
-->
{{< mermaid >}}
graph TD;
甲-->乙;
甲-->丙;
乙-->丁;
丙-->丁;
{{</ mermaid >}}
<!--
```
{{</* mermaid */>}}
sequenceDiagram
Alice ->> Bob: Hello Bob, how are you?
Bob->>John: How about you John?
Bob-x Alice: I am good thanks!
Bob-x John: I am good thanks!
Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.
Bob->Alice: Checking with John...
Alice->John: Yes... John, how are you?
{{</*/ mermaid */>}}
```
-->
```
{{</* mermaid */>}}
sequenceDiagram
张三 ->> 李四: 李四,锄禾日当午?
李四-->>王五: 王五,锄禾日当午?
李四--x 张三: 汗滴禾下土!
李四-x 王五: 汗滴禾下土!
Note right of 王五: 李四想啊想啊<br/>一直想啊想,太阳<br/>都下山了,他还没想出来<br/>,文本框都放不下了。
李四-->张三: 跑去问王五...
张三->王五: 好吧... 王五,白日依山尽?
{{</*/ mermaid */>}}
```
<!--
Produces:
-->
产生:
<!--
{{< mermaid >}}
sequenceDiagram
Alice ->> Bob: Hello Bob, how are you?
Bob->>John: How about you John?
Bob-x Alice: I am good thanks!
Bob-x John: I am good thanks!
Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.
Bob->Alice: Checking with John...
Alice->John: Yes... John, how are you?
{{</ mermaid >}}
-->
{{< mermaid >}}
sequenceDiagram
张三 ->> 李四: 李四,锄禾日当午?
李四-->>王五: 王五,锄禾日当午?
李四--x 张三: 汗滴禾下土!
李四-x 王五: 汗滴禾下土!
Note right of 王五: 李四想啊想啊一直想,<br/>想到太阳都下山了,<br/>他还没想出来,<br/>文本框都放不下了。
李四-->张三: 跑去问王五...
张三->王五: 好吧... 王五,白日依山尽?
{{</ mermaid >}}
<!--
<br>More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs.
-->
<br>在官方网站上有更多的[示例](https://mermaid-js.github.io/mermaid/#/examples)。
<!--
## Sidebars and Admonitions
Sidebars and admonitions provide ways to add visual importance to text. Use
them sparingly.
-->
## 侧边栏和提醒框
侧边栏和提醒框可以为文本提供直观的重要性强调效果,可以偶尔一用。
<!--
### Sidebars
A sidebar offsets text visually, but without the visual prominence of
[admonitions](#admonitions).
-->
### 侧边栏Sidebar
侧边栏在视觉上将文字与正文区分开来,只是其显示效果不像[提醒框](#admonitions)那么显眼。
<!--
> This is a sidebar.
>
> You can have paragraphs and block-level elements within a sidebar.
>
> You can even have code blocks.
>
> ```bash
> sudo dmesg
> ```
-->
> 此为侧边栏。
>
> 你可以在侧边栏内排版段落和块级元素。
>
> 你甚至可以在其中包含代码块。
>
> ```bash
> sudo dmesg
> ```
<!--
### Admonitions
Admonitions (notes, warnings, etc) use Hugo shortcodes.
-->
### 提醒框 {#admonitions}
提醒框(说明、警告等等)都是用 Hugo 短代码的形式展现。
<!--
{{< note >}}
Notes catch the reader's attention without a sense of urgency.
You can have multiple paragraphs and block-level elements inside an admonition.
| Or | a | table |
{{< /note >}}
-->
{{< note >}}
说明信息用来引起读者的注意,但不过分强调其紧迫性。
你可以在提醒框内包含多个段落和块级元素。
| 甚至 | 包含 | 表格 |
{{< /note >}}
<!--
{{< caution >}}
The reader should proceed with caution.
{{< /caution >}}
-->
{{< caution >}}
读者继续此操作时要格外小心。
{{< /caution >}}
<!--
{{< warning >}}
Warnings point out something that could cause harm if ignored.
{{< /warning >}}
-->
{{< warning >}}
警告信息试图为读者指出一些不应忽略的、可能引发问题的事情。
{{< /warning >}}
注意,在较老的 Hugo 版本中,直接将 `note`、`warning` 或 `caution` 短代码
括入 HTML 注释当中是有问题的。这些短代码仍然会起作用。目前,在 0.70.0
以上版本中似乎已经修复了这一问题。
<!--
## Includes
To add shortcodes to includes.
-->
## 包含其他页面
要包含其他页面,可使用短代码。
{{< note >}}
{{< include "task-tutorial-prereqs.md" >}}
{{< /note >}}
<!--
## Katacoda Embedded Live Environment
-->
## 嵌入的 Katacoda 环境
{{< kat-button >}}
View File
@ -0,0 +1,10 @@
---
title: "示例:配置 java 微服务"
weight: 10
---
<!--
---
title: "Example: Configuring a Java Microservice"
weight: 10
---
-->
View File
@ -0,0 +1,37 @@
---
title: "互动教程 - 配置 java 微服务"
weight: 20
---
<!--
---
title: "Interactive Tutorial - Configuring a Java Microservice"
weight: 20
---
-->
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__alert">
<!-- To interact with the Terminal, please use the desktop/tablet version -->
如需要与终端交互,请使用台式机/平板电脑版
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/9" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
</div>
</main>
</div>
</body>
</html>
View File
@ -0,0 +1,93 @@
---
title: "使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置"
content_type: tutorial
weight: 10
---
<!--
---
title: "Externalizing config using MicroProfile, ConfigMaps and Secrets"
content_type: tutorial
weight: 10
---
-->
<!-- overview -->
<!--
In this tutorial you will learn how and why to externalize your microservices configuration. Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config.
-->
在本教程中,你会学到如何以及为什么要实现外部化微服务应用配置。
具体来说,你将学习如何使用 Kubernetes ConfigMaps 和 Secrets 设置环境变量,
然后在 MicroProfile config 中使用它们。
## {{% heading "prerequisites" %}}
<!--
### Creating Kubernetes ConfigMaps & Secrets
There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the tutorial, you will learn how to use the latter two for setting your environment variables whose values will be injected into your microservices. One of the benefits for using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for the different containers.
-->
### 创建 Kubernetes ConfigMaps 和 Secrets {#creating-kubernetes-configmaps-secrets}
在 Kubernetes 中,为 Docker 容器设置环境变量有几种不同的方式,比如:
Dockerfile、kubernetes.yml、Kubernetes ConfigMaps、和 Kubernetes Secrets。
在本教程中,你将学到怎么用后两个方式去设置你的环境变量,而环境变量的值将注入到你的微服务里。
使用 ConfigMaps 和 Secrets 的一个好处是它们能在多个容器间复用,
比如赋值给不同容器中的不同环境变量。
<!--
ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information regarding ConfigMaps, you can find the documentation [here].
Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they're intended for confidential/sensitive information and are stored using Base64 encoding. This makes secrets the appropriate choice for storing such things as credentials, keys, and tokens, the former of which you'll do in the Interactive Tutorial. For more information on Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/).
-->
ConfigMaps 是存储非机密键值对的 API 对象。
在互动教程中,你会学到如何用 ConfigMap 来保存应用名字。
ConfigMap 的更多信息,你可以在[这里](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)找到文档。
Secrets 尽管也用来存储键值对,但区别于 ConfigMaps 的是:它针对机密/敏感数据,且存储格式为 Base64 编码。
Secrets 的这种特性使得它适合于存储凭据、密钥、令牌等内容,其中凭据的存储你将在互动教程中实现。
Secrets 的更多信息,你可以在[这里](/zh/docs/concepts/configuration/secret/)找到文档。
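As a rough sketch of the pattern described above, the manifests below show a ConfigMap and a Secret whose values are injected into a container as environment variables; every name here (`system-app-name`, `sys-app-credentials`, the key names, and the image) is an illustrative assumption rather than a value taken from the interactive tutorial:

```yaml
# ConfigMap holding non-confidential configuration (for example an app name).
apiVersion: v1
kind: ConfigMap
metadata:
  name: system-app-name          # assumed name
data:
  APP_NAME: my-system
---
# Secret holding confidential values; data is base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: sys-app-credentials      # assumed name
type: Opaque
data:
  USERNAME: Ym9i                 # base64 for "bob"
  PASSWORD: cGFzc3dvcmQ=         # base64 for "password"
---
# Pod consuming both objects as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: system-pod
spec:
  containers:
  - name: system
    image: example/system:1.0    # placeholder image
    env:
    - name: APP_NAME
      valueFrom:
        configMapKeyRef:
          name: system-app-name
          key: APP_NAME
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: sys-app-credentials
          key: USERNAME
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: sys-app-credentials
          key: PASSWORD
```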
<!--
### Externalizing Config from Code
Externalized application configuration is useful because configuration usually changes depending on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices.
-->
### 从代码外部化配置
外部化应用配置之所以有用处,是因为配置常常根据环境的不同而变化。
为了实现此功能,我们用到了 Java 的上下文和依赖注入Contexts and Dependency Injection, CDI以及 MicroProfile Config。
MicroProfile Config 是 MicroProfile 的一项功能特性,
而 MicroProfile 是一组用于开发和部署云原生微服务的开放 Java 技术。
<!--
CDI provides a standard dependency injection capability enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. Based on the source's defined priority, the properties are automatically combined into a single set of properties that the application can access via an API. Together, CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and get injected into your application code.
Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile compatible runtime could be used instead.
-->
CDI 提供一套标准的依赖注入能力,使得应用程序可以由相互协作的、松耦合的 beans 组装而成。
MicroProfile Config 为 app 和微服务提供从各种来源,比如应用、运行时、环境,获取配置参数的标准方法。
基于来源所定义的优先级,这些属性会自动合并为一组属性,应用可以通过 API 访问它们。
CDI & MicroProfile 都会被用在互动教程中,
用来从 Kubernetes ConfigMaps 和 Secrets 获得外部提供的属性,并注入应用程序代码中。
很多开源框架和运行时实现并支持 MicroProfile Config。
在整个互动教程中,你将使用 Open Liberty它是一个用于构建和运行云原生应用与微服务的灵活的开源 Java 运行时。
当然,任何兼容 MicroProfile 的运行时都可以用来替代它。
## {{% heading "objectives" %}}
<!--
* Create a Kubernetes ConfigMap and Secret
* Inject microservice configuration using MicroProfile Config
-->
* 创建 Kubernetes ConfigMap 和 Secret
* 使用 MicroProfile Config 注入微服务配置
<!-- lessoncontent -->
<!--
## Example: Externalizing config using MicroProfile, ConfigMaps and Secrets
### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/)
-->
## 示例:使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置
### [启动互动教程](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/)
View File
@ -91,7 +91,7 @@ weight: 10
<div class="col-md-8">
<!-- <p>You can create and manage a Deployment by using the Kubernetes command line interface, <b>Kubectl</b>. Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common Kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.</p> -->
<p>您可以使用 Kubernetes 命令行界面创建和管理 Deployment<b>Kubectl</b>.Kubectl 使用 Kubernetes API 与集群进行交互。在本单元中,您将学习创建在 Kubernetes 集群上运行应用程序的 Deployment 所需的最常见的 Kubectl 命令。</p>
<p>您可以使用 Kubernetes 命令行界面 <b>Kubectl</b> 创建和管理 Deployment。Kubectl 使用 Kubernetes API 与集群进行交互。在本单元中,您将学习创建在 Kubernetes 集群上运行应用程序的 Deployment 所需的最常见的 Kubectl 命令。</p>
<!-- <p>When you create a Deployment, you'll need to specify the container image for your application and the number of replicas that you want to run. You can change that information later by updating your Deployment; Modules <a href="/docs/tutorials/kubernetes-basics/scale-intro/">5</a> and <a href="/docs/tutorials/kubernetes-basics/update-intro/">6</a> of the bootcamp discuss how you can scale and update your Deployments.</p> -->
<p>创建 Deployment 时,您需要指定应用程序的容器映像以及要运行的副本数。您可以稍后通过更新 Deployment 来更改该信息; 模块 <a href="/zh/docs/tutorials/kubernetes-basics/scale-intro/">5</a><a href="/zh/docs/tutorials/kubernetes-basics/update-intro/">6</a> 讨论了如何扩展和更新 Deployments。</p>
@ -133,4 +133,4 @@ weight: 10
</div>
</body>
</html>
</html>

View File
<!--
<p>When you created a Deployment in Module <a href="/docs/tutorials/kubernetes-basics/deploy-intro/">2</a>, Kubernetes created a <b>Pod</b> to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. Those resources include:</p>
-->
<p>在模块 <a href="/docs/tutorials/kubernetes-basics/deploy-intro/">2</a>创建 Deployment 时, Kubernetes 添加了一个 <b>Pod</b> 来托管你的应用实例。Pod 是 Kubernetes 抽象出来的,表示一组一个或多个应用程序容器(如 Docker以及这些容器的一些共享资源。这些资源包括:</p>
<p>在模块 <a href="/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>创建 Deployment 时, Kubernetes 添加了一个 <b>Pod</b> 来托管你的应用实例。Pod 是 Kubernetes 抽象出来的,表示一组一个或多个应用程序容器(如 Docker以及这些容器的一些共享资源。这些资源包括:</p>
<!--
<ul>
<li>Shared storage, as Volumes</li>
@ -169,7 +169,7 @@ weight: 10
<!--
<p>In Module <a href="/docs/tutorials/kubernetes-basics/deploy/deploy-intro/">2</a>, you used Kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The most common operations can be done with the following kubectl commands:</p>
-->
<p>在模块 <a href="/docs/tutorials/kubernetes-basics/deploy/deploy-intro/">2</a>,您使用了 Kubectl 命令行界面。 您将继续在第3单元中使用它来获取有关已部署的应用程序及其环境的信息。 最常见的操作可以使用以下 kubectl 命令完成:</p>
<p>在模块 <a href="/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>,您使用了 Kubectl 命令行界面。 您将继续在第3单元中使用它来获取有关已部署的应用程序及其环境的信息。 最常见的操作可以使用以下 kubectl 命令完成:</p>
<!--
<ul>
<li><b>kubectl get</b> - list resources</li>
View File
@ -35,7 +35,7 @@ weight: 10
<h3>Kubernetes Service 总览</h3>
<!-- <p>Kubernetes <a href="/docs/concepts/workloads/pods/pod-overview/">Pods</a> are mortal. Pods in fact have a <a href="/docs/concepts/workloads/pods/pod-lifecycle/">lifecycle</a>. When a worker node dies, the Pods running on the Node are also lost. A <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.</p>-->
<p> Kubernetes <a href="/zh/docs/concepts/workloads/pods/pod-overview/">Pod</a> 是转瞬即逝的。 Pod 实际上拥有 <a href="/zh/docs/concepts/workloads/pods/pod-lifecycle/">生命周期</a>。 当一个工作 Node 挂掉后, 在 Node 上运行的 Pod 也会消亡。 <a href="/zh/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> 会自动地通过创建新的 Pod 驱动集群回到目标状态,以保证应用程序正常运行。 换一个例子考虑一个具有3个副本数的用作图像处理的后端程序。这些副本是可替换的; 前端系统不应该关心后端副本,即使 Pod 丢失或重新创建。也就是说Kubernetes 集群中的每个 Pod (即使是在同一个 Node 上的 Pod )都有一个惟一的 IP 地址,因此需要一种方法自动协调 Pod 之间的变更,以便应用程序保持运行。</p>
<p> Kubernetes <a href="/zh/docs/concepts/workloads/pods/">Pod</a> 是转瞬即逝的。 Pod 实际上拥有 <a href="/zh/docs/concepts/workloads/pods/pod-lifecycle/">生命周期</a>。 当一个工作 Node 挂掉后, 在 Node 上运行的 Pod 也会消亡。 <a href="/zh/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> 会自动地通过创建新的 Pod 驱动集群回到目标状态,以保证应用程序正常运行。 换一个例子考虑一个具有3个副本数的用作图像处理的后端程序。这些副本是可替换的; 前端系统不应该关心后端副本,即使 Pod 丢失或重新创建。也就是说Kubernetes 集群中的每个 Pod (即使是在同一个 Node 上的 Pod )都有一个惟一的 IP 地址,因此需要一种方法自动协调 Pod 之间的变更,以便应用程序保持运行。</p>
<!-- <p>A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(preferred)</a> or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a <i>LabelSelector</i> (see below for why you might want a Service without including <code>selector</code> in the spec).</p>-->
<p> Kubernetes 中的服务(Service)是一种抽象概念,它定义了 Pod 的逻辑集和访问 Pod 的协议。Service 使从属 Pod 之间的松耦合成为可能。 和其他 Kubernetes 对象一样, Service 用 YAML <a href="/zh/docs/concepts/configuration/overview/#general-configuration-tips">(更推荐)</a> 或者 JSON 来定义. Service 下的一组 Pod 通常由 <i>LabelSelector</i> (请参阅下面的说明为什么您可能想要一个 spec 中不包含<code>selector</code>的服务)来标记。</p>
View File
@ -33,7 +33,7 @@ weight: 10
<!-- <p>In the previous modules we created a <a href="/docs/concepts/workloads/controllers/deployment/"> Deployment</a>,
and then exposed it publicly via a <a href="/docs/concepts/services-networking/service/">Service</a>.
The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.</p> -->
<p>在之前的模块中,我们创建了一个 <a href="/docs/user-guide/deployments/"> Deployment</a>,然后通过 <a href="/docs/user-guide/services/">Service</a>让其可以开放访问。Deployment 仅为跑这个应用程序创建了一个 Pod。 当流量增加时,我们需要扩容应用程序满足用户需求。</p>
<p>在之前的模块中,我们创建了一个 <a href="/zh/docs/concepts/workloads/controllers/deployment/"> Deployment</a>,然后通过 <a href="/zh/docs/concepts/services-networking/service/">Service</a>让其可以开放访问。Deployment 仅为跑这个应用程序创建了一个 Pod。 当流量增加时,我们需要扩容应用程序满足用户需求。</p>
<!-- <p><b>Scaling</b> is accomplished by changing the number of replicas in a Deployment</p> -->
<p><b>扩缩</b> 是通过改变 Deployment 中的副本数量来实现的。</p>
@ -99,7 +99,7 @@ weight: 10
<!-- <p>Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling in will reduce the number of Pods to the new desired state. Kubernetes also supports
<a href="/docs/user-guide/horizontal-pod-autoscaling/">autoscaling</a> of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.</p> -->
<p>扩展 Deployment 将创建新的 Pods并将资源调度请求分配到有可用资源的节点上收缩 会将 Pods 数量减少至所需的状态。Kubernetes 还支持 Pods 的<a href="/docs/user-guide/horizontal-pod-autoscaling/">自动缩放</a>,但这并不在本教程的讨论范围内。将 Pods 数量收缩到0也是可以的但这会终止 Deployment 上所有已经部署的 Pods。</p>
<p>扩展 Deployment 将创建新的 Pods并将资源调度请求分配到有可用资源的节点上收缩 会将 Pods 数量减少至所需的状态。Kubernetes 还支持 Pods 的<a href="/zh/docs/tasks/run-application/horizontal-pod-autoscale/">自动缩放</a>,但这并不在本教程的讨论范围内。将 Pods 数量收缩到0也是可以的但这会终止 Deployment 上所有已经部署的 Pods。</p>
<!-- <p>Running multiple instances of an application will require a way to distribute the traffic to all of them. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment.
Services will monitor continuously the running Pods using endpoints, to ensure the traffic is sent only to available Pods.</p> -->
File diff suppressed because it is too large
View File
@ -1,7 +1,7 @@
# i18n strings for the English (main) site.
# NOTE: Please keep the entries in alphabetical order when editing
[announcement_title]
other = "<img src=\"images/kccnc-na-virtual-2020-white.svg\" style=\"float: right; height: 80px;\" /><a href=\"https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kuberntes.io&utm_medium=search&utm_campaign=KC_CNC_Virtual\">KubeCon + CloudNativeCon NA 2020</a> <em>virtual</em>."
other = "<img src=\"/images/kccnc-na-virtual-2020-white.svg\" style=\"float: right; height: 80px;\" /><a href=\"https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kuberntes.io&utm_medium=search&utm_campaign=KC_CNC_Virtual\">KubeCon + CloudNativeCon NA 2020</a> <em>virtual</em>."
[announcement_message]
other = "4 days of incredible opportunities to collaborate, learn, and share with the entire community!<br />November 17 20 2020"
View File
@ -58,9 +58,9 @@ document.addEventListener("DOMContentLoaded", function () {
{{- end -}}
<div id="frameHolder">
{{ if ( .Get "category" ) }}
<iframe frameborder="0" id="landscape" scrolling="no" src="https://landscape.cncf.io/category={{ .Get "category" }}&format=logo-mode&grouping=category&embed=yes" style="width: 1px; min-width: 100%"></iframe>
<iframe frameborder="0" id="landscape" scrolling="no" src="https://landscape.cncf.io/category={{ .Get "category" }}&format=card-mode&grouping=category&embed=yes" style="width: 1px; min-width: 100%"></iframe>
{{ else }}
<iframe frameborder="0" id="landscape" scrolling="no" src="https://landscape.cncf.io/format=logo-mode;embed=yes" style="width: 1px; min-width: 100%" title="CNCF Landscape"></iframe>
<iframe frameborder="0" id="landscape" scrolling="no" src="https://landscape.cncf.io/format=card-mode;embed=yes" style="width: 1px; min-width: 100%" title="CNCF Landscape"></iframe>
{{ end }}
<script src="https://landscape.cncf.io/iframeResizer.js"></script>
</div>
View File
@ -241,14 +241,13 @@ a {
}
.fullbutton {
display: block;
display: inline-block;
margin: auto;
margin-top: 2rem;
width: 156px;
background-color: #0662EE;
color: white;
font-size: 18px;
padding: 2%;
padding: 2% 2.5%;
letter-spacing: 0.07em;
font-weight: bold;
View File
@ -1,29 +1,28 @@
#hero {
body.cid-training body.cid-training #hero {
text-align: left;
padding-bottom: 20px;
}
.section {
body.cid-training body.cid-training .section {
clear: both;
padding: 0px;
margin-bottom: 2em;
width: 100%;
}
section.call-to-action {
body.cid-training section.call-to-action {
color: #ffffff;
background-color: #3371e3;
}
section.call-to-action .main-section {
body.cid-training section.call-to-action .main-section {
max-width: initial;
}
section.call-to-action .main-section > div.call-to-action {
body.cid-training section.call-to-action .main-section > div.call-to-action {
display: flex;
flex-direction: row;
flex-wrap: wrap;
flex-basis: auto;
justify-content: center;
align-items: center;
margin: initial;
@ -31,54 +30,57 @@ section.call-to-action .main-section > div.call-to-action {
padding-bottom: 5rem;
}
section.call-to-action .main-section > div.call-to-action > div {
order: 1;
body.cid-training section.call-to-action .main-section > div.call-to-action > div {
padding: 20px;
}
section.call-to-action .main-section .cta-text {
text-align: center;
flex: 1 1 auto;
max-width: 50vw;
body.cid-training section.call-to-action .main-section .cta-text {
width: 100%;
flex-basis: 100%;
}
/* if max() and min() are available, use them */
section.call-to-action .main-section .cta-text {
body.cid-training section.call-to-action .main-section .cta-text > * {
margin-left: auto;
margin-right: auto;
text-align: center;
/* if max() and min() are available, use them */
min-width: min(20em, 50vw);
max-width: min(1000px, 50vw);
}
section.call-to-action .main-section .cta-image.cta-image-before {
order: 0
body.cid-training section.call-to-action .main-section .cta-image {
max-width: max(20vw,150px);
flex-basis: auto;
}
section.call-to-action .main-section .cta-image.cta-image-after {
order: 2
}
section.call-to-action .main-section .cta-image > img {
body.cid-training section.call-to-action .main-section .cta-image > img {
display: block;
width: 150px;
margin: auto;
}
.col {
body.cid-training #logo-cks {
order: 2; /* central */
}
body.cid-training .col {
display: flex;
flex-direction: row;
float: left;
margin: 1% 0 1% 1.6%;
}
.col:first-child { margin-left: 0; }
body.cid-training .col:first-child { margin-left: 0; }
.col-container {
body.cid-training .col-container {
display: flex; /* Make the container element behave like a table */
flex-direction: row;
width: 100%; /* Set full-width to expand the whole page */
padding-bottom: 30px;
}
.col-nav {
body.cid-training .col-nav {
display: flex;
flex-grow: 1;
width: 18%;
@ -87,44 +89,62 @@ section.call-to-action .main-section .cta-image > img {
border: 5px solid white;
}
body.cid-training #get-certified .col-nav {
flex-flow: column nowrap;
justify-content: space-between;
}
body.cid-training #get-certified .col-nav > * {
flex-grow: 0;
flex-shrink: 0;
flex-basis: auto;
}
body.cid-training #get-certified .col-nav > br {
flex-grow: 1;
display: block;
min-height: 2em;
}
body.cid-training #get-certified .col-nav a.button {
flex: initial;
width: auto;
margin-left: auto;
margin-right: auto;
}
@media only screen and (max-width: 840px) {
section.call-to-action .main-section .cta-image.cta-image-before, section.call-to-action .main-section .cta-image.cta-image-after {
order: 0;
}
section.call-to-action .main-section .cta-image > img {
body.cid-training section.call-to-action .main-section .cta-image > img {
margin: 0;
}
section.call-to-action .main-section .cta-image > img {
body.cid-training section.call-to-action .main-section .cta-image > img {
width: 7rem;
}
section.call-to-action .main-section > div.call-to-action > div {
body.cid-training section.call-to-action .main-section > div.call-to-action > div {
padding: 0 2rem 0 2rem;
}
section.call-to-action .main-section .cta-text {
flex: 2 0 75vw;
max-width: initial;
padding: 5vw 0 0 0;
}
}
/* GO FULL WIDTH AT LESS THAN 480 PIXELS */
@media only screen and (max-width: 480px) {
.col { margin: 1% 0 1% 0%;}
.col-container { flex-direction: column; }
body.cid-training .col { margin: 1% 0 1% 0%;}
body.cid-training .col-container { flex-direction: column; }
body.cid-training #logo-cks { order: initial; }
}
@media only screen and (max-width: 650px) {
.col-nav {
body.cid-training .col-nav {
display: block;
width: 100%;
}
.col-container { flex-direction: column; }
body.cid-training .col-container { flex-direction: column; }
}
.button{
body.cid-training .button {
max-width: 100%;
box-sizing: border-box;
margin: 1em 0;
@ -136,47 +156,48 @@ section.call-to-action .main-section .cta-image > img {
font-size: 16px;
background-color: #3371e3;
text-decoration: none;
text-align: center;
}
h5 {
body.cid-training h5 {
font-size: 16px;
font-weight: 500;
line-height: 1.5em;
margin-bottom: 2em;
}
.white-text {
body.cid-training .white-text {
color: #fff;
}
.padded {
body.cid-training .padded {
padding-top: 100px;
padding-bottom: 100px;
}
.blue-bg {
body.cid-training .blue-bg {
background-color: #3371e3;
}
.lighter-gray-bg {
body.cid-training .lighter-gray-bg {
background-color: #f4f4f4;
}
.two-thirds-centered {
body.cid-training .two-thirds-centered {
width: 66%;
max-width: 950px;
margin-left: auto;
margin-right: auto;
}
.landscape-section {
body.cid-training .landscape-section {
margin-left: auto;
margin-right: auto;
max-width: 1200px;
width: 100%;
}
#landscape {
body.cid-training #landscape {
opacity: 1;
visibility: visible;
overflow: hidden;
File diff suppressed because one or more lines are too long
After
Width:  |  Height:  |  Size: 30 KiB