delete original English text

This commit is contained in:
Jimmy Song 2017-09-30 17:57:03 +08:00
parent e9b11a71ab
commit 75b615c055
5 changed files with 0 additions and 1227 deletions

View File

@ -8,18 +8,6 @@ cn-reviewers:
{% capture overview %}
<!--
When you specify a [Pod](/docs/user-guide/pods), you can optionally specify how
much CPU and memory (RAM) each Container needs. When Containers have resource
requests specified, the scheduler can make better decisions about which nodes to
place Pods on. And when Containers have their limits specified, contention for
resources on a node can be handled in a specified manner. For more details about
the difference between requests and limits, see
[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md).
-->
When you specify a [Pod](/docs/user-guide/pods), you can optionally specify how much CPU and memory (RAM) each Container needs. When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. For more details about the difference between requests and limits, see [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md).
{% endcapture %}
@ -27,47 +15,12 @@ the difference between requests and limits, see
{% capture body %}
<!--
## Resource types
*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
CPU is specified in units of cores, and memory is specified in units of bytes.
CPU and memory are collectively referred to as *compute resources*, or just
*resources*. Compute
resources are measurable quantities that can be requested, allocated, and
consumed. They are distinct from
[API resources](/docs/api/). API resources, such as Pods and
[Services](/docs/user-guide/services) are objects that can be read and modified
through the Kubernetes API server.
-->
## Resource types
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified in units of cores, and memory is specified in units of bytes.
CPU and memory are collectively referred to as *compute resources*, or just *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/api/). API resources, such as Pods and [Services](/docs/user-guide/services), are objects that can be read and modified through the Kubernetes API server.
<!--
## Resource requests and limits of Pod and Container
Each Container of a Pod can specify one or more of the following:
* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`
Although requests and limits can only be specified on individual Containers, it
is convenient to talk about Pod resource requests and limits. A
*Pod resource request/limit* for a particular resource type is the sum of the
resource requests/limits of that type for each Container in the Pod.
-->
## Resource requests and limits of Pod and Container
Each Container of a Pod can specify one or more of the following:
@ -79,32 +32,6 @@ Each Container of a Pod can specify one or more of the following:
Although requests and limits can only be specified on individual Containers, it is convenient to talk about Pod resource requests and limits. A *Pod resource request/limit* for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod.
<!--
## Meaning of CPU
Limits and requests for CPU resources are measured in *cpu* units.
One cpu, in Kubernetes, is equivalent to:
- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading
Fractional requests are allowed. A Container with
`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much
CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the
expression `100m`, which can be read as "one hundred millicpu". Some people say
"one hundred millicores", and this is understood to mean the same thing. A
request with a decimal point, like `0.1`, is converted to `100m` by the API, and
precision finer than `1m` is not allowed. For this reason, the form `100m` might
be preferred.
CPU is always requested as an absolute quantity, never as a relative quantity;
0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
-->
## Meaning of CPU
Limits and requests for CPU resources are measured in *cpu* units.
@ -120,17 +47,6 @@ One cpu, in Kubernetes, is equivalent to:
CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
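As a minimal illustration of the notation (the container name and image are placeholders, not taken from this page's examples):

```yaml
containers:
- name: app
  image: nginx
  resources:
    requests:
      cpu: 100m   # equivalent to 0.1; the API converts 0.1 to 100m
```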
<!--
## Meaning of memory
Limits and requests for `memory` are measured in bytes. You can express memory as
a plain integer or as a fixed-point integer using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value:
-->
## Meaning of memory
Limits and requests for `memory` are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:
@ -139,16 +55,6 @@ Mi, Ki. For example, the following represent roughly the same value:
128974848, 129e6, 129M, 123Mi
```
<!--
Here's an example.
The following Pod has two Containers. Each Container has a request of 0.25 cpu
and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
MiB of memory, and a limit of 1 cpu and 256MiB of memory.
-->
Here's an example.
The following Pod has two Containers. Each Container has a request of 0.25 cpu and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5 cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 MiB of memory, and a limit of 1 cpu and 256MiB of memory.
@ -180,71 +86,10 @@ spec:
cpu: "500m"
```
<!--
## How Pods with resource requests are scheduled
When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for Pods. The scheduler ensures that,
for each resource type, the sum of the resource requests of the scheduled
Containers is less than the capacity of the node. Note that although actual memory
or CPU resource usage on nodes is very low, the scheduler still refuses to place
a Pod on a node if the capacity check fails. This protects against a resource
shortage on a node when resource usage later increases, for example, during a
daily peak in request rate.
-->
## How Pods with resource requests are scheduled
When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that even if actual memory or CPU resource usage on a node is very low, the scheduler still refuses to place a Pod on the node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
<!--
## How Pods with resource limits are run
When the kubelet starts a Container of a Pod, it passes the CPU and memory limits
to the container runtime.
When using Docker:
- The `spec.containers[].resources.requests.cpu` is converted to its core value,
which is potentially fractional, and multiplied by 1024. The greater of this number
or 2 is used as the value of the
[`--cpu-shares`](https://docs.docker.com/engine/reference/run/#/cpu-share-constraint)
flag in the `docker run` command.
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value,
multiplied by 100000, and then divided by 1000. This number is used as the value
of the [`--cpu-quota`](https://docs.docker.com/engine/reference/run/#/cpu-quota-constraint)
flag in the `docker run` command. The [`--cpu-period`] flag is set to 100000,
which represents the default 100ms period for measuring quota usage. The
kubelet enforces cpu limits if it is started with the
[`--cpu-cfs-quota`] flag set to true. As of Kubernetes version 1.2, this flag
defaults to true.
- The `spec.containers[].resources.limits.memory` is converted to an integer, and
used as the value of the
[`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
flag in the `docker run` command.
If a Container exceeds its memory limit, it might be terminated. If it is
restartable, the kubelet will restart it, as with any other type of runtime
failure.
If a Container exceeds its memory request, it is likely that its Pod will
be evicted whenever the node runs out of memory.
A Container might or might not be allowed to exceed its CPU limit for extended
periods of time. However, it will not be killed for excessive CPU usage.
To determine whether a Container cannot be scheduled or is being killed due to
resource limits, see the
[Troubleshooting](#troubleshooting) section.
-->
## How Pods with resource limits are run
When the kubelet starts a Container of a Pod, it passes the CPU and memory limits to the container runtime.
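As a rough sketch of the conversions described in the preceding notes, a container with `requests.cpu: 250m`, `limits.cpu: 500m`, and `limits.memory: 128Mi` would be launched by Docker with flags along these lines (the image name is illustrative):

```shell
# --cpu-shares : 0.25 core x 1024              = 256
# --cpu-quota  : 500 millicores x 100000 / 1000 = 50000
# --cpu-period : default 100ms quota window    = 100000
# --memory     : 128Mi expressed in bytes      = 134217728
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 \
  --memory=134217728 nginx
```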
@ -263,42 +108,18 @@ resource limits, see the
To determine whether a Container cannot be scheduled or is being killed due to resource limits, see the [Troubleshooting](#troubleshooting) section.
<!--
## Monitoring compute resource usage
The resource usage of a Pod is reported as part of the Pod status.
If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md)
is configured for your cluster, then Pod resource usage can be retrieved from
the monitoring system.
-->
## Monitoring compute resource usage
The resource usage of a Pod is reported as part of the Pod status.
If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster, then Pod resource usage can be retrieved from the monitoring system.
<!--
## Troubleshooting
### My Pods are pending with event message failedScheduling
If the scheduler cannot find any node where a Pod can fit, the Pod remains
unscheduled until a place can be found. An event is produced each time the
scheduler fails to find a place for the Pod, like this:
-->
## Troubleshooting
### My Pods are pending with event message failedScheduling
If the scheduler cannot find any node where a Pod can fit, the Pod remains unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this:
```shell
$ kubectl describe pod frontend | grep -A 3 Events
Events:
@ -306,24 +127,6 @@ Events:
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
<!--
In the preceding example, the Pod named "frontend" fails to be scheduled due to
insufficient CPU resource on the node. Similar error messages can also suggest
failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod
is pending with a message of this type, there are several things to try:
- Add more nodes to the cluster.
- Terminate unneeded Pods to make room for pending Pods.
- Check that the Pod is not larger than all the nodes. For example, if all the
nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will
never be scheduled.
You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:
-->
In the preceding example, the Pod named "frontend" fails to be scheduled because of insufficient CPU resource on the node. Similar error messages can also suggest failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod is pending with a message of this type, there are several things to try:
```shell
@ -356,26 +159,6 @@ Allocated resources:
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
```
<!--
In the preceding output, you can see that if a Pod requests more than 1120m
CPUs or 6.23Gi of memory, it will not fit on the node.
By looking at the `Pods` section, you can see which Pods are taking up space on
the node.
The amount of resources available to Pods is less than the node capacity, because
system daemons use a portion of the available resources. The `allocatable` field
[NodeStatus](/docs/resources-reference/{{page.version}}/#nodestatus-v1-core)
gives the amount of resources that are available to Pods. For more information, see
[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md).
The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
-->
In the preceding output, you can see that if a Pod requests more than 1120m CPUs or 6.23Gi of memory, it will not fit on the node.
By looking at the `Pods` section, you can see which Pods are taking up space on the node.
@ -384,16 +167,6 @@ The amount of resources available to Pods is less than the node capacity, because system daemons use a portion of the available resources
The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured to limit the total amount of resources that can be consumed. If used in conjunction with namespaces, it can prevent one team from hogging all the resources.
<!--
### My Container is terminated
Your Container might get terminated because it is resource-starved. To check
whether a Container is being killed because it is hitting a resource limit, call
`kubectl describe pod` on the Pod of interest:
-->
### My Container is terminated
Your Container might get terminated because it is resource-starved. To check whether a Container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the Pod of interest:
@ -436,16 +209,6 @@ Events:
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
<!--
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
Container in the Pod was terminated and restarted five times.
You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
of previously terminated Containers:
-->
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` Container in the Pod was terminated and restarted five times.
You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status of previously terminated Containers:
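A sketch of such an invocation (the pod name is illustrative; the template walks the container statuses shown in the output below):

```shell
kubectl get pod simmemleak-pod -o go-template='{{range .status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\nLastState: "}}{{.lastState}}{{end}}'
```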
@ -456,57 +219,8 @@ Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
```
<!--
You can see that the Container was terminated because of `reason:OOM Killed`,
where `OOM` stands for Out Of Memory.
-->
You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
<!--
## Opaque integer resources (Alpha feature)
Kubernetes version 1.5 introduces Opaque integer resources. Opaque
integer resources allow cluster operators to advertise new node-level
resources that would be otherwise unknown to the system.
Users can consume these resources in Pod specs just like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.
**Note:** Opaque integer resources are Alpha in Kubernetes version 1.5.
Only resource accounting is implemented; node-level isolation is still
under active development.
Opaque integer resources are resources that begin with the prefix
`pod.alpha.kubernetes.io/opaque-int-resource-`. The API server
restricts quantities of these resources to whole numbers. Examples of
_valid_ quantities are `3`, `3000m` and `3Ki`. Examples of _invalid_
quantities are `0.5` and `1500m`.
There are two steps required to use opaque integer resources. First, the
cluster operator must advertise a per-node opaque resource on one or more
nodes. Second, users must request the opaque resource in Pods.
To advertise a new opaque integer resource, the cluster operator should
submit a `PATCH` HTTP request to the API server to specify the available
quantity in the `status.capacity` for a node in the cluster. After this
operation, the node's `status.capacity` will include a new resource. The
`status.allocatable` field is updated automatically with the new resource
asynchronously by the kubelet. Note that because the scheduler uses the
node `status.allocatable` value when evaluating Pod fitness, there may
be a short delay between patching the node capacity with a new resource and the
first pod that requests the resource to be scheduled on that node.
**Example:**
Here is an HTTP request that advertises five "foo" resources on node `k8s-node-1` whose master is `k8s-master`.
-->
## Opaque integer resources (Alpha feature)
Kubernetes version 1.5 introduces Opaque integer resources. Opaque integer resources allow cluster operators to advertise new node-level resources that would otherwise be unknown to the system.
@ -547,27 +261,6 @@ curl --header "Content-Type: application/json-patch+json" \
http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```
<!--
**Note**: In the preceding request, `~1` is the encoding for the character `/`
in the patch path. The operation path value in JSON-Patch is interpreted as a
JSON-Pointer. For more details, see
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
To consume an opaque resource in a Pod, include the name of the opaque
resource as a key in the `spec.containers[].resources.requests` map.
The Pod is scheduled only if all of the resource requests are
satisfied, including cpu, memory and any opaque resources. The Pod will
remain in the `PENDING` state as long as the resource request cannot be met by
any node.
**Example:**
The Pod below requests 2 cpus and 1 "foo" (an opaque resource.)
-->
**Note:** In the preceding request, `~1` is the encoding for the character `/` in the patch path. The operation path value in JSON-Patch is interpreted as a JSON-Pointer. For more details, see [IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
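For reference, the body of such a `PATCH` request would have roughly this shape (a sketch consistent with the five "foo" resources advertised in the example):

```json
[
  {
    "op": "add",
    "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo",
    "value": "5"
  }
]
```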
```yaml
@ -585,32 +278,6 @@ spec:
pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
```
<!--
## Planned Improvements
Kubernetes version 1.5 only allows resource quantities to be specified on a
Container. It is planned to improve accounting for resources that are shared by
all Containers in a Pod, such as
[emptyDir volumes](/docs/concepts/storage/volumes/#emptydir).
Kubernetes version 1.5 only supports Container requests and limits for CPU and
memory. It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom
[resource types](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/resources.md).
Kubernetes supports overcommitment of resources by supporting multiple levels of
[Quality of Service](http://issue.k8s.io/168).
In Kubernetes version 1.5, one unit of CPU means different things on different
cloud providers, and on different machine types within the same cloud providers.
For example, on AWS, the capacity of a node is reported in
[ECUs](http://aws.amazon.com/ec2/faqs/), while in GCE it is reported in logical
cores. We plan to revise the definition of the cpu resource to allow for more
consistency across providers and platforms.
-->
## Planned Improvements
Kubernetes version 1.5 only allows resource quantities to be specified on a Container. It is planned to improve accounting for resources that are shared by all Containers in a Pod, such as [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir).
@ -625,15 +292,6 @@ Kubernetes supports overcommitment of resources through multiple levels of [Quality of Service](http://issue.k8s.io/168).
{% capture whatsnext %}
<!--
* Get hands-on experience
[assigning CPU and RAM resources to a container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/).
* [Container](/docs/api-reference/{{page.version}}/#container-v1-core)
* [ResourceRequirements](/docs/resources-reference/{{page.version}}/#resourcerequirements-v1-core)
-->
- Get hands-on experience [assigning CPU and RAM resources to a container](/docs/tasks/configure-pod-container/assign-cpu-ram-container/)
- [Container](/docs/api-reference/{{page.version}}/#container-v1-core)
- [ResourceRequirements](/docs/resources-reference/{{page.version}}/#resourcerequirements-v1-core)

View File

@ -9,28 +9,10 @@ cn-reviewers:
title: Secret
---
<!--
Objects of type `secret` are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` is safer and more flexible than putting it verbatim in a `pod` definition or in a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/secrets.md) for more information.
-->
Objects of type `secret` are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` is safer and more flexible than putting it verbatim in a `pod` definition or in a docker image. See the [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/secrets.md) for more information.
* TOC
{:toc}
<!--
## Overview of Secrets
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.
Users can create secrets, and the system also creates some secrets.
To use a secret, a pod needs to reference the secret. A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
-->
## Overview of Secrets
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.
@ -39,20 +21,6 @@ A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key
To use a secret, a pod needs to reference the secret. A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of its containers, or used by the kubelet when pulling images for the pod.
<!--
### Built-in Secrets
#### Service Accounts Automatically Create and Attach Secrets with API Credentials
Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
The automatic creation and use of API credentials can be disabled or overridden if desired. However, if all you need to do is securely access the apiserver, this is the recommended workflow.
See the [Service Account](/docs/user-guide/service-accounts) documentation for more information on how Service Accounts work.
-->
### Built-in Secrets
#### Service Accounts Automatically Create and Attach Secrets with API Credentials
@ -63,16 +31,6 @@ Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods
See the [Service Account](/docs/user-guide/service-accounts) documentation for more information on how Service Accounts work.
<!--
### Creating your own Secrets
#### Creating a Secret Using kubectl create secret
Say that some pods need to access a database. The username and password that the pods should use is in the files `./username.txt` and `./password.txt` on your local machine.
-->
### Creating your own Secrets
#### Creating a Secret Using kubectl create secret
@ -85,12 +43,6 @@ $ echo -n "admin" > ./username.txt
$ echo -n "1f2d1e2e67df" > ./password.txt
```
<!--
The `kubectl create secret` command packages these files into a Secret and creates the object on the Apiserver.
-->
The `kubectl create secret` command packages these files into a Secret and creates the object on the API server.
```shell
@ -98,12 +50,6 @@ $ kubectl create secret generic db-user-pass --from-file=./username.txt --from-f
secret "db-user-pass" created
```
<!--
You can check that the secret was created like this:
-->
You can check that the secret was created like this:
```shell
@ -125,20 +71,6 @@ password.txt: 12 bytes
username.txt: 5 bytes
```
<!--
Note that neither `get` nor `describe` shows the contents of the file by default. This is to protect the secret from being exposed accidentally to someone looking or from being stored in a terminal log.
See [decoding a secret](#decoding-a-secret) for how to see the contents.
#### Creating a Secret Manually
You can also create a secret object in a file first, in json or yaml format, and then create that object.
Each item must be base64 encoded:
-->
Note that neither `get` nor `describe` shows the contents of the file by default. This is to protect the secret from being exposed accidentally to someone looking on, or from being stored in a terminal log.
See [decoding a secret](#decoding-a-secret) for how to see the contents.
@ -156,12 +88,6 @@ $ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
```
<!--
Now write a secret object that looks like this:
-->
Now write a secret object that looks like this:
```yaml
@ -175,14 +101,6 @@ data:
password: MWYyZDFlMmU2N2Rm
```
<!--
The data field is a map. Its keys must match [`DNS_SUBDOMAIN`](https://git.k8s.io/community/contributors/design-proposals/identifiers.md), except that leading dots are also allowed. The values are arbitrary data, encoded using base64.
Create the secret using [`kubectl create`](/docs/user-guide/kubectl/v1.7/#create):
-->
The data field is a map. Its keys must match [`DNS_SUBDOMAIN`](https://git.k8s.io/community/contributors/design-proposals/identifiers.md), except that leading dots are also allowed. The values are arbitrary data, encoded using base64.
Create the secret using [`kubectl create`](/docs/user-guide/kubectl/v1.7/#create):
@ -192,16 +110,6 @@ $ kubectl create -f ./secret.yaml
secret "mysecret" created
```
<!--
**Encoding Note:** The serialized JSON and YAML values of secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/OS X users should avoid using the `-b` option to split long lines. Conversely Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if `-w` option is not available.
#### Decoding a Secret
Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:
-->
**Encoding Note:** The serialized JSON and YAML values of secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/OS X, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands, or use the pipeline `base64 | tr -d '\n'` if the `-w` option is not available.
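For example, to produce a single-line value on Linux (reusing the password from the earlier example):

```shell
echo -n "1f2d1e2e67df" | base64 -w 0
# where -w is not available:
echo -n "1f2d1e2e67df" | base64 | tr -d '\n'
```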
#### Decoding a Secret
@ -225,12 +133,6 @@ metadata:
type: Opaque
```
<!--
Decode the password field:
-->
Decode the password field:
```shell
@ -238,25 +140,6 @@ $ echo "MWYyZDFlMmU2N2Rm" | base64 --decode
1f2d1e2e67df
```
<!--
### Using Secrets
Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. For example, they can hold credentials that other parts of the system should use to interact with external systems on your behalf.
#### Using Secrets as Files from a Pod
To consume a Secret in a volume in a Pod:
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
2. Modify your Pod definition to add a volume under `spec.volumes[]`. Name the volume anything, and have a `spec.volumes[].secret.secretName` field equal to the name of the secret object.
3. Add a `spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `spec.containers[].volumeMounts[].readOnly = true` and `spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
4. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.
This is an example of a pod that mounts a secret in a volume:
-->
### Using Secrets
Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. For example, they can hold credentials that other parts of the system should use to interact with external systems on your behalf.
@ -291,20 +174,6 @@ spec:
secretName: mysecret
```
<!--
Each secret you want to use needs to be referred to in `spec.volumes`.
If there are multiple containers in the pod, then each container needs its own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
You can package many files into one secret, or use many secrets, whichever is convenient.
**Projection of secret keys to specific paths**
We can also control the paths within the volume where Secret keys are projected. You can use `spec.volumes[].secret.items` field to change target path of each key:
-->
Each secret you want to use needs to be referred to in `spec.volumes`.
If there are multiple containers in the pod, then each container needs its own `volumeMounts` block, but only one `spec.volumes` entry is needed per secret.
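A sketch of that arrangement, assuming the `mysecret` object from earlier (the container names and images are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:          # each container declares its own mount
    - name: foo
      readOnly: true
      mountPath: "/etc/foo"
  - name: sidecar
    image: busybox
    volumeMounts:
    - name: foo
      readOnly: true
      mountPath: "/etc/foo"
  volumes:                 # the secret appears only once under spec.volumes
  - name: foo
    secret:
      secretName: mysecret
```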
@ -337,23 +206,6 @@ spec:
path: my-group/my-username
```
<!--
What will happen:
* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
* `password` secret is not projected
If `spec.volumes[].secret.items` is used, only keys specified in `items` are projected. To consume all keys from the secret, all of them must be listed in the `items` field. All listed keys must exist in the corresponding secret. Otherwise, the volume is not created.
**Secret files permissions**
You can also specify the permission mode bits files part of a secret will have. If you don't specify any, `0644` is used by default. You can specify a default mode for the whole secret volume and override per key if needed.
For example, you can specify a default mode like this:
-->
What will happen:
- The `username` secret is stored under the `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
@ -384,16 +236,6 @@ spec:
defaultMode: 256
```
<!--
Then, the secret will be mounted on `/etc/foo` and all the files created by the secret volume mount will have permission `0400`.
Note that the JSON spec doesn't support octal notation, so use the value 256 for 0400 permissions. If you use yaml instead of json for the pod, you can use octal notation to specify permissions in a more natural way.
You can also use mapping, as in the previous example, and specify different permission for different files like this:
-->
Then, the secret will be mounted on `/etc/foo`, and all the files created by the secret volume mount will have permission `0400`.
Note that the JSON spec doesn't support octal notation, so use the value 256 for `0400` permissions. If you use yaml instead of json for the pod, you can use octal notation to specify permissions in a more natural way.
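A sketch of the same default mode written in YAML, where octal notation is accepted:

```yaml
volumes:
- name: foo
  secret:
    secretName: mysecret
    defaultMode: 0400   # octal in YAML; a JSON pod spec must use the decimal 256
```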
@ -422,18 +264,6 @@ spec:
mode: 511
```
<!--
In this case, the file resulting in `/etc/foo/my-group/my-username` will have permission value of `0777`. Owing to JSON limitations, you must specify the mode in decimal notation.
Note that this permission value might be displayed in decimal notation if you read it later.
**Consuming Secret Values from Volumes**
Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files. This is the result of commands executed inside the container from the example above:
-->
In this case, the file resulting in `/etc/foo/my-group/my-username` will have a permission value of `0777`. Owing to JSON limitations, you must specify the mode in decimal notation.
Note that this permission value might be displayed in decimal notation if you read it later.
@ -452,27 +282,6 @@ $ cat /etc/foo/password
1f2d1e2e67df
```
<!--
The program in a container is responsible for reading the secrets from the files.
**Mounted Secrets are updated automatically**
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well.
Kubelet is checking whether the mounted secret is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the secret. As a result, the total delay from the moment when the secret is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of secrets cache in kubelet.
#### Using Secrets as Environment Variables
To use a secret in an environment variable in a pod:
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
2. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[x].valueFrom.secretKeyRef`.
3. Modify your image and/or command line so that the program looks for values in the specified environment variables
This is an example of a pod that uses secrets from environment variables:
-->
The program in a container is responsible for reading the secrets from the files.
**Mounted Secrets are updated automatically**
@ -512,14 +321,6 @@ spec:
restartPolicy: Never
```
<!--
**Consuming Secret Values from Environment Variables**
Inside a container that consumes a secret in an environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data. This is the result of commands executed inside the container from the example above:
-->
**Consuming Secret Values from Environment Variables**
Inside a container that consumes a secret via environment variables, the secret keys appear as normal environment variables containing the base64-decoded values of the secret data. This is the result of commands executed inside the container from the example above:
@ -531,18 +332,6 @@ $ echo $SECRET_PASSWORD
1f2d1e2e67df
```
<!--
#### Using imagePullSecrets
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
**Manually specifying an imagePullSecret**
Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
-->
#### Using imagePullSecrets
An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the kubelet, so it can pull a private image on behalf of your Pod.
@ -551,18 +340,6 @@ An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password
Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
<!--
### Arranging for imagePullSecrets to be Automatically Attached
You can manually create an imagePullSecret, and reference it from a serviceAccount. Any pods created with that serviceAccount or that default to use that serviceAccount, will get their imagePullSecret field set to that of the service account. See [Adding ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account) for a detailed explanation of that process.
#### Automatic Mounting of Manually Created Secrets
Manually created secrets (e.g. one containing a token for accessing a github account) can be automatically attached to pods based on their service account. See [Injecting Information into Pods Using a PodPreset](/docs/tasks/run-application/podpreset/) for a detailed explanation of that process.
-->
### Arranging for imagePullSecrets to be Automatically Attached
You can manually create an imagePullSecret and reference it from a serviceAccount. Any pods created with that serviceAccount, or that default to use that serviceAccount, will get their imagePullSecrets field set to that of the service account. See [Adding ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account) for a detailed explanation of that process.
@ -571,17 +348,6 @@ Manually created secrets (e.g. one containing a token for accessing a github acc
Manually created secrets (e.g. one containing a token for accessing a github account) can be automatically attached to pods based on their service account. See [Injecting Information into Pods Using a PodPreset](/docs/tasks/run-application/podpreset/) for a detailed explanation of that process.
<!--
## Details
### Restrictions
Secret volume sources are validated to ensure that the specified object reference actually points to an object of type `Secret`. Therefore, a secret needs to be created before any pods that depend on it.
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
-->
## Details
### Restrictions
@ -590,20 +356,6 @@ Secret API objects reside in a namespace. They can only be referenced by pods
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
<!--
Individual secrets are limited to 1MB in size. This is to discourage creation of very large secrets which would exhaust apiserver and kubelet memory. However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets is a planned feature.
Kubelet only supports use of secrets for Pods it gets from the API server. This includes any pods created using kubectl, or indirectly via a replication controller. It does not include pods created via the kubelets `--manifest-url` flag, its `--config` flag, or its REST API (these are not common ways to create pods.)
Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional. References to Secrets that do not exist will prevent the pod from starting.
References via `secretKeyRef` to keys that do not exist in a named Secret will prevent the pod from starting.
Secrets used to populate environment variables via `envFrom` that have keys that are considered invalid environment variable names will have those keys skipped. The pod will be allowed to start. There will be an event whose reason is `InvalidVariableNames` and the message will contain the list of invalid keys that were skipped. The example shows a pod which refers to the default/mysecret ConfigMap that contains 2 invalid keys, 1badkey and 2alsobad.
-->
Individual secrets are limited to 1MB in size. This is to discourage the creation of very large secrets that would exhaust apiserver and kubelet memory. However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets are a planned feature.
The kubelet only supports the use of secrets for Pods it gets from the API server. This includes any pods created using kubectl, or indirectly via a replication controller. It does not include pods created via the kubelet's `--manifest-url` flag, its `--config` flag, or its REST API (these are not common ways to create pods).
@ -620,20 +372,6 @@ LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT
0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
```
<!--
### Secret and Pod Lifetime interaction
When a pod is created via the API, there is no check whether a referenced secret exists. Once a pod is scheduled, the kubelet will try to fetch the secret value. If the secret cannot be fetched because it does not exist or because of a temporary lack of connection to the API server, kubelet will periodically retry. It will report an event about the pod explaining the reason it is not started yet. Once the secret is fetched, the kubelet will create and mount a volume containing it. None of the pod's containers will start until all the pod's volumes are mounted.
## Use cases
### Use-Case: Pod with ssh keys
Create a secret containing some ssh keys:
-->
### Secret and Pod Lifetime interaction
When a pod is created via the API, there is no check whether a referenced secret exists. Once a pod is scheduled, the kubelet will try to fetch the secret value. If the secret cannot be fetched because it does not exist, or because of a temporary lack of connection to the API server, the kubelet will periodically retry. It will report an event about the pod explaining the reason it is not started yet. Once the secret is fetched, the kubelet will create and mount a volume containing it. None of the pod's containers will start until all the pod's volumes are mounted.
@ -648,14 +386,6 @@ Create a secret containing some ssh keys:
$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```
<!--
**Security Note:** think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and can revoke if they are compromised.
Now we can create a pod which references the secret with the ssh key and consumes it in a volume:
-->
**Security Note:** think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and which you can revoke if it is compromised.
Now we can create a pod which references the secret with the ssh key and consumes it in a volume:
@ -681,12 +411,6 @@ spec:
mountPath: "/etc/secret-volume"
```
<!--
When the container's command runs, the pieces of the key will be available in:
-->
When the container's command runs, the pieces of the key will be available in:
```shell
@ -694,24 +418,8 @@ When the container's command runs, the pieces of the key will be available in:
/etc/secret-volume/ssh-privatekey
```
<!--
The container is then free to use the secret data to establish an ssh connection.
-->
The container is then free to use the secret data to establish an ssh connection.
<!--
### Use-Case: Pods with prod / test credentials
This example illustrates a pod which consumes a secret containing prod credentials and another pod which consumes a secret with test environment credentials.
Make the secrets:
-->
### Use-Case: Pods with prod / test credentials
This example illustrates a pod which consumes a secret containing prod credentials and another pod which consumes a secret with test environment credentials.
@ -725,12 +433,6 @@ $ kubectl create secret generic test-db-secret --from-literal=username=testuser
secret "test-db-secret" created
```
<!--
Now make the pods:
-->
Now make the pods:
```yaml
@ -775,12 +477,6 @@ items:
mountPath: "/etc/secret-volume"
```
<!--
Both containers will have the following files present on their filesystems with the values for each container's environment:
-->
Both containers will have the following files present on their filesystems, with the values for each container's environment:
```shell
@ -788,12 +484,6 @@ Both containers will have the following files present on their filesystems with
/etc/secret-volume/password
```
<!--
Note how the specs for the two pods differ only in one field; this facilitates creating pods with different capabilities from a common pod config template. You could further simplify the base pod specification by using two Service Accounts: one called, say, `prod-user` with the `prod-db-secret`, and one called, say, `test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
-->
Note how the specs for the two pods differ only in one field; this facilitates creating pods with different capabilities from a common pod config template. You could further simplify the base pod specification by using two Service Accounts: one called, say, `prod-user` with the `prod-db-secret`, and one called, say, `test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
```yaml
@ -810,15 +500,6 @@ spec:
image: myClientImage
```
<!--
### Use-case: Dotfiles in secret volume
In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply
make that key begin with a dot. For example, when the following secret is mounted into a volume:
-->
### Use-case: Dotfiles in secret volume
In order to make a piece of data 'hidden' (that is, in a file whose name begins with a dot character), simply make that key begin with a dot. For example, when the following secret is mounted into a volume:
@ -853,35 +534,12 @@ spec:
mountPath: "/etc/secret-volume"
```
<!--
The `secret-volume` will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`.
**NOTE**
Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents.
-->
The `secret-volume` will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`.
**NOTE**
Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents.
<!--
### Use-case: Secret visible to one container in a pod
Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker.
This could be divided into two processes in two containers: a frontend container which handles user interaction and business logic, but which cannot see the private key; and a signer container that can see the private key, and responds to simple signing requests from the frontend (e.g. over localhost networking).
With this partitioned approach, an attacker now has to trick the application server into doing something rather arbitrary, which may be harder than getting it to read a file.
-->
### Use-case: Secret visible to one container in a pod
Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker.
@ -892,24 +550,6 @@ With this partitioned approach, an attacker now has to trick the application ser
<!-- TODO: explain how to do this while still using automation. -->
<!--
## Best practices
### Clients that use the secrets API
When deploying applications that interact with the secrets API, access should be limited using [authorization policies](https://kubernetes.io/docs/admin/authorization/) such as [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/).
Secrets often hold values that span a spectrum of importance, many of which can cause escalations within Kubernetes (e.g. service account tokens) and to external systems. Even if an individual app can reason about the power of the secrets it expects to interact with, other apps within the same namespace can render those assumptions invalid.
For these reasons `watch` and `list` requests for secrets within a namespace are extremely powerful capabilities and should be avoided, since listing secrets allows the clients to inspect the values if all secrets are in that namespace. The ability to `watch` and `list` all secrets in a cluster should be reserved for only the most privileged, system-level components.
Applications that need to access the secrets API should perform `get` requests on the secrets they need. This lets administrators restrict access to all secrets while [white-listing access to individual instances](https://kubernetes.io/docs/admin/authorization/rbac/#referring-to-resources) that the app needs.
For improved performance over a looping `get`, clients can design resources that reference a secret then `watch` the resource, re-requesting the secret when the reference changes. Additionally, a ["bulk watch" API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bulk_watch.md) to let clients `watch` individual resources has also been proposed, and will likely be available in future releases of Kubernetes.
-->
## Best practices
### Clients that use the secrets API
@ -924,26 +564,6 @@ Secrets often hold values that span a spectrum of importance, many of which can cause escalations
For improved performance over a looping `get`, clients can design resources that reference a secret and then `watch` that resource, re-requesting the secret when the reference changes. Additionally, a ["bulk watch" API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bulk_watch.md) to let clients `watch` individual resources has also been proposed, and will likely be available in future releases of Kubernetes.
<!--
## Security Properties
### Protections
Because `secret` objects can be created independently of the `pods` that use them, there is less risk of the secret being exposed during the workflow of creating, viewing, and editing pods. The system can also take additional precautions with `secret` objects, such as avoiding writing them to disk where possible.
A secret is only sent to a node if a pod on that node requires it. It is not written to disk. It is stored in a tmpfs. It is deleted once the pod that depends on it is deleted.
On most Kubernetes-project-maintained distributions, communication between user to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. Secrets are protected when transmitted over these channels.
Secret data on nodes is stored in tmpfs volumes and thus does not come to rest on the node.
There may be secrets for several pods on the same node. However, only the secrets that a pod requests are potentially visible within its containers. Therefore, one Pod does not have access to the secrets of another pod.
There may be several containers in a pod. However, each container in a pod has to request the secret volume in its `volumeMounts` for it to be visible within the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
-->
## Security Properties
### Protections
@ -960,22 +580,6 @@ There may be several containers in a pod. However, each container in a pod has
There may be several containers in a pod. However, each container in a pod has to request the secret volume in its `volumeMounts` for it to be visible within the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
<!--
### Risks
- In the API server secret data is stored as plaintext in etcd; therefore:
- Administrators should limit access to etcd to admin users
- Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks used by etcd when no longer in use
- If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a source repository means the secret is compromised. Base64 encoding is not an encryption method and is considered the same as plain text.
- Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party.
- A user who can create a pod that uses a secret can also see the value of that secret. Even if apiserver policy does not allow that user to read the secret object, the user could run a pod which exposes the secret.
- If multiple replicas of etcd are run, then the secrets will be shared between them. By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
- Currently, anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node.
-->
### Risks
- In the API server, secret data is stored as plaintext in etcd; therefore:

View File

@ -14,53 +14,12 @@ cn-reviewers:
{% comment %}Updated: 4/14/2015{% endcomment %}
{% comment %}Edited and moved to Concepts section: 2/2/17{% endcomment %}
<!--
This page describes the lifecycle of a Pod.
-->
This page describes the lifecycle of a Pod.
{% endcapture %}
{% capture body %}
<!--
## Pod phase
A Pod's `status` field is a
[PodStatus](/docs/resources-reference/v1.6/#podstatus-v1-core)
object, which has a `phase` field.
The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of Container or Pod state, nor is it intended to be a comprehensive state machine.
The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that
have a given `phase` value.
Here are the possible values for `phase`:
* Pending: The Pod has been accepted by the Kubernetes system, but one or more of
the Container images has not been created. This includes time before being
scheduled as well as time spent downloading images over the network,
which could take a while.
* Running: The Pod has been bound to a node, and all of the Containers have been
created. At least one Container is still running, or is in the process of
starting or restarting.
* Succeeded: All Containers in the Pod have terminated in success, and will not
be restarted.
* Failed: All Containers in the Pod have terminated, and at least one Container
has terminated in failure. That is, the Container either exited with non-zero
status or was terminated by the system.
* Unknown: For some reason the state of the Pod could not be obtained, typically
due to an error in communicating with the host of the Pod.
-->
## Pod phase
A Pod's `status` field is a [PodStatus](/docs/resources-reference/v1.7/#podstatus-v1-core) object, which has a `phase` field.
@ -77,68 +36,10 @@ The number and meanings of Pod phase values are tightly guarded. Other than what is documented here,
- Failed: All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with a non-zero status or was terminated by the system.
- Unknown: For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod.
<!--
## Pod conditions
A Pod has a PodStatus, which has an array of
[PodConditions](/docs/resources-reference/v1.6/#podcondition-v1-core). Each element
of the PodCondition array has a `type` field and a `status` field. The `type`
field is a string, with possible values PodScheduled, Ready, Initialized, and
Unschedulable. The `status` field is a string, with possible values True, False,
and Unknown.
-->
## Pod conditions
A Pod has a PodStatus, which has an array of [PodConditions](/docs/resources-reference/v1.7/#podcondition-v1-core). Each element of the PodCondition array has a `type` field and a `status` field. The `type` field is a string, with possible values PodScheduled, Ready, Initialized, and Unschedulable. The `status` field is a string, with possible values True, False, and Unknown.
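As an illustration, the `conditions` array in the status of a scheduled, healthy Pod might look roughly like this (a representative sketch, not an exhaustive listing):

```yaml
status:
  phase: Running
  conditions:
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
  - type: PodScheduled
    status: "True"
```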
<!--
## Container probes
A [Probe](/docs/resources-reference/v1.6/#probe-v1-core) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler) implemented by
the Container. There are three types of handlers:
* [ExecAction](/docs/resources-reference/v1.6/#execaction-v1-core):
Executes a specified command inside the Container. The diagnostic
is considered successful if the command exits with a status code of 0.
* [TCPSocketAction](/docs/resources-reference/v1.6/#tcpsocketaction-v1-core):
Performs a TCP check against the Container's IP address on
a specified port. The diagnostic is considered successful if the port is open.
* [HTTPGetAction](/docs/resources-reference/v1.6/#httpgetaction-v1-core):
Performs an HTTP Get request against the Container's IP
address on a specified port and path. The diagnostic is considered successful
if the response has a status code greater than or equal to 200 and less than 400.
Each probe has one of three results:
* Success: The Container passed the diagnostic.
* Failure: The Container failed the diagnostic.
* Unknown: The diagnostic failed, so no action should be taken.
The kubelet can optionally perform and react to two kinds of probes on running
Containers:
* `livenessProbe`: Indicates whether the Container is running. If
the liveness probe fails, the kubelet kills the Container, and the Container
is subjected to its [restart policy](#restart-policy). If a Container does not
provide a liveness probe, the default state is `Success`.
* `readinessProbe`: Indicates whether the Container is ready to service requests.
If the readiness probe fails, the endpoints controller removes the Pod's IP
address from the endpoints of all Services that match the Pod. The default
state of readiness before the initial delay is `Failure`. If a Container does
not provide a readiness probe, the default state is `Success`.
-->
## Container probes
A [Probe](/docs/resources-reference/v1.7/#probe-v1-core) is a diagnostic performed periodically by the [kubelet](/docs/admin/kubelet/) on a Container. To perform a diagnostic, the kubelet calls a [Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler) implemented by the Container. There are three types of handlers:
@ -158,36 +59,6 @@ The kubelet can optionally perform and react to two kinds of probes on running Containers:
- `livenessProbe`: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). If a Container does not provide a liveness probe, the default state is `Success`.
- `readinessProbe`: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is `Failure`. If a Container does not provide a readiness probe, the default state is `Success`.
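A minimal sketch combining both probe kinds on one container (the image, path, port, and timings are illustrative):

```yaml
containers:
- name: app
  image: nginx
  livenessProbe:
    httpGet:                 # HTTPGetAction: success is an HTTP status in [200, 400)
      path: /healthz
      port: 80
    initialDelaySeconds: 15
  readinessProbe:
    tcpSocket:               # TCPSocketAction: success means the port accepts connections
      port: 80
    initialDelaySeconds: 5
```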
<!--
### When should you use liveness or readiness probes?
If the process in your Container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's `restartPolicy`.
If you'd like your Container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.
If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If you want your Container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
Note that if you just want to be able to drain requests when the Pod is deleted,
you do not necessarily need a readiness probe; on deletion, the Pod automatically
puts itself into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the Containers in the Pod
to stop.
-->
### When should you use liveness or readiness probes?
If the process in your Container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's `restartPolicy`.
@ -200,78 +71,14 @@ to stop.
Note that if you just want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the Containers in the Pod to stop.
<!--
## Pod and Container status
For detailed information about Pod Container status, see
[PodStatus](/docs/resources-reference/v1.6/#podstatus-v1-core)
and
[ContainerStatus](/docs/resources-reference/v1.6/#containerstatus-v1-core).
Note that the information reported as Pod status depends on the current
[ContainerState](/docs/resources-reference/v1.6/#containerstatus-v1-core).
-->
## Pod and Container status
For detailed information about Pod and Container status, see [PodStatus](/docs/resources-reference/v1.7/#podstatus-v1-core) and [ContainerStatus](/docs/resources-reference/v1.7/#containerstatus-v1-core). Note that the information reported as Pod status depends on the current [ContainerState](/docs/resources-reference/v1.7/#containerstatus-v1-core).
<!--
## Restart policy
A PodSpec has a `restartPolicy` field with possible values Always, OnFailure,
and Never. The default value is Always.
`restartPolicy` applies to all Containers in the Pod. `restartPolicy` only
refers to restarts of the Containers by the kubelet on the same node. Failed
Containers that are restarted by the kubelet are restarted with an exponential
back-off delay (10s, 20s, 40s ...) capped at five minutes, and is reset after ten
minutes of successful execution. As discussed in the
[Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof),
once bound to a node, a Pod will never be rebound to another node.
-->
## Restart policy
A PodSpec has a `restartPolicy` field with possible values Always, OnFailure, and Never. The default value is Always. `restartPolicy` applies to all Containers in the Pod. `restartPolicy` only refers to restarts of the Containers by the kubelet on the same node. Failed Containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s ...) capped at five minutes, and the delay is reset after ten minutes of successful execution. As discussed in the [Pods document](/docs/user-guide/pods/#durability-of-pods-or-lack-thereof), once bound to a node, a Pod will never be rebound to another node.
<!--
## Pod lifetime
In general, Pods do not disappear until someone destroys them. This might be a
human or a controller. The only exception to
this rule is that Pods with a `phase` of Succeeded or Failed for more than some
duration (determined by the master) will expire and be automatically destroyed.
Three types of controllers are available:
- Use a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) for Pods that are expected to terminate,
for example, batch computations. Jobs are appropriate only for Pods with
`restartPolicy` equal to OnFailure or Never.
- Use a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/),
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/), or
[Deployment](/docs/concepts/workloads/controllers/deployment/)
for Pods that are not expected to terminate, for example, web servers.
ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
Always.
- Use a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for Pods that need to run one per
machine, because they provide a machine-specific system service.
All three types of controllers contain a PodTemplate. It
is recommended to create the appropriate controller and let
it create Pods, rather than directly create Pods yourself. That is because Pods
alone are not resilient to machine failures, but controllers are.
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
-->
## Pod lifetime
In general, Pods do not disappear until someone destroys them. This might be a human or a controller. The only exception to this rule is that Pods with a `phase` of Succeeded or Failed for more than some duration (determined by the master) will expire and be automatically destroyed.
@ -288,17 +95,6 @@ applies a policy for setting the `phase` of all Pods on the lost node to Failed.
If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the `phase` of all Pods on the lost node to Failed.
<!--
## Examples
### Advanced liveness probe example
Liveness probes are executed by the kubelet, so all requests are made in the
kubelet network namespace.
-->
## Examples
### Advanced liveness probe example
@ -333,69 +129,6 @@ spec:
name: liveness
```
<!--
### Example states
* Pod is running and has one Container. Container exits with success.
* Log completion event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Pod `phase` becomes Succeeded.
* Never: Pod `phase` becomes Succeeded.
* Pod is running and has one Container. Container exits with failure.
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Pod `phase` becomes Failed.
* Pod is running and has two Containers. Container 1 exits with failure.
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Do not restart Container; Pod `phase` stays Running.
* If Container 1 is not running, and Container 2 exits:
* Log failure event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Pod `phase` becomes Failed.
* Pod is running and has one Container. Container runs out of memory.
* Container terminates in failure.
* Log OOM event.
* If `restartPolicy` is:
* Always: Restart Container; Pod `phase` stays Running.
* OnFailure: Restart Container; Pod `phase` stays Running.
* Never: Log failure event; Pod `phase` becomes Failed.
* Pod is running, and a disk dies.
* Kill all Containers.
* Log appropriate event.
* Pod `phase` becomes Failed.
* If running under a controller, Pod is recreated elsewhere.
* Pod is running, and its node is segmented out.
* Node controller waits for timeout.
* Node controller sets Pod `phase` to Failed.
* If running under a controller, Pod is recreated elsewhere.
{% endcapture %}
{% capture whatsnext %}
* Get hands-on experience
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
* Get hands-on experience
[configuring liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
-->
### Example states
- Pod is running and has one Container. Container exits with success.
  - Log completion event.
  - If `restartPolicy` is:
    - Always: Restart Container; Pod `phase` stays Running.
    - OnFailure: Pod `phase` becomes Succeeded.
    - Never: Pod `phase` becomes Succeeded.
---
title: kubectl for Docker Users
---
<!--
In this doc, we introduce the Kubernetes command line for interacting with the api to docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains the kubectl equivalent.
-->
In this doc, we introduce the Kubernetes command line for interacting with the API to docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users, but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains its kubectl equivalent.
* TOC
{:toc}
#### docker run
<!--
How do I run an nginx Deployment and expose it to the world? Checkout [kubectl run](/docs/user-guide/kubectl/{{page.version}}/#run).
With docker:
-->
How do I run an nginx Deployment and expose it to the world? Check out [kubectl run](/docs/user-guide/kubectl/{{page.version}}/#run).
With docker:
```shell
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
a9ec34d98787        nginx               "nginx -g 'daemon of   2 seconds ago       Up 2 seconds        0.0.0.0:80->80/tcp, 443/tcp   nginx-app
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
deployment "nginx-app" created
```
<!--
`kubectl run` creates a Deployment named "nginx-app" on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/{{page.version}}/#run) for more details.
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands. Now, we can expose a new Service with the deployment created above:
-->
`kubectl run` creates a Deployment named "nginx-app" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/{{page.version}}/#run) for more details.
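For instance, the old behavior could be requested like this (a hypothetical invocation):
```shell
kubectl run nginx-app --image=nginx --port=80 --generator=run/v1
```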
Note that `kubectl` commands print the type and name of the resource created or mutated, which can then be used in subsequent commands. Now, we can expose a new Service with the Deployment created above:
```shell
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
```
<!--
With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use:
-->
With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/), which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`. If you want to run things in the foreground, use:
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
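A concrete foreground invocation might look like this (the name and image are placeholders):
```shell
kubectl run -i --tty --attach nginx-app --image=nginx
```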
<!--
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`, there is no ability to control which streams are attached (`docker -a ...`).
Because we start a Deployment for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`), this is different from `docker run -it`.
To destroy the Deployment (and its pods) you need to run `kubectl delete deployment <name>`.
-->
`docker run ...` 不同的是,如果指定了 `--attach` ,我们将连接到 `stdin``stdout` 和 `stderr`,而不能控制具体连接到哪个输出流(`docker -a ...`)。
因为我们使用 Deployment 启动了容器,如果您终止了连接到的进程(例如 `ctrl-c`),容器将会重启,这跟 `docker run -it` 不同。
To destroy the Deployment (and its pods) you need to run `kubectl delete deployment <name>`.
#### docker ps
<!--
How do I list what is currently running? Checkout [kubectl get](/docs/user-guide/kubectl/{{page.version}}/#get).
With docker:
-->
How do I list what is currently running? Check out [kubectl get](/docs/user-guide/kubectl/{{page.version}}/#get).
With docker:
```shell
$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
a9ec34d98787        nginx               "nginx -g 'daemon of   About an hour ago   Up About an hour    0.0.0.0:80->80/tcp, 443/tcp   nginx-app
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-5jyvm   1/1       Running   0          1h
```
#### docker attach
<!--
How do I attach to a process that is already running in a container? Checkout [kubectl attach](/docs/user-guide/kubectl/{{page.version}}/#attach).
With docker:
-->
How do I attach to a process that is already running in a container? Check out [kubectl attach](/docs/user-guide/kubectl/{{page.version}}/#attach).
With docker:
```shell
$ docker attach a9ec34d98787
...
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl attach -it nginx-app-5jyvm
...
```
#### docker exec
<!--
How do I execute a command in a container? Checkout [kubectl exec](/docs/user-guide/kubectl/{{page.version}}/#exec).
With docker:
-->
How do I execute a command in a container? Check out [kubectl exec](/docs/user-guide/kubectl/{{page.version}}/#exec).
With docker:
```shell
$ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
<!--
What about interactive commands?
With docker:
-->
What about interactive commands?
With docker:
```shell
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
<!--
For more information see [Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/).
-->
For more information see [Getting a Shell to a Running Container](/docs/tasks/kubectl/get-shell-running-container/).
#### docker logs
<!--
How do I follow stdout/stderr of a running process? Checkout [kubectl logs](/docs/user-guide/kubectl/{{page.version}}/#logs).
With docker:
-->
How do I follow the stdout/stderr of a running process? Check out [kubectl logs](/docs/user-guide/kubectl/{{page.version}}/#logs).
With docker:
```shell
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
<!--
Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
-->
Now is a good time to mention a slight difference between pods and containers: by default, a pod does not terminate if its process exits; instead, it restarts the process. This is similar to the docker run option `--restart=always`, with one major difference. In docker, the output of each invocation of the process is concatenated, but in Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```shell
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
<!--
See [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) for more information.
-->
See [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) for more information.
<!--
#### docker stop and docker rm
How do I stop and delete a running process? Checkout [kubectl delete](/docs/user-guide/kubectl/{{page.version}}/#delete).
With docker:
-->
#### docker stop and docker rm
How do I stop and delete a running process? Check out [kubectl delete](/docs/user-guide/kubectl/{{page.version}}/#delete).
With docker:
```shell
$ docker stop a9ec34d98787
a9ec34d98787
$ docker rm a9ec34d98787
a9ec34d98787
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl delete deployment nginx-app
deployment "nginx-app" deleted
$ kubectl get po -l run=nginx-app
# Return nothing
```
<!--
Notice that we don't delete the pod directly. With kubectl we want to delete the Deployment that owns the pod. If we delete the pod directly, the Deployment will recreate the pod.
-->
Notice that we don't delete the pod directly. With kubectl we want to delete the Deployment that owns the pod. If we delete the pod directly, the Deployment will recreate the pod.
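To see this in action, one might delete a pod and watch the Deployment replace it (an illustrative session; the pod names and output shown are hypothetical):
```shell
$ kubectl delete pod nginx-app-5jyvm
pod "nginx-app-5jyvm" deleted
$ kubectl get po -l run=nginx-app
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-tr4zs   1/1       Running   0          5s
```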
#### docker login
<!--
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/docs/concepts/containers/images/#using-a-private-registry).
-->
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/docs/concepts/containers/images/#using-a-private-registry).
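The closest workflow is to create an image pull Secret and reference it from a Pod spec; a sketch (the registry address, credentials, and secret name are placeholders):
```shell
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com
```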
#### docker version
<!--
How do I get the version of my client and server? Checkout [kubectl version](/docs/user-guide/kubectl/{{page.version}}/#version).
With docker:
-->
How do I get the version of my client and server? Check out [kubectl version](/docs/user-guide/kubectl/{{page.version}}/#version).
With docker:
```shell
$ docker version
...
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl version
...
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4
```
#### docker info
<!--
How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](/docs/user-guide/kubectl/{{page.version}}/#cluster-info).
With docker:
-->
How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](/docs/user-guide/kubectl/{{page.version}}/#cluster-info).
With docker:
```shell
$ docker info
...
ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
<!--
With kubectl:
-->
With kubectl:
```shell
$ kubectl cluster-info
...
```
---
cn-reviewers:
- rootsongjc
---
<!--
JSONPath template is composed of JSONPath expressions enclosed by {}.
And we add three functions in addition to the original JSONPath syntax:
-->
JSONPath templates are composed of JSONPath expressions enclosed by {}.
In addition to the original JSONPath syntax, we add three functions:
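With kubectl, such templates are typically supplied through the `-o jsonpath` output option, for example (an illustrative command):
```shell
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
```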
<!--
1. The `$` operator is optional since the expression always starts from the root object by default.
2. We can use `""` to quote text inside JSONPath expressions.
3. We can use `range` operator to iterate lists.
The result object is printed as its String() function.
Given the input:
-->
1. The `$` operator is optional since the expression always starts from the root object by default.
2. We can use `""` to quote text inside JSONPath expressions.
3. We can use the `range` operator to iterate lists.
The result object is printed as its String() function.
Given the input:
```json
{
  "kind": "List",
  "items": [
    {
      "kind": "None",
      "metadata": {"name": "127.0.0.1"},
      "status": {
        "capacity": {"cpu": "4"},
        "addresses": [{"type": "LegacyHostIP", "address": "127.0.0.1"}]
      }
    },
    {
      "kind": "None",
      "metadata": {"name": "127.0.0.2"},
      "status": {
        "capacity": {"cpu": "8"},
        "addresses": [
          {"type": "LegacyHostIP", "address": "127.0.0.2"},
          {"type": "another", "address": "127.0.0.3"}
        ]
      }
    }
  ],
  "users": [
    {
      "name": "myself",
      "user": {}
    },
    {
      "name": "e2e",
      "user": {"username": "admin", "password": "secret"}
    }
  ]
}
```
<!--
| Function | Description | Example | Result |
| ----------------- | ------------------------- | ---------------------------------------- | ---------------------------------------- |
| text | the plain text | kind is {.kind} | kind is List |
| @ | the current object | {@} | the same as input |
| . or [] | child operator | {.kind} or {['kind']} | List |
| .. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e |
| * | wildcard. Get all objects | {.items[*].metadata.name} | [127.0.0.1 127.0.0.2] |
| [start:end:step]  | subscript operator        | {.users[0].name}                         | myself                                   |
| [,] | union operator | {.items[*]['metadata.name', 'status.capacity']} | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8] |
| ?() | filter | {.users[?(@.name=="e2e")].user.password} | secret |
| range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| "" | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2 |
-->
| Function          | Description               | Example                                  | Result                                   |
| ----------------- | ------------------------- | ---------------------------------------- | ---------------------------------------- |
| text              | the plain text            | kind is {.kind}                          | kind is List                             |