Merge pull request #23199 from tengqm/zh-links-tasks-8
[zh] Tidy up and fix links in tasks section (8/10)
commit 5271b7e865
@@ -1,15 +1,13 @@
---
title: 创建一个外部负载均衡器
title: 创建外部负载均衡器
content_type: task
weight: 80
---

<!--
---
title: Create an External Load Balancer
content_type: task
weight: 80
---
-->

<!-- overview -->

@@ -19,10 +17,10 @@ This page shows how to create an External Load Balancer.
-->
本文展示如何创建一个外部负载均衡器。

{{< note >}}
<!--
This feature is only available for cloud providers or environments which support external load balancers.
-->
{{< note >}}
此功能仅适用于支持外部负载均衡器的云提供商或环境。
{{< /note >}}

@@ -33,7 +31,9 @@ that sends traffic to the correct port on your cluster nodes
_provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package_.
-->
创建服务时,您可以选择自动创建云网络负载均衡器。这提供了一个外部可访问的 IP 地址,可将流量分配到集群节点上的正确端口上 _假设集群在支持的环境中运行,并配置了正确的云负载平衡器提供商包_。
创建服务时,你可以选择自动创建云网络负载均衡器。这提供了一个外部可访问的 IP 地址,
可将流量分配到集群节点上的正确端口上
( _假设集群在支持的环境中运行,并配置了正确的云负载平衡器提供商包_)。

<!--
For information on provisioning and using an Ingress resource that can give
@@ -41,17 +41,13 @@ services externally-reachable URLs, load balance the traffic, terminate SSL etc.
please check the [Ingress](/docs/concepts/services-networking/ingress/)
documentation.
-->
有关如何配置和使用 Ingress 资源为服务提供外部可访问的 URL、负载均衡流量、终止 SSL 等功能,请查看 [Ingress](/docs/concepts/services-networking/ingress/) 文档。

有关如何配置和使用 Ingress 资源为服务提供外部可访问的 URL、负载均衡流量、终止 SSL 等功能,
请查看 [Ingress](/zh/docs/concepts/services-networking/ingress/) 文档。

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!--
@@ -62,7 +58,8 @@ To create an external load balancer, add the following line to your
-->
## 配置文件

要创建外部负载均衡器,请将以下内容添加到 [服务配置文件](/docs/concepts/services-networking/service/#loadbalancer):
要创建外部负载均衡器,请将以下内容添加到
[服务配置文件](/zh/docs/concepts/services-networking/service/#loadbalancer):

```yaml
type: LoadBalancer

@@ -71,7 +68,7 @@ To create an external load balancer, add the following line to your
<!--
Your configuration file might look like:
-->
您的配置文件可能会如下所示:
你的配置文件可能会如下所示:

```yaml
apiVersion: v1

@@ -95,7 +92,7 @@ its `--type=LoadBalancer` flag:
-->
## 使用 kubectl

您也可以使用 `kubectl expose` 命令及其 `--type=LoadBalancer` 参数创建服务:
你也可以使用 `kubectl expose` 命令及其 `--type=LoadBalancer` 参数创建服务:

```bash
kubectl expose rc example --port=8765 --target-port=9376 \

@@ -112,7 +109,8 @@ For more information, including optional flags, refer to the
-->
此命令通过使用与引用资源(在上面的示例的情况下,名为 `example` 的 replication controller)相同的选择器来创建一个新的服务。

更多信息(包括更多的可选参数),请参阅 [`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose)。
更多信息(包括更多的可选参数),请参阅
[`kubectl expose` 指南](/docs/reference/generated/kubectl/kubectl-commands/#expose)。

<!--
## Finding your IP address

@@ -120,9 +118,9 @@ For more information, including optional flags, refer to the
You can find the IP address created for your service by getting the service
information through `kubectl`:
-->
## 找到您的 IP 地址
## 找到你的 IP 地址

您可以通过 `kubectl` 获取服务信息,找到为您的服务创建的 IP 地址:
你可以通过 `kubectl` 获取服务信息,找到为你的服务创建的 IP 地址:

```bash
kubectl describe services example-service

@@ -154,28 +152,29 @@ The IP address is listed next to `LoadBalancer Ingress`.
-->
IP 地址列在 `LoadBalancer Ingress` 旁边。

{{< note >}}
<!--
If you are running your service on Minikube, you can find the assigned IP address and port with:
-->
**注意:** 如果您在 Minikube 上运行服务,您可以通过以下命令找到分配的 IP 地址和端口:
{{< /note >}}
{{< note >}}
如果你在 Minikube 上运行服务,你可以通过以下命令找到分配的 IP 地址和端口:

```bash
minikube service example-service --url
```
{{< /note >}}
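
As a scriptable alternative to scanning the `kubectl describe` output, the
ingress address can be read directly; a minimal sketch, assuming the
`example-service` of type LoadBalancer used above has been provisioned:

```shell
# Read the load balancer's ingress IP straight from the Service status.
# Some providers (for example AWS) populate .hostname instead of .ip.
kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```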

<!--
## Preserving the client source IP
-->
## 保留客户端源 IP

<!--
Due to the implementation of this feature, the source IP seen in the target
container is *not the original source IP* of the client. To enable
preservation of the client IP, the following fields can be configured in the
service spec (supported in GCE/Google Kubernetes Engine environments):
-->
由于此功能的实现,目标容器中看到的源 IP 将 *不是客户端的原始源 IP*。要启用保留客户端 IP,可以在服务的 spec 中配置以下字段(支持 GCE/Google Kubernetes Engine 环境):
## 保留客户端源 IP

由于此功能的实现,目标容器中看到的源 IP 将 *不是客户端的原始源 IP*。
要启用保留客户端 IP,可以在服务的 spec 中配置以下字段(支持 GCE/Google Kubernetes Engine 环境):

<!--
* `service.spec.externalTrafficPolicy` - denotes if this Service desires to route
@@ -186,7 +185,11 @@ load-spreading. Local preserves the client source IP and avoids a second hop
for LoadBalancer and NodePort type services, but risks potentially imbalanced
traffic spreading.
-->
* `service.spec.externalTrafficPolicy` - 表示此服务是否希望将外部流量路由到节点本地或集群范围的端点。有两个可用选项:Cluster(默认)和 Local。Cluster 隐藏了客户端源 IP,可能导致第二跳到另一个节点,但具有良好的整体负载分布。Local 保留客户端源 IP 并避免 LoadBalancer 和 NodePort 类型服务的第二跳,但存在潜在的不均衡流量传播风险。
* `service.spec.externalTrafficPolicy` - 表示此服务是否希望将外部流量路由到节点本地或集群范围的端点。
  有两个可用选项:Cluster(默认)和 Local。
  Cluster 隐藏了客户端源 IP,可能导致第二跳到另一个节点,但具有良好的整体负载分布。
  Local 保留客户端源 IP 并避免 LoadBalancer 和 NodePort 类型服务的第二跳,
  但存在潜在的不均衡流量传播风险。

<!--
* `service.spec.healthCheckNodePort` - specifies the health check nodePort
@@ -198,8 +201,11 @@ user-specified `healthCheckNodePort` value if specified by the client. It only h
effect when `type` is set to LoadBalancer and `externalTrafficPolicy` is set
to Local.
-->

* `service.spec.healthCheckNodePort` - 指定服务的 healthcheck nodePort(数字端口号)。如果未指定 `healthCheckNodePort`,服务控制器从集群的 NodePort 范围内分配一个端口。您可以通过设置 API 服务器的命令行选项 `--service-node-port-range` 来配置上述范围。它将会使用用户指定的 `healthCheckNodePort` 值(如果被客户端指定)。仅当 `type` 设置为 LoadBalancer 并且 `externalTrafficPolicy` 设置为 Local 时才生效。
* `service.spec.healthCheckNodePort` - 指定服务的 healthcheck nodePort(数字端口号)。
  如果未指定 `healthCheckNodePort`,服务控制器从集群的 NodePort 范围内分配一个端口。
  你可以通过设置 API 服务器的命令行选项 `--service-node-port-range` 来配置上述范围。
  它将会使用用户指定的 `healthCheckNodePort` 值(如果被客户端指定)。
  仅当 `type` 设置为 LoadBalancer 并且 `externalTrafficPolicy` 设置为 Local 时才生效。
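
The two fields above can also be set on an existing Service without editing
its manifest; a hedged sketch using `kubectl patch`, assuming a Service named
`example-service` already exists:

```shell
# Switch the existing Service to the Local policy.
kubectl patch service example-service \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Confirm the resulting policy.
kubectl get service example-service \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```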

<!--
Setting `externalTrafficPolicy` to Local in the Service configuration file
@@ -224,10 +230,7 @@ spec:

<!--
## Garbage Collecting Load Balancers
-->
## 垃圾收集负载均衡器

<!--
In usual case, the correlating load balancer resources in cloud provider should
be cleaned up soon after a LoadBalancer type Service is deleted. But it is known
that there are various corner cases where cloud resources are orphaned after the
@@ -235,7 +238,12 @@ associated Service is deleted. Finalizer Protection for Service LoadBalancers wa
introduced to prevent this from happening. By using finalizers, a Service resource
will never be deleted until the correlating load balancer resources are also deleted.
-->
在通常情况下,应在删除 LoadBalancer 类型服务后立即清除云提供商中的相关负载均衡器资源。但是,众所周知,在删除关联的服务后,云资源被孤立的情况很多。引入了针对服务负载均衡器的终结器保护,以防止这种情况发生。通过使用终结器,在删除相关的负载均衡器资源之前,也不会删除服务资源。
## 回收负载均衡器

在通常情况下,应在删除 LoadBalancer 类型服务后立即清除云提供商中的相关负载均衡器资源。
但是,众所周知,在删除关联的服务后,云资源被孤立的情况很多。
引入了针对服务负载均衡器的终结器保护,以防止这种情况发生。
通过使用终结器,在删除相关的负载均衡器资源之前,也不会删除服务资源。

<!--
Specifically, if a Service has `type` LoadBalancer, the service controller will attach
@@ -244,25 +252,18 @@ The finalizer will only be removed after the load balancer resource is cleaned u
This prevents dangling load balancer resources even in corner cases such as the
service controller crashing.
-->
具体来说,如果服务具有 `type` LoadBalancer,则服务控制器将附加一个名为 `service.kubernetes.io/load-balancer-cleanup` 的终结器。
具体来说,如果服务具有 `type` LoadBalancer,则服务控制器将附加一个名为
`service.kubernetes.io/load-balancer-cleanup` 的终结器。
仅在清除负载均衡器资源后才能删除终结器。
即使在诸如服务控制器崩溃之类的极端情况下,这也可以防止负载均衡器资源悬空。
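
To see the finalizer described above on a live object, the metadata can be
inspected directly; a sketch, assuming a LoadBalancer Service named
`example-service`:

```shell
# The list is expected to include service.kubernetes.io/load-balancer-cleanup.
kubectl get service example-service \
  -o jsonpath='{.metadata.finalizers}'
```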

<!--
This feature is beta and enabled by default since Kubernetes v1.16. You can also
enable it in v1.15 (alpha) via the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`ServiceLoadBalancerFinalizer`.
-->
自 Kubernetes v1.16 起,此功能为 beta 版本并默认启用。您也可以通过[功能开关](/docs/reference/command-line-tools-reference/feature-gates/)`ServiceLoadBalancerFinalizer` 在 v1.15 (alpha)中启用它。

<!--
## External Load Balancer Providers

It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.
-->
## 外部负载均衡器提供商

<!--
It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.
-->
请务必注意,此功能的数据路径由 Kubernetes 集群外部的负载均衡器提供。

<!--
@@ -272,7 +273,11 @@ pods. The Kubernetes service controller automates the creation of the external l
firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service
object.
-->
当服务 `type` 设置为 LoadBalancer 时,Kubernetes 向集群中的 pod 提供的功能等同于 `type` 等于 ClusterIP,并通过使用 Kubernetes pod 的条目对负载均衡器(从外部到 Kubernetes)进行编程来扩展它。 Kubernetes 服务控制器自动创建外部负载均衡器、健康检查(如果需要)、防火墙规则(如果需要),并获取云提供商分配的外部 IP 并将其填充到服务对象中。
当服务 `type` 设置为 LoadBalancer 时,Kubernetes 向集群中的 Pod 提供的功能等同于
`type` 等于 ClusterIP,并通过使用 Kubernetes pod 的条目对负载均衡器(从外部到 Kubernetes)
进行编程来扩展它。
Kubernetes 服务控制器自动创建外部负载均衡器、健康检查(如果需要)、防火墙规则(如果需要),
并获取云提供商分配的外部 IP 并将其填充到服务对象中。

<!--
## Caveats and Limitations when preserving source IPs
@@ -282,7 +287,8 @@ kube-proxy rules which would correctly balance across all endpoints.
-->
## 保留源 IP 时的注意事项和限制

GCE/AWS 负载均衡器不为其目标池提供权重。对于旧的 LB kube-proxy 规则来说,这不是一个问题,它可以在所有端点之间正确平衡。
GCE/AWS 负载均衡器不为其目标池提供权重。
对于旧的 LB kube-proxy 规则来说,这不是一个问题,它可以在所有端点之间正确平衡。

<!--
With the new functionality, the external traffic is not equally load balanced across pods, but rather
@@ -290,13 +296,16 @@ equally balanced at the node level (because GCE/AWS and other external LB implem
for specifying the weight per node, they balance equally across all target nodes, disregarding the number of
pods on each node).
-->
使用新功能,外部流量不会在 pod 之间平均负载,而是在节点级别平均负载(因为 GCE/AWS 和其他外部 LB 实现无法指定每个节点的权重,因此它们的平衡跨所有目标节点,并忽略每个节点上的 pod 数量)。
使用新功能,外部流量不会在 pod 之间平均负载,而是在节点级别平均负载
(因为 GCE/AWS 和其他外部 LB 实现无法指定每个节点的权重,
因此它们的平衡跨所有目标节点,并忽略每个节点上的 Pod 数量)。

<!--
We can, however, state that for NumServicePods << NumNodes or NumServicePods >> NumNodes, a fairly close-to-equal
distribution will be seen, even without weights.
-->
但是,我们可以声明,对于 NumServicePods << NumNodes 或 NumServicePods >> NumNodes 时,即使没有权重,也会看到接近相等的分布。
但是,我们可以声明,对于 `NumServicePods << NumNodes` 或 `NumServicePods >> NumNodes` 时,
即使没有权重,也会看到接近相等的分布。

<!--
Once the external load balancers provide weights, this functionality can be added to the LB programming path.
@@ -307,6 +316,5 @@ Internal pod to pod traffic should behave similar to ClusterIP services, with eq
一旦外部负载平衡器提供权重,就可以将此功能添加到 LB 编程路径中。
*未来工作:1.4 版本不提供权重支持,但可能会在将来版本中添加*

内部 pod 到 pod 的流量应该与 ClusterIP 服务类似,所有 pod 的概率相同。

内部 Pod 到 Pod 的流量应该与 ClusterIP 服务类似,所有 Pod 的概率相同。

@@ -5,11 +5,9 @@ weight: 100
---

<!--
---
title: List All Container Images Running in a Cluster
content_type: task
weight: 100
---
-->

<!-- overview -->

@@ -18,17 +16,12 @@ weight: 100
This page shows how to use kubectl to list all of the Container images
for Pods running in a cluster.
-->
本文展示如何使用 kubectl 来列出集群中所有运行 pod 的容器的镜像
本文展示如何使用 kubectl 来列出集群中所有运行 Pod 的容器的镜像

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!--
@@ -36,7 +29,7 @@ In this exercise you will use kubectl to fetch all of the Pods
running in a cluster, and format the output to pull out the list
of Containers for each.
-->
在本练习中,您将使用 kubectl 来获取集群中运行的所有 Pod,并格式化输出来提取每个 pod 中的容器列表。
在本练习中,你将使用 kubectl 来获取集群中运行的所有 Pod,并格式化输出来提取每个 Pod 中的容器列表。

<!--
## List all Containers in all namespaces

@@ -57,13 +50,14 @@ of Containers for each.
- 使用 `kubectl get pods --all-namespaces` 获取所有命名空间下的所有 Pod
- 使用 `-o jsonpath={..image}` 来格式化输出,以仅包含容器镜像名称。
  这将以递归方式从返回的 json 中解析出 `image` 字段。
- 参阅 [jsonpath reference](/docs/user-guide/jsonpath/) 来获取更多关于如何使用 jsonpath 的信息。
- 参阅 [jsonpath 说明](/zh/docs/reference/kubectl/jsonpath/)
  获取更多关于如何使用 jsonpath 的信息。
- 使用标准化工具来格式化输出:`tr`, `sort`, `uniq`
  - 使用 `tr` 以用换行符替换空格
  - 使用 `sort` 来对结果进行排序
  - 使用 `uniq` 来聚合镜像计数

```sh
```shell
kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\

@@ -83,7 +77,7 @@ e.g. many fields are called `name` within a given item:

作为替代方案,可以使用 Pod 的镜像字段的绝对路径。这确保即使字段名称重复的情况下也能检索到正确的字段,例如,特定项目中的许多字段都称为 `name`:

```sh
```shell
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"
```

@@ -102,13 +96,14 @@ jsonpath 解释如下:
- `.containers[*]`: 对于每个容器
- `.image`: 获取镜像

{{< note >}}
<!--
When fetching a single Pod by name, e.g. `kubectl get pod nginx`,
the `.items[*]` portion of the path should be omitted because a single
Pod is returned instead of a list of items.
-->
**注意:** 按名字获取单个 Pod 时,例如 `kubectl get pod nginx`,路径的 `.items[*]` 部分应该省略,因为返回的是一个 Pod 而不是一个项目列表。
{{< note >}}
按名字获取单个 Pod 时,例如 `kubectl get pod nginx`,路径的 `.items[*]` 部分应该省略,
因为返回的是一个 Pod 而不是一个项目列表。
{{< /note >}}

<!--
@@ -121,7 +116,7 @@ iterate over elements individually.

可以使用 `range` 操作进一步控制格式化,以单独操作每个元素。

```sh
```shell
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
sort
```

@@ -132,11 +127,11 @@ sort
To target only Pods matching a specific label, use the -l flag. The
following matches only Pods with labels matching `app=nginx`.
-->
## 列出以 label 过滤后的 Pod 的所有容器
## 列出以标签过滤后的 Pod 的所有容器

要获取匹配特定标签的 Pod,请使用 -l 参数。以下匹配仅与标签 `app=nginx` 相符的 Pod。

```sh
```shell
kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx
```

@@ -150,7 +145,7 @@ following matches only Pods in the `kube-system` namespace.

要获取匹配特定命名空间的 Pod,请使用 namespace 参数。以下仅匹配 `kube-system` 命名空间下的 Pod。

```sh
```shell
kubectl get pods --namespace kube-system -o jsonpath="{..image}"
```

@@ -164,33 +159,20 @@ for formatting the output:

作为 jsonpath 的替代,Kubectl 支持使用 [go-templates](https://golang.org/pkg/text/template/) 来格式化输出:

```sh
```shell
kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}"
```

<!-- discussion -->

## {{% heading "whatsnext" %}}

<!--
### Reference

* [Jsonpath](/docs/user-guide/jsonpath/) reference guide
* [Jsonpath](/docs/reference/kubectl/jsonpath/) reference guide
* [Go template](https://golang.org/pkg/text/template/) reference guide
-->
### 参考

* [Jsonpath](/docs/user-guide/jsonpath/) 参考指南
* [Jsonpath](/zh/docs/reference/kubectl/jsonpath/) 参考指南
* [Go template](https://golang.org/pkg/text/template/) 参考指南

@@ -5,11 +5,9 @@ weight: 60
---

<!--
---
title: Use a Service to Access an Application in a Cluster
content_type: tutorial
weight: 60
---
-->

<!-- overview -->

@@ -19,22 +17,15 @@ This page shows how to create a Kubernetes Service object that external
clients can use to access an application running in a cluster. The Service
provides load balancing for an application that has two running instances.
-->
本文展示如何创建一个 Kubernetes 服务对象,能让外部客户端访问在集群中运行的应用。该服务为一个应用的两个运行实例提供负载均衡。
本文展示如何创建一个 Kubernetes 服务对象,能让外部客户端访问在集群中运行的应用。
该服务为一个应用的两个运行实例提供负载均衡。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

## {{% heading "objectives" %}}

<!--
* Run two instances of a Hello World application.
* Create a Service object that exposes a node port.
@@ -44,9 +35,6 @@ provides load balancing for an application that has two running instances.
* 创建一个服务对象来暴露 node port。
* 使用服务对象来访问正在运行的应用。

<!-- lessoncontent -->

<!--
@@ -59,6 +47,7 @@ Here is the configuration file for the application Deployment:
这是应用程序部署的配置文件:

{{< codenew file="service/access/hello-application.yaml" >}}

<!--
1. Run a Hello World application in your cluster:
Create the application Deployment using the file above:
@@ -74,17 +63,23 @@ Here is the configuration file for the application Deployment:
each of which runs the Hello World application.
-->

1. 在您的集群中运行一个 Hello World 应用:
1. 在你的集群中运行一个 Hello World 应用:
使用上面的文件创建应用程序 Deployment:

```shell
kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml
```
上面的命令创建一个 [Deployment](/docs/concepts/workloads/controllers/deployment/) 对象和一个关联的 [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 对象。这个 ReplicaSet 有两个 [Pod](/docs/concepts/workloads/pods/pod/),每个 Pod 都运行着 Hello World 应用。

上面的命令创建一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 对象
和一个关联的 [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) 对象。
这个 ReplicaSet 有两个 [Pod](/zh/docs/concepts/workloads/pods/),
每个 Pod 都运行着 Hello World 应用。

<!--
1. Display information about the Deployment:
-->
1. 展示 Deployment 的信息:
2. 展示 Deployment 的信息:

```shell
kubectl get deployments hello-world
kubectl describe deployments hello-world

@@ -93,7 +88,8 @@ Here is the configuration file for the application Deployment:
<!--
1. Display information about your ReplicaSet objects:
-->
1. 展示您的 ReplicaSet 对象信息:
3. 展示你的 ReplicaSet 对象信息:

```shell
kubectl get replicasets
kubectl describe replicasets

@@ -102,7 +98,8 @@ Here is the configuration file for the application Deployment:
<!--
1. Create a Service object that exposes the deployment:
-->
1. 创建一个服务对象来暴露 deployment:
4. 创建一个服务对象来暴露 Deployment:

```shell
kubectl expose deployment hello-world --type=NodePort --name=example-service
```

@@ -110,14 +107,17 @@ Here is the configuration file for the application Deployment:
<!--
1. Display information about the Service:
-->
1. 展示服务信息:
5. 展示 Service 信息:

```shell
kubectl describe services example-service
```
<!--

<!--
The output is similar to this:
-->
-->
输出类似于:

```shell
Name: example-service
Namespace: default

@@ -133,23 +133,27 @@ Here is the configuration file for the application Deployment:
Session Affinity: None
Events: <none>
```
<!--

<!--
Make a note of the NodePort value for the service. For example,
in the preceding output, the NodePort value is 31496.
-->
-->
注意服务中的 NodePort 值。例如在上面的输出中,NodePort 是 31496。
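
If you prefer not to read the NodePort off the `describe` output by eye, it
can be extracted directly; a minimal sketch for the `example-service` created
in the previous step:

```shell
# Print only the allocated NodePort value.
kubectl get service example-service \
  -o jsonpath='{.spec.ports[0].nodePort}'
```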

<!--
1. List the pods that are running the Hello World application:
-->
1. 列出运行 Hello World 应用的 pod:
7. 列出运行 Hello World 应用的 Pod:

```shell
kubectl get pods --selector="run=load-balancer-example" --output=wide
```
<!--

<!--
The output is similar to this:
-->
-->
输出类似于:

```shell
NAME READY STATUS ... IP NODE
hello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1

@@ -171,25 +175,28 @@ Here is the configuration file for the application Deployment:

1. Use the node address and node port to access the Hello World application:
-->
1. 获取运行 Hello World 的 pod 的其中一个节点的公共 IP 地址。如何获得此地址取决于您设置集群的方式。
例如,如果您使用的是 Minikube,则可以通过运行 `kubectl cluster-info` 来查看节点地址。
如果您使用的是 Google Compute Engine 实例,则可以使用 `gcloud compute instances list` 命令查看节点的公共地址。
8. 获取运行 Hello World 的 pod 的其中一个节点的公共 IP 地址。如何获得此地址取决于你设置集群的方式。
例如,如果你使用的是 Minikube,则可以通过运行 `kubectl cluster-info` 来查看节点地址。
如果你使用的是 Google Compute Engine 实例,则可以使用 `gcloud compute instances list` 命令查看节点的公共地址。

1. 在您选择的节点上,创建一个防火墙规则以开放 node port 上的 TCP 流量。
例如,如果您的服务的 NodePort 值为 31568,请创建一个防火墙规则以允许 31568 端口上的 TCP 流量。
9. 在你选择的节点上,创建一个防火墙规则以开放节点端口上的 TCP 流量。
例如,如果你的服务的 NodePort 值为 31568,请创建一个防火墙规则以允许 31568 端口上的 TCP 流量。
不同的云提供商提供了不同方法来配置防火墙规则。

1. 使用节点地址和 node port 来访问 Hello World 应用:
10. 使用节点地址和 node port 来访问 Hello World 应用:

```shell
curl http://<public-node-ip>:<node-port>
```
<!--

<!--
where `<public-node-ip>` is the public IP address of your node,
and `<node-port>` is the NodePort value for your service. The
response to a successful request is a hello message:
-->
这里的 `<public-node-ip>` 是您节点的公共 IP 地址,`<node-port>` 是您服务的 NodePort 值。
-->
这里的 `<public-node-ip>` 是你节点的公共 IP 地址,`<node-port>` 是你服务的 NodePort 值。
对于请求成功的响应是一个 hello 消息:

```shell
Hello Kubernetes!
```
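
For the firewall rule in step 9, the exact command depends on your cloud
provider. One hedged sketch for Google Compute Engine, where the rule name
and the port 31568 are illustrative only:

```shell
# Hypothetical GCE example: allow inbound TCP traffic on the NodePort.
gcloud compute firewall-rules create test-node-port \
  --allow tcp:31568
```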

@@ -203,20 +210,19 @@ to create a Service.
-->
## 使用服务配置文件

作为 `kubectl expose` 的替代方法,您可以使用 [服务配置文件](/docs/concepts/services-networking/service/) 来创建服务。

作为 `kubectl expose` 的替代方法,你可以使用
[服务配置文件](/zh/docs/concepts/services-networking/service/) 来创建服务。
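
A minimal sketch of what such a configuration file could look like for this
tutorial's application; the selector and the ports are assumptions based on
the `run=load-balancer-example` label and the Hello World container:

```shell
# Apply an illustrative NodePort Service manifest from stdin.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    run: load-balancer-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
EOF
```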

## {{% heading "cleanup" %}}

<!--
To delete the Service, enter this command:
-->
想要删除服务,输入以下命令:

kubectl delete services example-service
```shell
kubectl delete services example-service
```

<!--
To delete the Deployment, the ReplicaSet, and the Pods that are running
@@ -224,17 +230,15 @@ the Hello World application, enter this command:
-->
想要删除运行 Hello World 应用的 Deployment、ReplicaSet 和 Pod,输入以下命令:

kubectl delete deployment hello-world

```shell
kubectl delete deployment hello-world
```
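
As an optional check (a sketch), both objects should be gone once the
commands above have completed:

```shell
# Each command is expected to report "NotFound" after cleanup.
kubectl get service example-service
kubectl get deployment hello-world
```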

## {{% heading "whatsnext" %}}

<!--
Learn more about
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
-->
学习更多关于如何 [通过服务连接应用](/docs/concepts/services-networking/connect-applications-service/)。
- 进一步了解[通过服务连接应用](/zh/docs/concepts/services-networking/connect-applications-service/)。

@@ -1,18 +1,13 @@
---
reviewers:
- bryk
- mikedanese
- rf232
title: 网页界面 (Dashboard)
title: Web 界面 (Dashboard)
content_type: concept
weight: 10
card:
  name: tasks
  weight: 30
  title: 使用网页界面 Dashboard
  title: 使用 Web 界面 Dashboard
---
<!--
---
reviewers:
- bryk
- mikedanese
@@ -24,35 +19,40 @@ card:
  name: tasks
  weight: 30
  title: Use the Web UI Dashboard
---
-->

<!-- overview -->

<!--
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.
Dashboard is a web-based Kubernetes user interface.
You can use Dashboard to deploy containerized applications to a Kubernetes cluster,
troubleshoot your containerized application, and manage the cluster resources.
You can use Dashboard to get an overview of applications running on your cluster,
as well as for creating or modifying individual Kubernetes resources
(such as Deployments, Jobs, DaemonSets, etc).
For example, you can scale a Deployment, initiate a rolling update, restart a pod
or deploy new applications using a deploy wizard.

Dashboard also provides information on the state of Kubernetes resources in your cluster and on any errors that may have occurred.
-->
Dashboard 是基于网页的 Kubernetes 用户界面。您可以使用 Dashboard 将容器应用部署到 Kubernetes 集群中,也可以对容器应用排错,还能管理集群资源。您可以使用 Dashboard 获取运行在集群中的应用的概览信息,也可以创建或者修改 Kubernetes 资源(如 Deployment,Job,DaemonSet 等等)。例如,您可以对 Deployment 实现弹性伸缩、发起滚动升级、重启 Pod 或者使用向导创建新的应用。
Dashboard 是基于网页的 Kubernetes 用户界面。
你可以使用 Dashboard 将容器应用部署到 Kubernetes 集群中,也可以对容器应用排错,还能管理集群资源。
你可以使用 Dashboard 获取运行在集群中的应用的概览信息,也可以创建或者修改 Kubernetes 资源
(如 Deployment,Job,DaemonSet 等等)。
例如,你可以对 Deployment 实现弹性伸缩、发起滚动升级、重启 Pod 或者使用向导创建新的应用。

Dashboard 同时展示了 Kubernetes 集群中的资源状态信息和所有报错信息。

![Kubernetes Dashboard UI](/images/docs/ui-dashboard.png)

<!-- body -->

<!--
## Deploying the Dashboard UI
-->
## 部署 Dashboard UI

<!--
The Dashboard UI is not deployed by default. To deploy it, run the following command:
-->
## 部署 Dashboard UI
默认情况下不会部署 Dashboard。可以通过以下命令部署:

```
@@ -61,15 +61,19 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/a

<!--
## Accessing the Dashboard UI

To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default.
Currently, Dashboard only supports logging in with a Bearer Token.
To create a token for this demo, you can follow our guide on
[creating a sample user](https://github.com/kubernetes/dashboard/wiki/Creating-sample-user).
-->
## 访问 Dashboard UI

<!--
To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on [creating a sample user](https://github.com/kubernetes/dashboard/wiki/Creating-sample-user).
-->
为了保护您的集群数据,默认情况下,Dashboard 会使用最少的 RBAC 配置进行部署。
为了保护你的集群数据,默认情况下,Dashboard 会使用最少的 RBAC 配置进行部署。
当前,Dashboard 仅支持使用 Bearer 令牌登录。
要为此样本演示创建令牌,您可以按照[创建示例用户](https://github.com/kubernetes/dashboard/wiki/Creating-sample-user)上的指南进行操作。
要为此样本演示创建令牌,你可以按照
[创建示例用户](https://github.com/kubernetes/dashboard/wiki/Creating-sample-user)
上的指南进行操作。

<!--
The sample user created in the tutorial will have administrative privileges and is for educational purposes only.
@@ -80,12 +84,12 @@ The sample user created in the tutorial will have administrative privileges and

<!--
### Command line proxy
-->
### 命令行代理
<!--
You can access Dashboard using the kubectl command-line tool by running the following command:
-->
您可以使用 kubectl 命令行工具访问 Dashboard,命令如下:
### 命令行代理

你可以使用 kubectl 命令行工具访问 Dashboard,命令如下:

```
kubectl proxy
```
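
While the proxy is running, Dashboard is usually served under the API server
proxy path; the URL below assumes the v2 manifests deployed above:

```shell
# Quick check that the proxy answers; then open the Dashboard URL in a browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
curl -s http://localhost:8001/version
```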

@@ -116,7 +120,10 @@ Kubeconfig 身份验证方法不支持外部身份提供程序或基于 x509 证
<!--
When you access Dashboard on an empty cluster, you'll see the welcome page. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/tasks/administer-cluster/namespaces/) of your cluster, for example the Dashboard itself.
-->
当访问空集群的 Dashboard 时,您会看到欢迎界面。页面包含一个指向此文档的链接,以及一个用于部署第一个应用程序的按钮。此外,您可以看到在默认情况下有哪些默认系统应用运行在 `kube-system` [命名空间](/docs/tasks/administer-cluster/namespaces/) 中,比如 Dashboard 自己。
当访问空集群的 Dashboard 时,你会看到欢迎界面。
页面包含一个指向此文档的链接,以及一个用于部署第一个应用程序的按钮。
此外,你可以看到在默认情况下有哪些默认系统应用运行在 `kube-system`
[名字空间](/zh/docs/tasks/administer-cluster/namespaces/) 中,比如 Dashboard 自己。

<!--
![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png)

@@ -125,183 +132,242 @@

<!--
## Deploying containerized applications

Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration.
-->
## 部署容器化应用

<!--
Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration.
-->
通过一个简单的部署向导,您可以使用 Dashboard 将容器化应用作为一个 Deployment 和可选的 Service 进行创建和部署。可以手工指定应用的详细配置,或者上传一个包含应用配置的 YAML 或 JSON 文件。
通过一个简单的部署向导,你可以使用 Dashboard 将容器化应用作为一个 Deployment 和可选的 Service 进行创建和部署。可以手工指定应用的详细配置,或者上传一个包含应用配置的 YAML 或 JSON 文件。

<!--
Click the **CREATE** button in the upper right corner of any page to begin.
-->
点击任何页面右上角的 **创建** 按钮以开始。
点击任何页面右上角的 **CREATE** 按钮以开始。

<!--
### Specifying application details

The deploy wizard expects that you provide the following information:
-->
### 指定应用的详细配置

<!--
The deploy wizard expects that you provide the following information:
-->
部署向导需要您提供以下信息:
部署向导需要你提供以下信息:

<!--
- **App name** (mandatory): Name for your application. A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed.
-->
- **应用名称**(必填):应用的名称。内容为`应用名称`的[标签](/docs/concepts/overview/working-with-objects/labels/) 会被添加到任何将被部署的 Deployment 和 Service。
- **应用名称**(必填):应用的名称。内容为`应用名称`的
  [标签](/zh/docs/concepts/overview/working-with-objects/labels/)
  会被添加到任何将被部署的 Deployment 和 Service。

<!--
<!--
The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
-->
在选定的 Kubernetes [命名空间](/docs/tasks/administer-cluster/namespaces/) 中,应用名称必须唯一。必须由小写字母开头,以数字或者小写字母结尾,并且只含有小写字母、数字和中划线(-)。小于等于24个字符。开头和结尾的空格会被忽略。
-->
在选定的 Kubernetes [名字空间](/zh/docs/tasks/administer-cluster/namespaces/) 中,
应用名称必须唯一。必须由小写字母开头,以数字或者小写字母结尾,
并且只含有小写字母、数字和中划线(-)。小于等于24个字符。开头和结尾的空格会被忽略。

<!--
- **Container image** (mandatory): The URL of a public Docker [container image](/docs/concepts/containers/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon.
-->
- **容器镜像**(必填):公共镜像仓库上的 Docker [容器镜像](/docs/concepts/containers/images/) 或者私有镜像仓库(通常是 Google Container Registery 或者 Docker Hub)的 URL。容器镜像参数说明必须以冒号结尾。
- **容器镜像**(必填):公共镜像仓库上的 Docker
  [容器镜像](/zh/docs/concepts/containers/images/) 或者私有镜像仓库
  (通常是 Google Container Registry 或者 Docker Hub)的 URL。容器镜像参数说明必须以冒号结尾。

<!--
- **Number of pods** (mandatory): The target number of Pods you want your application to be deployed in. The value must be a positive integer.
-->
- **pod 的数量**(必填):您希望应用程序部署的 Pod 的数量。值必须为正整数。
- **Pod 的数量**(必填):你希望应用程序部署的 Pod 的数量。值必须为正整数。

<!--
A [Deployment](/docs/concepts/workloads/controllers/deployment/) will be created to maintain the desired number of Pods across your cluster.
-->
系统会创建一个 [Deployment](/docs/concepts/workloads/controllers/deployment/) 用于保证集群中运行了期望的 Pod 数量。
<!--
A [Deployment](/docs/concepts/workloads/controllers/deployment/) will be created to
maintain the desired number of Pods across your cluster.
-->
系统会创建一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/)
以保证集群中运行期望的 Pod 数量。

<!--
- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](/docs/concepts/services-networking/service/) onto an external, maybe public IP address outside of your cluster (external Service). For external Services, you may need to open up one or more ports to do so. Find more details [here](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/).
-->
- **服务**(可选):对于部分应用(比如前端),您可能想对外暴露一个 [Service](/docs/concepts/services-networking/service/) ,这个 Service(外部 Service)可能用的是集群之外的公网 IP 地址。对于外部 Service 的情况,需要开放一个或者多个端口来满足。更多信息请参考 [这里](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/)。
- **服务**(可选):对于部分应用(比如前端),你可能想对外暴露一个
  [Service](/zh/docs/concepts/services-networking/service/) ,这个 Service
  可能用的是集群之外的公网 IP 地址(外部 Service)。
  对于外部 Service 的情况,需要开放一个或者多个端口来满足。
  更多信息请参考 [这里](/zh/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/)。

<!--
<!--
For external Services, you may need to open up one or more ports to do so.
-->
{{< note >}}
对于外部服务,你可能需要开放一个或多个端口才行。
{{< /note >}}

<!--
Other Services that are only visible from inside the cluster are called internal Services.
-->
-->
其它只能对集群内部可见的 Service 称为内部 Service。

<!--
Irrespective of the Service type, if you choose to create a Service and your container listens on a port (incoming), you need to specify two ports. The Service will be created mapping the port (incoming) to the target port seen by the container. This Service will route to your deployed Pods. Supported protocols are TCP and UDP. The internal DNS name for this Service will be the value you specified as application name above.
-->
不管哪种 Service 类型,如果您选择创建一个 Service,而且容器在一个端口上开启了监听(入向的),那么您需要定义两个端口。创建的 Service 会把(入向的)端口映射到容器可见的目标端口。该 Service 会把流量路由到您部署的 Pod。支持的协议有 TCP 和 UDP。这个 Service 的内部 DNS 解析名就是之前您定义的应用名称的值。
<!--
Irrespective of the Service type, if you choose to create a Service and your container listens
on a port (incoming), you need to specify two ports.
The Service will be created mapping the port (incoming) to the target port seen by the container.
This Service will route to your deployed Pods. Supported protocols are TCP and UDP.
The internal DNS name for this Service will be the value you specified as application name above.
-->
不管哪种 Service 类型,如果你选择创建一个 Service,而且容器在一个端口上开启了监听(入向的),
那么你需要定义两个端口。创建的 Service 会把(入向的)端口映射到容器可见的目标端口。
该 Service 会把流量路由到你部署的 Pod。支持的协议有 TCP 和 UDP。
这个 Service 的内部 DNS 解析名就是之前你定义的应用名称的值。

<!--
If needed, you can expand the **Advanced options** section where you can specify more settings:
-->
如果需要,您可以打开 **高级选项** 部分,这里您可以定义更多设置:
如果需要,你可以打开 **Advanced Options** 部分,这里你可以定义更多设置:

<!--
- **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details.
- **Description**: The text you enter here will be added as an
  [annotation](/docs/concepts/overview/working-with-objects/annotations/)
  to the Deployment and displayed in the application's details.
-->
- **描述**:这里您输入的文本会作为一个 [注解](/docs/concepts/overview/working-with-objects/annotations/) 添加到 Deployment,并显示在应用的详细信息中。
- **描述**:这里你输入的文本会作为一个
  [注解](/zh/docs/concepts/overview/working-with-objects/annotations/)
  添加到 Deployment,并显示在应用的详细信息中。

<!--
- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track.
-->
- **标签**:应用默认使用的 [标签](/docs/concepts/overview/working-with-objects/labels/) 是应用名称和版本。您可以为 Deployment、Service(如果有)定义额外的标签,比如 release(版本)、environment(环境)、tier(层级)、partition(分区) 和 release track(版本跟踪)。
- **标签**:应用默认使用的
  [标签](/zh/docs/concepts/overview/working-with-objects/labels/) 是应用名称和版本。
  你可以为 Deployment、Service(如果有)定义额外的标签,比如 release(版本)、
  environment(环境)、tier(层级)、partition(分区) 和 release track(版本跟踪)。

<!--
Example:
-->
<!-- Example: -->
例子:

```conf
release=1.0
tier=frontend
environment=pod
track=stable
```
release=1.0
tier=frontend
environment=pod
track=stable
```

<!--
- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/tasks/administer-cluster/namespaces/). They let you partition resources into logically named groups.
-->
- **命名空间**:Kubernetes 支持多个虚拟集群依附于同一个物理集群。这些虚拟集群被称为 [命名空间](/docs/tasks/administer-cluster/namespaces/),可以让您将资源划分为逻辑命名的组。
- **名字空间**:Kubernetes 支持多个虚拟集群依附于同一个物理集群。
  这些虚拟集群被称为
  [名字空间](/zh/docs/tasks/administer-cluster/namespaces/),
  可以让你将资源划分为逻辑命名的组。

<!--
Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace. The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-) but can not contain capital letters.
-->
Dashboard 通过下拉菜单提供所有可用的命名空间,并允许您创建新的命名空间。命名空间的名称最长可以包含 63 个字母或数字和中横线(-),但是不能包含大写字母。
<!--
Namespace names should not consist of only numbers. If the name is set as a number, such as 10, the pod will be put in the default namespace.
-->
命名空间的名称不能只包含数字。如果名字被设置成一个数字,比如 10,pod 就会被放到 default 命名空间。
<!--
Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace.
The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-)
but can not contain capital letters.
-->
Dashboard 通过下拉菜单提供所有可用的名字空间,并允许你创建新的名字空间。
名字空间的名称最长可以包含 63 个字母或数字和中横线(-),但是不能包含大写字母。

<!--
In case the creation of the namespace is successful, it is selected by default. If the creation fails, the first namespace is selected.
-->
在 namespace 创建成功的情况下,默认会使用新创建的命名空间。如果创建失败,那么第一个命名空间会被选中。
<!--
Namespace names should not consist of only numbers.
If the name is set as a number, such as 10, the pod will be put in the default namespace.
-->
名字空间的名称不能只包含数字。如果名字被设置成一个数字,比如 10,pod 就会被放到 default 名字空间。

<!--
In case the creation of the namespace is successful, it is selected by default.
If the creation fails, the first namespace is selected.
-->
在名字空间创建成功的情况下,默认会使用新创建的名字空间。如果创建失败,那么第一个名字空间会被选中。

<!--
- **Image Pull Secret**: In case the specified Docker container image is private, it may require [pull secret](/docs/concepts/configuration/secret/) credentials.
-->
- **镜像拉取 Secret**:如果要使用私有的 Docker 容器镜像,需要拉取 [secret](/docs/concepts/configuration/secret/) 凭证。
- **镜像拉取 Secret**:如果要使用私有的 Docker 容器镜像,需要拉取
  [Secret](/zh/docs/concepts/configuration/secret/) 凭证。

<!--
Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters.
-->
Dashboard 通过下拉菜单提供所有可用的 secret,并允许您创建新的 secret。secret 名称必须遵循 DNS 域名语法,比如 `new.image-pull.secret`。secret 的内容必须是 base64 编码的,并且在一个 [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) 文件中声明。secret 名称最大可以包含 253 个字符。
<!--
Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret.
The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`.
The content of a secret must be base64-encoded and specified in a
[`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file.
The secret name may consist of a maximum of 253 characters.
-->
Dashboard 通过下拉菜单提供所有可用的 Secret,并允许你创建新的 Secret。
Secret 名称必须遵循 DNS 域名语法,比如 `new.image-pull.secret`。
Secret 的内容必须是 base64 编码的,并且在一个
[`.dockercfg`](/zh/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
文件中声明。Secret 名称最大可以包含 253 个字符。

<!--
In case the creation of the image pull secret is successful, it is selected by default. If the creation fails, no secret is applied.
-->
在镜像拉取 secret 创建成功的情况下,默认会使用新创建的 secret。如果创建失败,则不会使用任何 secret。
<!--
In case the creation of the image pull secret is successful, it is selected by default.
If the creation fails, no secret is applied.
-->
在镜像拉取 Secret 创建成功的情况下,默认会使用新创建的 Secret。
如果创建失败,则不会使用任何 Secret。

<!--
- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/tasks/configure-pod-container/limit-range/) for the container. By default, Pods run with unbounded CPU and memory limits.
-->
- **CPU 需求(核数)**和**内存需求(MiB)**:您可以为容器定义最小的 [资源限制](/docs/tasks/configure-pod-container/limit-range/)。默认情况下,Pod 没有 CPU 和内存限制。
- **CPU 需求(核数)**和**内存需求(MiB)**:你可以为容器定义最小的
  [资源限制](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。
  默认情况下,Pod 没有 CPU 和内存限制。

<!--
- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.
-->
- **运行命令**和**运行命令参数**:默认情况下,您的容器会运行 Docker 镜像的默认 [入口命令](/docs/user-guide/containers/#containers-and-commands)。您可以使用 command 选项覆盖默认值。
- **运行命令**和**运行命令参数**:默认情况下,你的容器会运行 Docker 镜像的默认
  [入口命令](/zh/docs/tasks/inject-data-application/define-command-argument-container/)。
  你可以使用 command 选项覆盖默认值。

<!--
- **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
-->
- **以特权运行**:这个设置决定了在 [特权容器](/docs/user-guide/pods/#privileged-mode-for-pod-containers) 中运行的进程是否像主机中使用 root 运行的进程一样。特权容器可以使用诸如操纵网络堆栈和访问设备的功能。
- **以特权模式运行**:这个设置决定了在
  [特权容器](/zh/docs/concepts/workloads/pods/#privileged-mode-for-containers)
  中运行的进程是否像主机中使用 root 运行的进程一样。
  特权容器可以使用诸如操纵网络堆栈和访问设备的功能。

<!--
- **Environment variables**: Kubernetes exposes Services through [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). You can compose environment variable or pass arguments to your commands using the values of environment variables. They can be used in applications to find a Service. Values can reference other variables using the `$(VAR_NAME)` syntax.
-->
- **环境变量**:Kubernetes 通过 [环境变量](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) 暴露 Service。您可以构建环境变量,或者将环境变量的值作为参数传递给您的命令。它们可以被应用用于查找 Service。值可以通过 `$(VAR_NAME)` 语法关联其他变量。
- **环境变量**:Kubernetes 通过
  [环境变量](/zh/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
  暴露 Service。你可以构建环境变量,或者将环境变量的值作为参数传递给你的命令。
  它们可以被应用用于查找 Service。值可以通过 `$(VAR_NAME)` 语法关联其他变量。

<!--
### Uploading a YAML or JSON file

Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas.
-->
### 上传 YAML 或者 JSON 文件

<!--
Kubernetes supports declarative configuration. In this style, all configuration is stored in YAML or JSON configuration files using the Kubernetes [API](/docs/concepts/overview/kubernetes-api/) resource schemas.
-->
Kubernetes 支持声明式配置。所有的配置都存储在遵循 Kubernetes [API](/docs/concepts/overview/kubernetes-api/) 架构的 YAML 或者 JSON 配置文件中。
Kubernetes 支持声明式配置。所有的配置都存储在遵循 Kubernetes
[API](/zh/docs/concepts/overview/kubernetes-api/) 规范的 YAML 或者 JSON 配置文件中。

<!--
As an alternative to specifying application details in the deploy wizard, you can define your application in YAML or JSON files, and upload the files using Dashboard:
-->
作为一种替代在部署向导中指定应用详情的方式,您可以在 YAML 或者 JSON 文件中定义应用,并且使用 Dashboard 上传文件:
作为一种替代在部署向导中指定应用详情的方式,你可以在 YAML 或者 JSON 文件中定义应用,并且使用 Dashboard 上传文件:

<!--
## Using Dashboard
-->
## 使用 Dashboard
<!--
Following sections describe views of the Kubernetes Dashboard UI; what they provide and how can they be used.
-->
## 使用 Dashboard

以下各节描述了 Kubernetes Dashboard UI 视图;包括它们提供的内容,以及怎么使用它们。

<!--
### Navigation
-->
### 导航栏

<!--
When there are Kubernetes objects defined in the cluster, Dashboard shows them in the initial view. By default only objects from the _default_ namespace are shown and this can be changed using the namespace selector located in the navigation menu.
-->
当在集群中定义 Kubernetes 对象时,Dashboard 会在初始视图中显示它们。默认情况下只会显示 _默认_ 命名空间中的对象,可以通过更改导航栏菜单中的命名空间筛选器进行改变。
### 导航

当在集群中定义 Kubernetes 对象时,Dashboard 会在初始视图中显示它们。
默认情况下只会显示 _默认_ 名字空间中的对象,可以通过更改导航栏菜单中的名字空间筛选器进行改变。

<!--
Dashboard shows most Kubernetes object kinds and groups them in a few menu categories.
@@ -310,58 +376,66 @@ Dashboard 展示大部分 Kubernetes 对象,并将它们分组放在几个菜

<!--
#### Admin Overview

For cluster and namespace administrators, Dashboard lists Nodes, Namespaces and Persistent Volumes and has detail views for them. Node list view contains CPU and memory usage metrics aggregated across all Nodes. The details view shows the metrics for a Node, its specification, status, allocated resources, events and pods running on the node.
-->
#### 管理概述

<!--
For cluster and namespace administrators, Dashboard lists Nodes, Namespaces and Persistent Volumes and has detail views for them. Node list view contains CPU and memory usage metrics aggregated across all Nodes. The details view shows the metrics for a Node, its specification, status, allocated resources, events and pods running on the node.
-->
集群和命名空间管理的视图, Dashboard 会列出节点、命名空间和持久卷,并且有它们的详细视图。节点列表视图包含从所有节点聚合的 CPU 和内存使用的度量值。详细信息视图显示了一个节点的度量值,它的规格、状态、分配的资源、事件和这个节点上运行的 Pod。
集群和名字空间管理的视图, Dashboard 会列出节点、名字空间和持久卷,并且有它们的详细视图。
节点列表视图包含从所有节点聚合的 CPU 和内存使用的度量值。
详细信息视图显示了一个节点的度量值,它的规格、状态、分配的资源、事件和这个节点上运行的 Pod。

<!--
#### Workloads
Shows all applications running in the selected namespace. The view lists applications by workload kind (e.g., Deployments, Replica Sets, Stateful Sets, etc.) and each workload kind can be viewed separately. The lists summarize actionable information about the workloads, such as the number of ready pods for a Replica Set or current memory usage for a Pod.
-->
#### 负载
显示选中的命名空间中所有运行的应用。视图按照负载类型(如 Deployment、ReplicaSet、StatefulSet 等)罗列应用,并且每种负载都可以单独查看。列表总结了关于负载的可执行信息,比如一个 ReplicaSet 的准备状态的 Pod 数量,或者目前一个 Pod 的内存使用量。

显示选中的名字空间中所有运行的应用。
视图按照负载类型(如 Deployment、ReplicaSet、StatefulSet 等)罗列应用,并且每种负载都可以单独查看。
列表总结了关于负载的可执行信息,比如一个 ReplicaSet 的准备状态的 Pod 数量,或者目前一个 Pod 的内存使用量。

<!--
Detail views for workloads show status and specification information and surface relationships between objects. For example, Pods that Replica Set is controlling or New Replica Sets and Horizontal Pod Autoscalers for Deployments.
-->
工作负载的详情视图展示了对象的状态、详细信息和相互关系。例如,ReplicaSet 所控制的 Pod,或者 Deployment 关联的 新 ReplicaSet 和 Pod 水平扩展控制器。
工作负载的详情视图展示了对象的状态、详细信息和相互关系。
例如,ReplicaSet 所控制的 Pod,或者 Deployment 关联的新 ReplicaSet 和 Pod 水平扩展控制器。

<!--
#### Services
Shows Kubernetes resources that allow for exposing services to external world and discovering them within a cluster. For that reason, Service and Ingress views show Pods targeted by them, internal endpoints for cluster connections and external endpoints for external users.
-->
#### 服务
展示允许暴露给外网服务和允许集群内部发现的 Kubernetes 资源。因此,Service 和 Ingress 视图展示他们关联的 Pod、给集群连接使用的内部端点和给外部用户使用的外部端点。

展示允许暴露给外网服务和允许集群内部发现的 Kubernetes 资源。
因此,Service 和 Ingress 视图展示他们关联的 Pod、给集群连接使用的内部端点和给外部用户使用的外部端点。

<!--
#### Storage
-->
#### 存储
<!--
Storage view shows Persistent Volume Claim resources which are used by applications for storing data.
-->
#### 存储

存储视图展示持久卷申领(PVC)资源,这些资源被应用程序用来存储数据。

<!--
#### Config Maps and Secrets
-->
#### 配置
<!--
Shows all Kubernetes resources that are used for live configuration of applications running in clusters. The view allows for editing and managing config objects and displays secrets hidden by default.
-->
展示的所有 Kubernetes 资源是在集群中运行的应用程序的实时配置。通过这个视图可以编辑和管理配置对象,并显示那些默认隐藏的 secret。
#### ConfigMap 和 Secret

展示的所有 Kubernetes 资源是在集群中运行的应用程序的实时配置。
通过这个视图可以编辑和管理配置对象,并显示那些默认隐藏的 secret。

<!--
#### Logs viewer
-->
#### 日志查看器
<!--
Pod lists and detail pages link to logs viewer that is built into Dashboard. The viewer allows for drilling down logs from containers belonging to a single Pod.
-->
#### 日志查看器

Pod 列表和详细信息页面可以链接到 Dashboard 内置的日志查看器。查看器可以钻取属于同一个 Pod 的不同容器的日志。

<!--
@@ -369,16 +443,11 @@ Pod 列表和详细信息页面可以链接到 Dashboard 内置的日志查看
-->
![Logs viewer](/images/docs/ui-dashboard-logs-view.png)

## {{% heading "whatsnext" %}}

<!--
For more information, see the
[Kubernetes Dashboard project page](https://github.com/kubernetes/dashboard).
-->
更多信息,参见
[Kubernetes Dashboard 项目页面](https://github.com/kubernetes/dashboard).

更多信息,参见 [Kubernetes Dashboard 项目页面](https://github.com/kubernetes/dashboard).

@ -1,25 +1,26 @@
|
|||
---
|
||||
reviewers:
|
||||
- sig-cluster-lifecycle
|
||||
title: 升级 kubeadm 集群
|
||||
content_type: task
|
||||
weight: 20
|
||||
min-kubernetes-server-version: 1.18
|
||||
---
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- sig-cluster-lifecycle
|
||||
title: Upgrading kubeadm clusters
|
||||
content_type: task
|
||||
---
|
||||
weight: 20
|
||||
min-kubernetes-server-version: 1.18
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
|
||||
1.16.x to version 1.17.x, and from version 1.17.x to 1.17.y (where `y > x`).
|
||||
1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`).
|
||||
-->
|
||||
本页介绍了如何将 `kubeadm` 创建的 Kubernetes 集群从 1.16.x 版本升级到 1.17.x 版本,以及从版本 1.17.x 升级到 1.17.y ,其中 `y > x`。
|
||||
本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.17.x 版本升级到 1.18.x 版本,
|
||||
或者从版本 1.18.x 升级到 1.18.y ,其中 `y > x`。
|
||||
|
||||
<!--
|
||||
To see information about upgrading clusters created using older versions of kubeadm,
|
||||
|
@ -47,79 +48,68 @@ The upgrade workflow at high level is the following:
|
|||
-->
|
||||
升级工作的基本流程如下:
|
||||
|
||||
1. 升级主控制平面节点。
|
||||
1. 升级其他控制平面节点。
|
||||
1. 升级工作节点。
|
||||
1. 升级主控制平面节点
|
||||
1. 升级其他控制平面节点
|
||||
1. 升级工作节点
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
<!--
|
||||
- You need to have a kubeadm Kubernetes cluster running version 1.16.0 or later.
|
||||
- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later.
|
||||
- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
|
||||
- The cluster should use a static control plane and etcd pods or external etcd.
|
||||
- Make sure you read the [release notes]({{< latest-release-notes >}}) carefully.
|
||||
- Make sure to back up any important components, such as app-level state stored in a database.
|
||||
`kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
|
||||
-->
|
||||
- 您需要有一个由 `kubeadm` 创建并运行着 1.16.0 或更高版本的 Kubernetes 集群。
|
||||
- [禁用 Swap](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)。
|
||||
- 集群应使用静态的控制平面和 etcd pod 或者 外部 etcd。
|
||||
- 你需要有一个由 `kubeadm` 创建并运行着 1.17.0 或更高版本的 Kubernetes 集群。
|
||||
- [禁用交换分区](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)。
|
||||
- 集群应使用静态的控制平面和 etcd Pod 或者 外部 etcd。
|
||||
- 务必仔细认真阅读[发行说明]({{< latest-release-notes >}})。
|
||||
- 务必备份所有重要组件,例如存储在数据库中应用层面的状态。
|
||||
`kubeadm upgrade` 不会影响您的工作负载,只会涉及 Kubernetes 内部的组件,但备份终究是好的。
|
||||
`kubeadm upgrade` 不会影响你的工作负载,只会涉及 Kubernetes 内部的组件,但备份终究是好的。

<!--
### Additional information
-->
### 附加信息

<!--
- All containers are restarted after upgrade, because the container spec hash value is changed.
- You only can upgrade from one MINOR version to the next MINOR version,
  or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade.
  For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
-->
- 升级后,因为容器 spec 哈希值已更改,所以所有容器都会重新启动。
- 您只能从一个次版本升级到下一个次版本,或者同样次版本的补丁版。也就是说,升级时无法跳过版本。
  例如,您只能从 1.y 升级到 1.y+1,而不能从 from 1.y 升级到 1.y+2。
### 附加信息

- 升级后,因为容器规约的哈希值已更改,所有容器都会被重新启动。
- 你只能从一个次版本升级到下一个次版本,或者在次版本相同时升级补丁版本。
  也就是说,升级时不可以跳过次版本。
  例如,你只能从 1.y 升级到 1.y+1,而不能从 1.y 升级到 1.y+2。

<!-- steps -->

<!--
## Determine which version to upgrade to

Find the latest stable 1.18 version:
-->
## 确定要升级到哪个版本

<!--
Find the latest stable 1.18 version:

{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
apt update
apt-cache policy kubeadm
# find the latest 1.18 version in the list
# it should look like 1.18.x-00, where x is the latest patch
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.18 version in the list
# it should look like 1.18.x-0, where x is the latest patch
{{% /tab %}}
{{< /tabs >}}
-->
找到最新的稳定版 1.18:

{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
apt update
apt-cache policy kubeadm
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-00 ,其中 x 是最新的补丁
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```
apt update
apt-cache policy kubeadm
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-00,其中 x 是最新的补丁
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-0 ,其中 x 是最新的补丁版本
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-0,其中 x 是最新的补丁版本
```
{{% /tab %}}
{{< /tabs >}}

@@ -134,32 +124,23 @@ Find the latest stable 1.18 version:

<!--
- On your first control plane node, upgrade kubeadm:

{{< tabs name="k8s_install_kubeadm_first_cp" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in 1.18.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in 1.18.x-0 with the latest patch version
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
-->
- 在第一个控制平面节点上,升级 kubeadm:

{{< tabs name="k8s_install_kubeadm_first_cp" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# 用最新的修补程序版本替换 1.18.x-00 中的 x
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 用最新的修补程序版本替换 1.18.x-00 中的 x
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# 用最新的修补程序版本替换 1.18.x-0 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 用最新的修补程序版本替换 1.18.x-0 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}

@@ -202,7 +183,7 @@
<!--
You should see output similar to this:
-->
您应该可以看到与下面类似的输出:
你应该可以看到与下面类似的输出:

```none
[upgrade/config] Making sure the configuration is correct:

@@ -240,18 +221,17 @@
<!--
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
-->
此命令检查您的集群是否可以升级,并可以获取到升级的版本。
此命令检查你的集群是否可以升级,并可以获取到升级的版本。
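
这里所说的升级预检命令,形式大致如下(在控制平面节点上运行):

```shell
sudo kubeadm upgrade plan
```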

<!--
`kubeadm upgrade` also automatically renews the certificates that it manages on this node.
To opt-out of certificate renewal the flag `--certificate-renewal=false` can be used.
For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs).
-->

{{< note >}}
`kubeadm upgrade` 也会自动对它在此节点上管理的证书进行续约。
如果选择不对证书进行续约,可以使用标志 `--certificate-renewal=false`。
关于更多细节信息,可参见[证书管理指南](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)。
关于更多细节信息,可参见[证书管理指南](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)。
{{</ note >}}
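
例如,跳过证书续约的调用方式大致如下(其中 v1.18.x 为占位的目标版本,请按实际情况替换):

```shell
sudo kubeadm upgrade apply v1.18.x --certificate-renewal=false
```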

<!--
@@ -272,7 +252,7 @@ For more information see the [certificate management guide](/docs/tasks/administ
<!--
You should see output similar to this:
-->
您应该可以看见与下面类似的输出:
你应该可以看见与下面类似的输出:

```none
[upgrade/config] Making sure the configuration is correct:

@@ -364,8 +344,9 @@ For more information see the [certificate management guide](/docs/tasks/administ
-->
- 手动升级你的 CNI 驱动插件。

  您的容器网络接口(CNI)驱动应该提供了程序自身的升级说明。
  检查[插件](/docs/concepts/cluster-administration/addons/)页面查找您 CNI 所提供的程序,并查看是否需要其他升级步骤。
  你的容器网络接口(CNI)驱动应该提供了程序自身的升级说明。
  参阅[插件](/zh/docs/concepts/cluster-administration/addons/)页面查找你 CNI 所提供的程序,
  并查看是否需要其他升级步骤。

  如果 CNI 提供程序作为 DaemonSet 运行,则在其他控制平面节点上不需要此步骤。
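
例如,可以用类似下面的命令确认 CNI 组件是否以 DaemonSet 方式运行(输出因所用 CNI 而异):

```shell
kubectl get daemonset -n kube-system
```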

@@ -414,18 +395,27 @@ sudo kubeadm upgrade apply

{{< tabs name="k8s_install_kubelet" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
# 用最新的补丁版本替换 1.18.x-00 中的 x
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-mark hold kubelet kubectl
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
```shell
# 用最新的补丁版本替换 1.18.x-00 中的 x
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-mark hold kubelet kubectl
```

从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:

```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
# 用最新的补丁版本替换 1.18.x-00 中的 x
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes

用最新的补丁版本替换 1.18.x-0 中的 x

```shell
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}

@@ -456,36 +446,33 @@ without compromising the minimum required capacity for running your workloads.

<!--
- Upgrade kubeadm on all worker nodes:

{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in 1.18.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in 1.18.x-0 with the latest patch version
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
-->
- 在所有工作节点升级 kubeadm:

{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00

```shell
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm
```

从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:

```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
```

{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
# 用最新的补丁版本替换 1.18.x-0 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes

```shell
# 用最新的补丁版本替换 1.18.x-0 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}

@@ -557,18 +544,29 @@ without compromising the minimum required capacity for running your workloads.

{{< tabs name="k8s_kubelet_and_kubectl" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-mark hold kubelet kubectl
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00

```shell
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-mark hold kubelet kubectl
```

从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:

```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
```

{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes

```shell
# 将 1.18.x-0 中的 x 替换为最新的补丁版本
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
```

{{% /tab %}}
{{< /tabs >}}

@@ -601,10 +599,10 @@ without compromising the minimum required capacity for running your workloads.
-->
- 通过将节点标记为可调度,让节点重新上线:

```shell
# 将 <node-to-drain> 替换为当前节点的名称
kubectl uncordon <node-to-drain>
```
```shell
# 将 <node-to-drain> 替换为当前节点的名称
kubectl uncordon <node-to-drain>
```

<!--
## Verify the status of the cluster

@@ -638,9 +636,9 @@ To recover from a bad state, you can also run `kubeadm upgrade --force` without
-->
## 从故障状态恢复

如果 `kubeadm upgrade` 失败并且没有回滚,例如由于执行期间意外关闭,您可以再次运行 `kubeadm upgrade`。
此命令是幂等的,并最终确保实际状态是您声明的所需状态。
要从故障状态恢复,您还可以运行 `kubeadm upgrade --force` 而不去更改集群正在运行的版本。
如果 `kubeadm upgrade` 失败并且没有回滚,例如由于执行期间意外关闭,你可以再次运行 `kubeadm upgrade`。
此命令是幂等的,并最终确保实际状态是你声明的所需状态。
要从故障状态恢复,你还可以运行 `kubeadm upgrade --force` 而不去更改集群正在运行的版本。
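
例如(v1.18.x 为占位的目标版本,请按集群当前运行的版本替换):

```shell
sudo kubeadm upgrade apply v1.18.x --force
```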

<!--
During upgrade kubeadm writes the following backup folders under `/etc/kubernetes/tmp`:

@@ -690,7 +688,7 @@ and post-upgrade manifest file for a certain component, a backup file for it wil

`kubeadm upgrade apply` 做了以下工作:

- 检查您的集群是否处于可升级状态:
- 检查你的集群是否处于可升级状态:
  - API 服务器是可访问的
  - 所有节点处于 `Ready` 状态
  - 控制面是健康的

@@ -1,53 +1,38 @@
---
reviewers:
- caseydavenport
title: 使用 Calico 作为 NetworkPolicy
title: 使用 Calico 提供 NetworkPolicy
content_type: task
weight: 10
---

<!-- overview -->
<!-- This page shows a couple of quick ways to create a Calico cluster on Kubernetes. -->
本页展示了两种在 Kubernetes 上快速创建 Calico 集群的方法。

<!--
This page shows a couple of quick ways to create a Calico cluster on Kubernetes.
-->
本页展示了几种在 Kubernetes 上快速创建 Calico 集群的方法。

## {{% heading "prerequisites" %}}

<!-- Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-google-kubernetes-engine-gke) or [local](#creating-a-local-calico-cluster-with-kubeadm) cluster. -->

决定您想部署一个[云](#在-Google-Kubernetes-Engine-GKE-上创建一个-Calico-集群) 还是 [本地](#使用-kubeadm-创建一个本地-Calico-集群) 集群。

<!--
Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-google-kubernetes-engine-gke) or [local](#creating-a-local-calico-cluster-with-kubeadm) cluster.
-->
确定你想部署一个[云版本](#gke-cluster)还是[本地版本](#local-cluster)的集群。

<!-- steps -->
<!-- ## Creating a Calico cluster with Google Kubernetes Engine (GKE)

<!--
## Creating a Calico cluster with Google Kubernetes Engine (GKE)

**Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts).

1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag.

**Syntax**
```shell
gcloud container clusters create [CLUSTER_NAME] --enable-network-policy
```

**Example**
```shell
gcloud container clusters create my-calico-cluster --enable-network-policy
```

1. To verify the deployment, use the following command.

```shell
kubectl get pods --namespace=kube-system
```

The Calico pods begin with `calico`. Check to make sure each one has a status of `Running`.
-->
## 在 Google Kubernetes Engine (GKE) 上创建一个 Calico 集群
-->
## 在 Google Kubernetes Engine (GKE) 上创建一个 Calico 集群 {#gke-cluster}

**先决条件**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts)

1. 启动一个带有 Calico 的 GKE 集群,只需加上flag `--enable-network-policy`。
<!--
1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag.
-->
1. 启动一个带有 Calico 的 GKE 集群,只需加上参数 `--enable-network-policy`。

**语法**
```shell
@@ -59,33 +44,36 @@ weight: 10
gcloud container clusters create my-calico-cluster --enable-network-policy
```

1. 使用如下命令验证部署是否正确。
<!--
1. To verify the deployment, use the following command.
-->
2. 使用如下命令验证部署是否正确。

```shell
kubectl get pods --namespace=kube-system
```

<!--
The Calico pods begin with `calico`. Check to make sure each one has a status of `Running`.
-->
Calico 的 Pod 名以 `calico` 打头,检查确认每个 Pod 的状态均为 `Running`。
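
例如,可以用下面的方式过滤查看(命令中的 grep 过滤仅为示意):

```shell
kubectl get pods --namespace=kube-system | grep calico
```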

<!--

<!--
## Creating a local Calico cluster with kubeadm

To get a local single-host Calico cluster in fifteen minutes using kubeadm, refer to the
[Calico Quickstart](https://docs.projectcalico.org/latest/getting-started/kubernetes/).

-->
## 使用 kubeadm 创建一个本地 Calico 集群 {#local-cluster}

## 使用 kubeadm 创建一个本地 Calico 集群

在15分钟内使用 kubeadm 得到一个本地单主机 Calico 集群,请参考
使用 kubeadm 在 15 分钟内得到一个本地单主机 Calico 集群,请参考
[Calico 快速入门](https://docs.projectcalico.org/latest/getting-started/kubernetes/)。

## {{% heading "whatsnext" %}}

<!-- Once your cluster is running, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. -->
集群运行后,您可以按照 [声明 Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) 去尝试使用 Kubernetes NetworkPolicy。
<!--
Once your cluster is running, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy.
-->
集群运行后,你可以按照[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
去尝试使用 Kubernetes NetworkPolicy。

@@ -1,30 +1,25 @@
---
reviewers:
- danwent
title: 使用 Cilium 作为 NetworkPolicy
title: 使用 Cilium 提供 NetworkPolicy
content_type: task
weight: 20
---

<!-- overview -->
<!-- This page shows how to use Cilium for NetworkPolicy.
<!--
This page shows how to use Cilium for NetworkPolicy.

For background on Cilium, read the [Introduction to Cilium](https://cilium.readthedocs.io/en/latest/intro). -->

本页展示了如何使用 Cilium 作为 NetworkPolicy。
For background on Cilium, read the [Introduction to Cilium](https://cilium.readthedocs.io/en/latest/intro).
-->
本页展示如何使用 Cilium 提供 NetworkPolicy。

关于 Cilium 的背景知识,请阅读 [Cilium 介绍](https://cilium.readthedocs.io/en/latest/intro)。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!--
## Deploying Cilium on Minikube for Basic Testing

@@ -32,32 +27,68 @@ To get familiar with Cilium easily you can follow the
[Cilium Kubernetes Getting Started Guide](https://docs.cilium.io/en/latest/gettingstarted/minikube/)
to perform a basic DaemonSet installation of Cilium in minikube.

Installation in a minikube setup uses a simple ''all-in-one'' YAML
file that includes DaemonSet configurations for Cilium, to connect
to the minikube's etcd instance as well as appropriate RBAC settings:
-->

To start minikube, minimal version required is >= v1.3.1, run it with the
following arguments:
-->
## 在 Minikube 上部署 Cilium 用于基本测试

为了轻松熟悉 Cilium 您可以根据[Cilium Kubernetes 入门指南](https://docs.cilium.io/en/latest/gettingstarted/minikube/)在 minikube 中执行一个 cilium 的基本的 DaemonSet 安装。
为了轻松熟悉 Cilium,你可以根据
[Cilium Kubernetes 入门指南](https://docs.cilium.io/en/latest/gettingstarted/minikube/)
在 minikube 中执行一个 Cilium 的基本 DaemonSet 安装。

在 minikube 中的安装配置使用一个简单的“一体化” YAML 文件,包括了 Cilium 的 DaemonSet 配置,连接 minikube 的 etcd 实例,以及适当的 RBAC 设置。
要启动 minikube,需要的最低版本为 1.3.1,使用下面的参数运行:

```shell
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/cilium.yaml
configmap "cilium-config" created
secret "cilium-etcd-secrets" created
serviceaccount "cilium" created
clusterrolebinding "cilium" created
daemonset "cilium" created
clusterrole "cilium" created
minikube version
```
```
minikube version: v1.3.1
```

```shell
minikube start --network-plugin=cni --memory=4096
```

<!--
Mount the BPF filesystem:
-->
挂载 BPF 文件系统:

```shell
minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
```

<!--
For minikube you can deploy this simple ''all-in-one'' YAML file that includes
DaemonSet configurations for Cilium as well as appropriate RBAC settings:
-->
在 minikube 环境中,你可以部署下面的“一体化” YAML 文件,其中包含 Cilium
的 DaemonSet 配置以及适当的 RBAC 配置:

```shell
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/cilium.yaml
```

```
configmap/cilium-config created
serviceaccount/cilium created
serviceaccount/cilium-operator created
clusterrole.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium-operator created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
daemonset.apps/cilium created
deployment.apps/cilium-operator created
```

<!--
The remainder of the Getting Started Guide explains how to enforce both L3/L4
(i.e., IP address + port) security policies, as well as L7 (e.g., HTTP) security
policies using an example application.
-->
入门指南其余的部分用一个示例应用说明了如何强制执行L3/L4(即 IP 地址+端口)的安全策略以及L7 (如 HTTP)的安全策略。
入门指南其余的部分用一个示例应用说明了如何强制执行 L3/L4(即 IP 地址+端口)的安全策略
以及 L7(如 HTTP)的安全策略。
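
作为补充,下面给出一个使用标准 Kubernetes NetworkPolicy API 编写的最小 L3/L4 策略草例(其中的名字、标签和端口都是假设的示意值,并非 Cilium 入门指南中的原始示例):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend  # 假设的策略名
spec:
  podSelector:
    matchLabels:
      app: backend                 # 假设的目标 Pod 标签
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # 假设的来源 Pod 标签
    ports:
    - protocol: TCP
      port: 80                     # 仅放行 TCP 80(L4)
```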

<!--
## Deploying Cilium for Production Use

@@ -67,14 +98,14 @@ For detailed instructions around deploying Cilium for production, see:
This documentation includes detailed requirements, instructions and example
production DaemonSet files.
-->

## 部署 Cilium 用于生产用途

关于部署 Cilium 用于生产的详细说明,请见[Cilium Kubernetes 安装指南](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation)
,此文档包括详细的需求、说明和生产用途 DaemonSet 文件示例。

关于部署 Cilium 用于生产的详细说明,请见
[Cilium Kubernetes 安装指南](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation),
此文档包括详细的需求、说明和生产用途 DaemonSet 文件示例。

<!-- discussion -->

<!--
## Understanding Cilium components

@@ -83,53 +114,40 @@ this list of Pods run:
-->
## 了解 Cilium 组件

部署使用 Cilium 的集群会添加 Pods 到`kube-system`命名空间。 要查看此Pod列表,运行:
部署使用 Cilium 的集群会添加 Pods 到 `kube-system` 命名空间。要查看 Pod 列表,运行:

```shell
kubectl get pods --namespace=kube-system
```

<!-- You'll see a list of Pods similar to this: -->
您将看到像这样的 Pods 列表:
你将看到像这样的 Pods 列表:

```console
NAME           DESIRED   CURRENT   READY   NODE-SELECTOR   AGE
cilium         1         1         1       <none>          2m
NAME           READY   STATUS    RESTARTS   AGE
cilium-6rxbd   1/1     Running   0          1m
...
```

<!--
There are two main components to be aware of:

- One `cilium` Pod runs on each node in your cluster and enforces network policy
A `cilium` Pod runs on each node in your cluster and enforces network policy
on the traffic to/from Pods on that node using Linux BPF.
- For production deployments, Cilium should leverage the key-value store cluster
(e.g., etcd) used by Kubernetes, which typically runs on the Kubernetes master nodes.
The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation)
includes an example DaemonSet which can be customized to point to this key-value
store cluster. The simple ''all-in-one'' DaemonSet for minikube requires no such
configuration because it automatically connects to the minikube's etcd instance.
-->
有两个主要组件需要注意:

- 在集群中的每个节点上都会运行一个 `cilium` Pod,并利用Linux BPF执行网络策略管理该节点上进出 Pod 的流量。
- 对于生产部署,Cilium 应该复用 Kubernetes 所使用的键值存储集群(如 etcd),其通常在Kubernetes 的 master 节点上运行。
[Cilium Kubernetes安装指南](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation)
包括了一个示例 DaemonSet,可以自定义指定此键值存储集群。
简单的 minikube 的“一体化” DaemonSet 不需要这样的配置,因为它会自动连接到 minikube 的 etcd 实例。

-->
你的集群中的每个节点上都会运行一个 `cilium` Pod,通过使用 Linux BPF
针对该节点上的 Pod 的入站、出站流量实施网络策略控制。
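
例如,可以用下面的命令查看各节点上的 cilium Pod(grep 过滤仅为示意):

```shell
kubectl get pods -n kube-system -o wide | grep cilium
```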

## {{% heading "whatsnext" %}}

<!-- Once your cluster is running, you can follow the
<!--
Once your cluster is running, you can follow the
[Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
to try out Kubernetes NetworkPolicy with Cilium.
Have fun, and if you have questions, contact us using the
[Cilium Slack Channel](https://cilium.herokuapp.com/). -->
群集运行后,您可以按照[声明网络策略](/docs/tasks/administer-cluster/declare-network-policy/)
用 Cilium 试用 Kubernetes NetworkPolicy。
玩得开心,如果您有任何疑问,请联系我们
[Cilium Slack Channel](https://cilium.herokuapp.com/)。

[Cilium Slack Channel](https://cilium.herokuapp.com/).
-->
集群运行后,你可以按照
[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
试用基于 Cilium 的 Kubernetes NetworkPolicy。
玩得开心,如果你有任何疑问,请到 [Cilium Slack 频道](https://cilium.herokuapp.com/)
联系我们。

@@ -1,33 +1,42 @@
---
reviewers:
- murali-reddy
title: 使用 Kube-router 作为 NetworkPolicy
title: 使用 kube-router 提供 NetworkPolicy
content_type: task
weight: 30
---

<!-- overview -->
<!-- This page shows how to use [Kube-router](https://github.com/cloudnativelabs/kube-router) for NetworkPolicy. -->
本页展示了如何使用 [Kube-router](https://github.com/cloudnativelabs/kube-router) 作为 NetworkPolicy。
<!--
This page shows how to use [Kube-router](https://github.com/cloudnativelabs/kube-router) for NetworkPolicy.
-->
本页展示如何使用 [Kube-router](https://github.com/cloudnativelabs/kube-router) 提供 NetworkPolicy。

## {{% heading "prerequisites" %}}

<!-- You need to have a Kubernetes cluster running. If you do not already have a cluster, you can create one by using any of the cluster installers like Kops, Bootkube, Kubeadm etc. -->

您需要拥有一个正在运行的 Kubernetes 集群。如果您还没有集群,可以使用任意的集群安装器如 Kops,Bootkube,Kubeadm 等创建一个。

<!--
You need to have a Kubernetes cluster running. If you do not already have a cluster, you can create one by using any of the cluster installers like Kops, Bootkube, Kubeadm etc.
-->
你需要拥有一个运行中的 Kubernetes 集群。如果你还没有集群,可以使用任意的集群
安装程序如 Kops、Bootkube、Kubeadm 等创建一个。

<!-- steps -->
<!-- ## Installing Kube-router addon
The Kube-router Addon comes with a Network Policy Controller that watches Kubernetes API server for any NetworkPolicy and pods updated and configures iptables rules and ipsets to allow or block traffic as directed by the policies. Please follow the [trying Kube-router with cluster installers](https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers) guide to install Kube-router addon. -->
<!--
## Installing Kube-router addon

## 安装 Kube-router 插件
Kube-router 插件自带一个Network Policy 控制器,监视来自于Kubernetes API server 的 NetworkPolicy 和 pods 的变化,根据策略指示配置 iptables 规则和 ipsets 来允许或阻止流量。请根据 [尝试通过集群安装器使用 Kube-router](https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers) 指南安装 Kube-router 插件。
The Kube-router Addon comes with a Network Policy Controller that watches Kubernetes API server for any NetworkPolicy and pods updated and configures iptables rules and ipsets to allow or block traffic as directed by the policies. Please follow the [trying Kube-router with cluster installers](https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers) guide to install Kube-router addon.
-->
## 安装 kube-router 插件

kube-router 插件自带一个网络策略控制器,监视来自于 Kubernetes API 服务器的
NetworkPolicy 和 Pod 的变化,根据策略指示配置 iptables 规则和 ipsets 来允许或阻止流量。
请根据 [通过集群安装程序尝试 kube-router](https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers) 指南安装 kube-router 插件。
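
安装完成后,可以用类似下面的命令确认 kube-router 组件已在运行(这里假设其以 DaemonSet 形式部署在 kube-system 名字空间,具体以实际安装方式为准):

```shell
kubectl get pods -n kube-system | grep kube-router
```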

## {{% heading "whatsnext" %}}

<!-- Once you have installed the Kube-router addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. -->
在您安装 Kube-router 插件后,可以根据 [声明 Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) 去尝试使用 Kubernetes NetworkPolicy。
<!--
Once you have installed the Kube-router addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy.
-->
在你安装了 kube-router 插件后,可以参考
[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
去尝试使用 Kubernetes NetworkPolicy。

@@ -1,7 +1,5 @@
---
reviewers:
- chrismarino
title: 使用 Romana 作为 NetworkPolicy
title: 使用 Romana 提供 NetworkPolicy
content_type: task
weight: 40
---

@@ -11,15 +9,10 @@ weight: 40
<!-- This page shows how to use Romana for NetworkPolicy. -->
本页展示如何使用 Romana 作为 NetworkPolicy。

## {{% heading "prerequisites" %}}

<!-- Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). -->
完成[kubeadm 入门指南](/docs/getting-started-guides/kubeadm/)中的1、2、3步。

完成 [kubeadm 入门指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)中的 1、2、3 步。

<!-- steps -->
<!--

@@ -37,23 +30,22 @@ To apply network policies use one of the following:
-->
## 使用 kubeadm 安装 Romana

按照[容器化安装指南](https://github.com/romana/romana/tree/master/containerize)获取 kubeadm。
按照[容器化安装指南](https://github.com/romana/romana/tree/master/containerize),使用 kubeadm 安装。

## 运用网络策略
## 应用网络策略

使用以下的一种方式去运用网络策略:
使用以下的一种方式应用网络策略:

* [Romana 网络策略](https://github.com/romana/romana/wiki/Romana-policies)
* [Romana 网络策略例子](https://github.com/romana/core/blob/master/doc/policy.md)
* NetworkPolicy API
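
例如,若选择第三种方式(标准 NetworkPolicy API),可以直接用 kubectl 应用一个策略清单(文件名仅为示意):

```shell
kubectl apply -f my-network-policy.yaml
```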

## {{% heading "whatsnext" %}}

<!--
Once you have installed Romana, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy.
-->
Romana 安装完成后,您可以按照[声明 Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)去尝试使用 Kubernetes NetworkPolicy。

Romana 安装完成后,你可以按照
[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
去尝试使用 Kubernetes NetworkPolicy。

@@ -1,28 +1,28 @@
---
reviewers:
- bboreham
title: 使用 Weave Net 作为 NetworkPolicy
title: 使用 Weave Net 提供 NetworkPolicy
content_type: task
weight: 50
---

<!-- overview -->

<!-- This page shows how to use Weave Net for NetworkPolicy. -->

本页展示了如何使用使用 Weave Net 作为 NetworkPolicy。

<!--
This page shows how to use Weave Net for NetworkPolicy.
-->
本页展示如何使用 Weave Net 提供 NetworkPolicy。

## {{% heading "prerequisites" %}}

<!--
You need to have a Kubernetes cluster. Follow the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/) to bootstrap one.
-->
您需要拥有一个 Kubernetes 集群。按照[kubeadm 入门指南](/docs/getting-started-guides/kubeadm/)来引导一个。
你需要拥有一个 Kubernetes 集群。按照
[kubeadm 入门指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)
来启动一个。

<!-- steps -->

<!--
## Install the Weave Net addon

@@ -32,20 +32,21 @@ The Weave Net addon for Kubernetes comes with a [Network Policy Controller](http
-->
## 安装 Weave Net 插件

按照[通过插件集成Kubernetes](https://www.weave.works/docs/net/latest/kube-addon/)指南。
按照[通过插件集成 Kubernetes](https://www.weave.works/docs/net/latest/kube-addon/)
指南执行安装。

Kubernetes 的 Weave Net 插件带有[网络策略控制器](https://www.weave.works/docs/net/latest/kube-addon/#npc),可自动监控 Kubernetes 所有名称空间中的任何 NetworkPolicy 注释。 配置`iptables`规则以允许或阻止策略指示的流量。
Kubernetes 的 Weave Net 插件带有
[网络策略控制器](https://www.weave.works/docs/net/latest/kube-addon/#npc),
可自动监控 Kubernetes 所有名字空间的 NetworkPolicy 注释,
配置 `iptables` 规则以允许或阻止策略指示的流量。
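
参考安装命令大致如下(此命令取自当时的 Weave 文档,请以 Weave 官方文档中的最新说明为准):

```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```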

<!--

## Test the installation

Verify that the weave works.

Enter the following command:

-->

## 测试安装

验证 weave 是否有效。

@@ -67,17 +68,21 @@ weave-net-7nmwt 2/2 Running 3 9d
weave-net-pmw8w   2/2   Running   0   9d   192.168.2.216   worknode2
```

<!-- Each Node has a weave Pod, and all Pods are `Running` and `2/2 READY`. (`2/2` means that each Pod has `weave` and `weave-npc`.) -->
每个 Node 都有一个 weave Pod,所有 Pod 都是`Running`和`2/2 READY`。(`2/2`表示每个Pod都有`weave`和`weave-npc`。)

<!--
Each Node has a weave Pod, and all Pods are `Running` and `2/2 READY`. (`2/2` means that each Pod has `weave` and `weave-npc`.)
-->
每个 Node 都有一个 weave Pod,所有 Pod 都是 `Running` 和 `2/2 READY`。
(`2/2` 表示每个 Pod 都有 `weave` 和 `weave-npc`。)
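
上面的输出可以用类似下面的命令得到(grep 过滤仅为示意,Pod 名称以实际部署为准):

```shell
kubectl get pods -n kube-system -o wide | grep weave-net
```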

## {{% heading "whatsnext" %}}

<!--
Once you have installed the Weave Net addon, you can follow the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) to try out Kubernetes NetworkPolicy. If you have any question, contact us at [#weave-community on Slack or Weave User Group](https://github.com/weaveworks/weave#getting-help).
-->

安装Weave Net插件后,您可以按照[声明网络策略](/docs/tasks/administration-cluster/declare-network-policy/)来试用 Kubernetes NetworkPolicy。 如果您有任何疑问,请联系我们[#weave-community on Slack 或 Weave User Group](https://github.com/weaveworks/weave#getting-help)。

安装 Weave Net 插件后,你可以参考
[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/)
来试用 Kubernetes NetworkPolicy。
如果你有任何疑问,请通过
[Slack 上的 #weave-community 频道或者 Weave 用户组](https://github.com/weaveworks/weave#getting-help)
联系我们。