[zh] Drop federation related contents

These contents were removed from the English site.

parent fdcff32ccc
commit a8b09e6b1b

@@ -1,4 +1,136 @@
---
title: "计算、存储和网络扩展"
weight: 30
---

---
title: "集群管理"
weight: 100
content_type: concept
description: >
  关于创建和管理 Kubernetes 集群的底层细节。
no_list: true
---

<!--
title: Cluster Administration
reviewers:
- davidopp
- lavalamp
weight: 100
content_type: concept
description: >
  Lower-level detail relevant to creating or administering a Kubernetes cluster.
no_list: true
-->

<!-- overview -->
<!--
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
-->
集群管理概述面向所有创建和管理 Kubernetes 集群的读者。
本节假设你对核心的 Kubernetes [概念](/zh/docs/concepts/)已有一定了解。

<!-- body -->
<!--
## Planning a cluster

See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.

Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes.

Before choosing a guide, here are some considerations:
-->
## 规划集群

查阅[安装](/zh/docs/setup/)中的指南,了解如何规划、建立以及配置 Kubernetes 集群的示例。本文所列的解决方案统称为*发行版*。

{{< note >}}
并非所有发行版都在被积极维护。
请选择使用较新 Kubernetes 版本测试过的发行版。
{{< /note >}}

在选择指南之前,需要考虑以下因素:

<!--
- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
  latter, choose an actively-developed distro. Some distros only use binary releases, but
  offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster.
-->
- 你是打算在自己的计算机上试用 Kubernetes,还是要构建一个高可用的多节点集群?请选择最适合你需求的发行版。
- 你要使用类似 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) 这样的**托管 Kubernetes 集群**,还是**托管你自己的集群**?
- 你的集群部署在**本地**还是**云(IaaS)**上?Kubernetes 不直接支持混合集群,但你可以建立多个集群作为替代。
- **如果你在本地配置 Kubernetes**,请考虑哪种[网络模型](/zh/docs/concepts/cluster-administration/networking/)最适合。
- 你的 Kubernetes 运行在**裸机硬件**上还是**虚拟机(VM)**上?
- 你**只想运行一个集群**,还是打算**积极参与 Kubernetes 项目代码的开发**?如果是后者,请选择一个处于积极开发状态的发行版。某些发行版只提供二进制发布版,但选择面更广。
- 请熟悉运行一个集群所需的[组件](/zh/docs/admin/cluster-components)。

<!--
## Managing a cluster

* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.

* Learn how to [manage nodes](/docs/concepts/nodes/node/).

* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->
## 管理集群

* [管理集群](/zh/docs/tasks/administer-cluster/cluster-management/)叙述了与集群生命周期相关的几个主题:
  创建新集群、升级集群的控制节点和工作节点、执行节点维护(例如内核升级)以及升级运行中的集群的 Kubernetes API 版本。

* 学习如何[管理节点](/zh/docs/concepts/nodes/node/)。

* 学习如何设定和管理集群共享的[资源配额](/zh/docs/concepts/policy/resource-quotas/)。

<!--
## Securing a cluster

* [Certificates](/docs/concepts/cluster-administration/certificates/) describes the steps to generate certificates using different tool chains.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.
* [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) describes how to set up permissions for users and service accounts.
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercept requests to the Kubernetes API server after authentication and authorization.
* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.
* [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.
-->
## 保护集群

* [证书](/zh/docs/concepts/cluster-administration/certificates/)描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/zh/docs/concepts/containers/container-environment-variables/)描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
* [控制到 Kubernetes API 的访问](/zh/docs/reference/access-authn-authz/controlling-access/)描述了如何为用户和服务账户(service account)设置访问权限。
* [认证](/zh/docs/reference/access-authn-authz/authentication/)阐述了 Kubernetes 中的身份认证功能,包括多种认证选项。
* [鉴权](/zh/docs/admin/authorization/)与认证相互独立,用于控制如何处理 HTTP 请求。
* [使用准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers)阐述了在认证和鉴权之后拦截发往 Kubernetes API 服务器的请求的插件。
* [在 Kubernetes 集群中使用 Sysctls](/zh/docs/concepts/cluster-administration/sysctl-cluster/) 描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
* [审计](/zh/docs/tasks/debug-application-cluster/audit/)描述了如何与 Kubernetes 的审计日志交互。

<!--
### Securing the kubelet

* [Master-Node communication](/docs/concepts/architecture/master-node-communication/)
* [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
-->
### 保护 kubelet

* [主控节点通信](/zh/docs/concepts/cluster-administration/master-node-communication/)
* [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet 认证/授权](/zh/docs/admin/kubelet-authentication-authorization/)

<!--
## Optional Cluster Services

* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
-->
## 可选集群服务

* [DNS 集成](/zh/docs/concepts/services-networking/dns-pod-service/)描述了如何将一个 DNS 名字直接解析到一个 Kubernetes Service。
* [记录和监控集群活动](/zh/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes 中日志的工作机制及其实现方式。

@@ -1,145 +0,0 @@
---
title: 集群管理概述
content_type: concept
weight: 10
---

<!-- overview -->

<!--
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
-->
集群管理概述面向所有创建和管理 Kubernetes 集群的读者。
本文假设你对[用户指南](/docs/user-guide/)中的概念已有大致了解。

<!-- body -->

<!--
## Planning a cluster

See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.

Before choosing a guide, here are some considerations:
-->
## 规划集群

查阅[安装](/docs/setup/)中的指南,了解如何规划、建立以及配置 Kubernetes 集群的示例。本文所列的解决方案统称为*发行版*。

在选择指南之前,需要考虑以下因素:

<!--
- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/concepts/cluster-administration/federation/).
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
  latter, choose an actively-developed distro. Some distros only use binary releases, but
  offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster.
-->
- 你是打算在自己的电脑上试用 Kubernetes,还是要构建一个高可用的多节点集群?请选择最适合你需求的发行版。
- **如果你正在设计一个高可用集群**,请了解[在多个可用区中配置集群](/docs/concepts/cluster-administration/federation/)。
- 你要使用类似 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) 这样的**托管 Kubernetes 集群**,还是**托管你自己的集群**?
- 你的集群部署在**本地**还是**云(IaaS)**上?Kubernetes 不直接支持混合集群,但你可以建立多个集群作为替代。
- **如果你在本地配置 Kubernetes**,请考虑哪种[网络模型](/docs/concepts/cluster-administration/networking/)最适合。
- 你的 Kubernetes 运行在**裸机硬件**上还是**虚拟机(VM)**上?
- 你**只想运行一个集群**,还是打算**积极参与 Kubernetes 项目代码的开发**?如果是后者,请选择一个处于积极开发状态的发行版。某些发行版只提供二进制发布版,但选择面更广。
- 请熟悉运行一个集群所需的[组件](/docs/admin/cluster-components)。

<!--
Note: Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes.
-->

请注意:并非所有发行版都在被积极维护。请选择使用较新 Kubernetes 版本测试过的发行版。

<!--
## Managing a cluster

* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.

* Learn how to [manage nodes](/docs/concepts/nodes/node/).

* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->

## 管理集群

* [管理集群](/docs/concepts/cluster-administration/cluster-management/)叙述了与集群生命周期相关的几个主题:创建新集群、升级集群的 master 和 worker 节点、执行节点维护(例如内核升级)以及升级运行中的集群的 Kubernetes API 版本。

* 学习如何[管理节点](/docs/concepts/nodes/node/)。

* 学习如何设定和管理集群共享的[资源配额](/docs/concepts/policy/resource-quotas/)。

<!--
## Securing a cluster

* [Certificates](/docs/concepts/cluster-administration/certificates/) describes the steps to generate certificates using different tool chains.

* [Kubernetes Container Environment](/docs/concepts/containers/container-environment-variables/) describes the environment for Kubelet managed containers on a Kubernetes node.

* [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) describes how to set up permissions for users and service accounts.

* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.

* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.

* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercept requests to the Kubernetes API server after authentication and authorization.

* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.

* [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.
-->

## 集群安全

* [证书](/docs/concepts/cluster-administration/certificates/)描述了使用不同的工具链生成证书的步骤。

* [Kubernetes 容器环境](/docs/concepts/containers/container-environment-variables/)描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。

* [控制到 Kubernetes API 的访问](/docs/reference/access-authn-authz/controlling-access/)描述了如何为用户和服务账户(service account)设置访问权限。

* [用户认证](/docs/reference/access-authn-authz/authentication/)阐述了 Kubernetes 中的认证功能,包括多种认证选项。

* [授权](/docs/admin/authorization)与认证相互独立,用于控制如何处理 HTTP 请求。

* [使用准入控制器(Admission Controllers)](/docs/admin/admission-controllers)阐述了在认证和授权之后拦截发往 Kubernetes API 服务器的请求的插件。

* [在 Kubernetes 集群中使用 Sysctls](/docs/concepts/cluster-administration/sysctl-cluster/) 描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。

* [审计](/docs/tasks/debug-application-cluster/audit/)描述了如何与 Kubernetes 的审计日志交互。

<!--
### Securing the kubelet

* [Master-Node communication](/docs/concepts/architecture/master-node-communication/)
* [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
-->

### 保护 kubelet

* [Master 节点通信](/docs/concepts/cluster-administration/master-node-communication/)
* [TLS 引导](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet 认证/授权](/docs/admin/kubelet-authentication-authorization/)

<!--
## Optional Cluster Services

* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.

* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
-->

## 可选集群服务

* [DNS 与 SkyDNS 集成](/docs/concepts/services-networking/dns-pod-service/)描述了如何将一个 DNS 名字直接解析到一个 Kubernetes Service。

* [记录和监控集群活动](/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes 中日志的工作机制及其实现方式。

@@ -1,118 +0,0 @@
---
title: 联邦
content_type: concept
---

<!-- overview -->
本页面阐明了为何以及如何使用联邦(Federation)来管理多个 Kubernetes 集群。

<!-- body -->
## 为何使用联邦

联邦可以简化多个集群的管理。它提供了两个主要构件模块:

* 跨集群同步资源:联邦能够让资源在多个集群中保持同步。例如,你可以确保同一个 Deployment 存在于多个集群中。
* 跨集群发现:联邦能够自动为所有集群的后端配置 DNS 服务和负载均衡。例如,你可以确保一个全局 VIP 或 DNS 记录可以用来访问多个集群的后端。

联邦技术的其他应用场景:

* 高可用性:通过跨集群分摊负载,并自动配置 DNS 服务和负载均衡,联邦可以将集群故障带来的影响降到最低。
* 避免供应商锁定:联邦使跨集群迁移应用变得更容易,从而避免了供应商锁定。

只有在多集群场景下联邦才有意义。以下是一些可能需要多个集群的原因:

* 降低延迟:在多个地理区域部署集群,可以使用离用户最近的集群来提供服务,从而最大限度降低延迟。
* 故障隔离:从故障隔离的角度看,多个小集群可能比一个大集群更好(例如,在一个云供应商的不同可用区里部署多个集群)。详细信息请参阅[多集群指南](/docs/admin/multi-cluster)。
* 可伸缩性:单个 Kubernetes 集群存在伸缩性上限(但对大多数用户来说并非如此;更多细节参考 [Kubernetes 扩展和性能目标](https://git.k8s.io/community/sig-scalability/goals.md))。
* [混合云](#混合云的能力):可以同时拥有分别位于不同云供应商或本地数据中心的多个集群。

### 注意事项

虽然联邦有很多吸引人的应用场景,但也有一些需要注意的事项:

* 增加网络带宽和成本:联邦控制平面会监控所有集群,以确保集群的当前状态与预期一致。如果这些集群运行在一个或多个云供应商的不同区域中,这会带来显著的网络开销。
* 降低集群间的隔离性:联邦控制平面中的一个故障可能影响所有集群。将联邦控制平面的逻辑保持在最小程度可以缓解这个问题:只要有可能,联邦就把操作委托给各个底层 Kubernetes 集群中的控制平面。联邦的设计和实现也在安全性方面偏于保守,尽量避免造成多集群同时中断。
* 成熟度:联邦项目相对较新,还不是很成熟。并非所有资源类型都可用,且很多仍处于起步阶段。[Issue 38893](https://github.com/kubernetes/kubernetes/issues/38893) 列举了团队正忙于解决的一些已知系统问题。

### 混合云的能力

Kubernetes 集群联邦可以包含运行在不同云供应商(例如谷歌云、亚马逊)上的集群,以及本地部署(例如 OpenStack)的集群。只需在适当的云供应商和/或地点创建所需的所有集群,并将每个集群的 API 端点(endpoint)和凭据注册到你的联邦 API 服务器上(详情参考[联邦管理指南](/docs/admin/federation/))。

在此之后,你的 [API 资源](#api资源)就可以跨越不同的集群和云供应商。

## 建立联邦

若要联合多个集群,首先需要建立一个联邦控制平面。请参照[安装指南](/docs/tutorials/federation/set-up-cluster-federation-kubefed/)建立联邦控制平面。

## API 资源

控制平面建立完成后,就可以开始创建联邦 API 资源了。
以下指南详细介绍了其中一些资源:

* [Cluster](/docs/tasks/administer-federation/cluster/)
* [ConfigMap](/docs/tasks/administer-federation/configmap/)
* [DaemonSets](/docs/tasks/administer-federation/daemonset/)
* [Deployment](/docs/tasks/administer-federation/deployment/)
* [Events](/docs/tasks/administer-federation/events/)
* [Ingress](/docs/tasks/administer-federation/ingress/)
* [Namespaces](/docs/tasks/administer-federation/namespaces/)
* [ReplicaSets](/docs/tasks/administer-federation/replicaset/)
* [Secrets](/docs/tasks/administer-federation/secret/)
* [Services](/docs/concepts/cluster-administration/federation-service-discovery/)

[API 参考文档](/docs/reference/federation/)列举了联邦 API 服务器支持的所有资源。

## 级联删除

Kubernetes 1.6 版本开始支持联邦资源的级联删除。开启级联删除后,删除联邦控制平面中的一个资源时,所有底层集群中的相应资源也会被删除。

使用 REST API 时,级联删除功能默认不开启。若使用 REST API 从联邦控制平面删除资源时要启用级联删除,需设置选项 `DeleteOptions.orphanDependents=false`。使用 `kubectl delete` 时级联删除默认开启;可以使用 `kubectl delete --cascade=false` 来禁用级联删除。

注意:Kubernetes 1.5 版本支持的是联邦资源子集的级联删除。
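
上述行为可以用下面这组(假设性的)kubectl 命令来示意;其中联邦 API 上下文名 `federation` 与 Deployment 名 `mydep` 仅为演示而设,并非本文档规定的名字:

```shell
# kubectl delete 默认开启级联删除:
# 同时删除联邦控制平面及所有成员集群中的对应资源
kubectl --context=federation delete deployment mydep

# 禁用级联删除:只删除联邦控制平面中的记录,
# 保留各成员集群中的底层资源
kubectl --context=federation delete deployment mydep --cascade=false
```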
## 单个集群的范围

对于谷歌计算引擎(GCE)或亚马逊网络服务(AWS)这样的 IaaS 供应商,一台虚拟机存在于一个[区域(zone)](https://cloud.google.com/compute/docs/zones)或[可用区(availability zone)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html)中。
我们建议一个 Kubernetes 集群里的所有虚拟机都位于同一个可用区,因为:

- 与一个全局性的 Kubernetes 集群相比,这种方式的单点故障更少。
- 与跨可用区的集群相比,单可用区集群的可用性属性更容易推断。
- 当 Kubernetes 开发者设计系统时(例如对延迟、带宽或相关故障进行假设),他们通常假设所有的机器都位于同一个数据中心,或者以其他方式紧密相连。

每个可用区里运行多个集群当然也是可以的,但总体来说我们认为集群数越少越好。
偏好较少集群数的原因包括:

- 在某些情况下,一个集群中有更多的节点可以改进 Pod 的装箱效果(减少资源碎片)。
- 降低运维开销(尽管随着运维工具和流程的成熟,这一优势已经减小)。
- 降低每个集群固定资源成本的开销,例如运行 apiserver 的虚拟机(但在中到大型集群的总体开销中,这部分占比要小得多)。

需要多个集群的原因包括:

- 严格的安全策略要求将某一类工作负载与另一类隔离开(但是,请参见下面的集群分割)。
- 用测试集群对新的 Kubernetes 版本或其他集群软件做金丝雀(canary)验证。

## 选择合适的集群数

Kubernetes 集群数量的选择也许是一个相对静态的决定,很少需要重新审视。相比之下,一个集群中的节点数和一个 Service 中的 Pod 数可能会随负载和增长频繁变化。

要选择集群的数量,首先要确定:对于将要运行在 Kubernetes 上的服务,使用哪些区域可以让所有终端用户获得足够低的延迟(如果使用内容分发网络,则无需考虑 CDN 托管内容的延迟需求)。法律问题也可能影响这一决定。例如,拥有全球客户群的公司可能会决定在美国、欧盟、亚太和南非地区分别部署集群。用 `R` 表示所需区域的数量。

其次,确定在整体服务仍然可用的前提下,可以接受多少个集群同时不可用。用 `U` 表示可同时不可用的集群数量。如果不确定,1 是一个不错的选择。

如果在集群发生故障时允许负载均衡将流量引导到任何区域,那么至少需要 `R` 和 `U + 1` 中较大者那么多的集群。若非如此(例如,要在集群故障发生时确保所有用户都能获得低延迟),则需要 `R * (U + 1)` 个集群(在 `R` 个区域中的每一个都部署 `U + 1` 个)。无论哪种情况,请尝试将每个集群放在不同的可用区中。

最后,如果所需节点数超过了单个 Kubernetes 集群推荐的最大节点数,那么你可能需要更多的集群。Kubernetes 1.3 版本支持最多 1000 个节点的集群规模。
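
上面的估算规则可以写成一小段 Python 作为示意(这只是为演示该公式而作的草稿,并非 Kubernetes 官方工具;函数名 `clusters_needed` 为本文虚构):

```python
def clusters_needed(regions: int, tolerated_failures: int,
                    low_latency_during_failure: bool) -> int:
    """按上文的经验公式估算所需集群数。

    regions: 区域数 R
    tolerated_failures: 可同时不可用的集群数 U
    low_latency_during_failure: 故障期间是否仍须保证所有用户的低延迟
    """
    if low_latency_during_failure:
        # 每个区域都要能独立承受 U 个集群故障:R * (U + 1)
        return regions * (tolerated_failures + 1)
    # 允许负载均衡把流量导向任意区域:max(R, U + 1)
    return max(regions, tolerated_failures + 1)

# 例如:4 个区域、容忍 1 个集群同时故障
print(clusters_needed(4, 1, False))  # → 4
print(clusters_needed(4, 1, True))   # → 8
```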
## {{% heading "whatsnext" %}}

* 进一步学习[联邦提案](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)。
* 参阅集群联邦的[配置指导](/docs/tutorials/federation/set-up-cluster-federation-kubefed/)。
* 观看 [KubeCon 2016 上关于联邦的演讲](https://www.youtube.com/watch?v=pq9lbkmxpS8)。

@@ -602,15 +602,11 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el

## {{% heading "whatsnext" %}}

<!--
Kubernetes also supports Federated Services, which can span multiple
clusters and cloud providers, to provide increased availability,
better fault tolerance and greater scalability for your services. See
the [Federated Services User Guide](/docs/concepts/cluster-administration/federation-service-discovery/)
for further information.
* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
-->

Kubernetes 也支持联邦 Service,它能够跨多个集群和云供应商,为 Service 提供更高的可用性、更好的容错性和更强的可伸缩性。
参阅[联邦 Service 用户指南](/docs/concepts/cluster-administration/federation-service-discovery/)了解更多信息。

* 进一步了解如何[使用 Service 访问集群中的应用](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/)
* 进一步了解如何[使用 Service 将前端连接到后端](/zh/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* 进一步了解如何[创建外部负载均衡器](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/)

@@ -4,13 +4,11 @@ content_type: concept
weight: 40
---
<!--
---
reviewers:
- bprashanth
title: Ingress
content_type: concept
weight: 40
---
-->

<!-- overview -->

@@ -7,7 +7,6 @@ content_type: concept
---

<!--
---
title: Reference
approvers:
- chenopis

@@ -15,7 +14,6 @@ linkTitle: "Reference"
main_menu: true
weight: 70
content_type: concept
---
-->

<!-- overview -->

@@ -25,20 +23,8 @@ This section of the Kubernetes documentation contains references.
-->
这是 Kubernetes 文档的参考部分。

<!-- body -->

## API 参考

* [Kubernetes API 概述](/docs/reference/using-api/api-overview/) - Kubernetes API 概述。
* Kubernetes API 版本
  * [1.17](/docs/reference/generated/kubernetes-api/v1.17/)
  * [1.16](/docs/reference/generated/kubernetes-api/v1.16/)
  * [1.15](/docs/reference/generated/kubernetes-api/v1.15/)
  * [1.14](/docs/reference/generated/kubernetes-api/v1.14/)
  * [1.13](/docs/reference/generated/kubernetes-api/v1.13/)

<!--
## API Reference

@@ -50,16 +36,15 @@ This section of the Kubernetes documentation contains references.
  * [1.14](/docs/reference/generated/kubernetes-api/v1.14/)
  * [1.13](/docs/reference/generated/kubernetes-api/v1.13/)
-->
## API 参考

## API 客户端库

如果你需要通过编程语言调用 Kubernetes API,可以使用[客户端库](/docs/reference/using-api/client-libraries/)。以下是官方支持的客户端库:

- [Kubernetes Go 语言客户端库](https://github.com/kubernetes/client-go/)
- [Kubernetes Python 语言客户端库](https://github.com/kubernetes-client/python)
- [Kubernetes Java 语言客户端库](https://github.com/kubernetes-client/java)
- [Kubernetes JavaScript 语言客户端库](https://github.com/kubernetes-client/javascript)
* [Kubernetes API 概述](/docs/reference/using-api/api-overview/) - Kubernetes API 概述。
* Kubernetes API 版本
  * [1.17](/docs/reference/generated/kubernetes-api/v1.17/)
  * [1.16](/docs/reference/generated/kubernetes-api/v1.16/)
  * [1.15](/docs/reference/generated/kubernetes-api/v1.15/)
  * [1.14](/docs/reference/generated/kubernetes-api/v1.14/)
  * [1.13](/docs/reference/generated/kubernetes-api/v1.13/)

<!--
## API Client Libraries

@@ -73,13 +58,15 @@ client libraries:
- [Kubernetes Java client library](https://github.com/kubernetes-client/java)
- [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript)
-->
## API 客户端库

## CLI 参考
如果你需要通过编程语言调用 Kubernetes API,可以使用[客户端库](/docs/reference/using-api/client-libraries/)。以下是官方支持的客户端库:

* [kubectl](/docs/user-guide/kubectl-overview) - 主要的 CLI 工具,用于运行命令和管理 Kubernetes 集群。
* [JSONPath](/docs/user-guide/jsonpath/) - 通过 kubectl 使用 [JSONPath 表达式](http://goessner.net/articles/JsonPath/)的语法指南。
* [kubeadm](/docs/admin/kubeadm/) - 此 CLI 工具可轻松配置安全的 Kubernetes 集群。
* [kubefed](/docs/admin/kubefed/) - 此 CLI 工具可帮助你管理联邦集群。
- [Kubernetes Go 语言客户端库](https://github.com/kubernetes/client-go/)
- [Kubernetes Python 语言客户端库](https://github.com/kubernetes-client/python)
- [Kubernetes Java 语言客户端库](https://github.com/kubernetes-client/java)
- [Kubernetes JavaScript 语言客户端库](https://github.com/kubernetes-client/javascript)

<!--
## CLI Reference

@@ -89,16 +76,12 @@ client libraries:
* [kubeadm](/docs/admin/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.
* [kubefed](/docs/admin/kubefed/) - CLI tool to help you administrate your federated clusters.
-->
## CLI 参考

## 配置参考

* [kubelet](/docs/admin/kubelet/) - 在每个节点上运行的主要*节点代理*。kubelet 接收一组 PodSpec,并确保其中描述的容器健康地运行。
* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API,用于验证和配置 API 对象(如 Pod、Service、副本控制器)的数据。
* [kube-controller-manager](/docs/admin/kube-controller-manager/) - 一个守护进程,其中内嵌了 Kubernetes 附带的核心控制回路。
* [kube-proxy](/docs/admin/kube-proxy/) - 可以跨一组后端执行简单的 TCP/UDP 流转发或轮询式 TCP/UDP 转发。
* [kube-scheduler](/docs/admin/kube-scheduler/) - 一个调度程序,用于管理可用性、性能和容量。
* [federation-apiserver](/docs/admin/federation-apiserver/) - 联邦集群的 API 服务器。
* [federation-controller-manager](/docs/admin/federation-controller-manager/) - 一个守护进程,其中内嵌了 Kubernetes 联邦附带的核心控制回路。
* [kubectl](/docs/user-guide/kubectl-overview) - 主要的 CLI 工具,用于运行命令和管理 Kubernetes 集群。
* [JSONPath](/docs/user-guide/jsonpath/) - 通过 kubectl 使用 [JSONPath 表达式](http://goessner.net/articles/JsonPath/)的语法指南。
* [kubeadm](/docs/admin/kubeadm/) - 此 CLI 工具可轻松配置安全的 Kubernetes 集群。
* [kubefed](/docs/admin/kubefed/) - 此 CLI 工具可帮助你管理联邦集群。

<!--
## Config Reference

@@ -108,18 +91,21 @@ client libraries:
* [kube-controller-manager](/docs/admin/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
* [kube-proxy](/docs/admin/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.
* [kube-scheduler](/docs/admin/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
* [federation-apiserver](/docs/admin/federation-apiserver/) - API server for federated clusters.
* [federation-controller-manager](/docs/admin/federation-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes federation.
-->
## 配置参考

## 设计文档

Kubernetes 功能的设计文档归档。不妨从 [Kubernetes 架构](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)和 [Kubernetes 设计概述](https://git.k8s.io/community/contributors/design-proposals)开始阅读。
* [kubelet](/docs/admin/kubelet/) - 在每个节点上运行的主要*节点代理*。kubelet 接收一组 PodSpec,并确保其中描述的容器健康地运行。
* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API,用于验证和配置 API 对象(如 Pod、Service、副本控制器)的数据。
* [kube-controller-manager](/docs/admin/kube-controller-manager/) - 一个守护进程,其中内嵌了 Kubernetes 附带的核心控制回路。
* [kube-proxy](/docs/admin/kube-proxy/) - 可以跨一组后端执行简单的 TCP/UDP 流转发或轮询式 TCP/UDP 转发。
* [kube-scheduler](/docs/admin/kube-scheduler/) - 一个调度程序,用于管理可用性、性能和容量。

<!--
## Design Docs

An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).
-->
## 设计文档

Kubernetes 功能的设计文档归档。不妨从 [Kubernetes 架构](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)和 [Kubernetes 设计概述](https://git.k8s.io/community/contributors/design-proposals)开始阅读。

@@ -38,15 +38,6 @@ Kubernetes 包含一些内置工具,可以帮助用户更好的使用 Kubernet
-->
[`kubeadm`](/docs/tasks/tools/install-kubeadm/) 是一个命令行工具,可以用来在物理机、云服务器或虚拟机(目前处于 alpha 阶段)上轻松部署一个安全可靠的 Kubernetes 集群。

## Kubefed

<!--
[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/) is the command line tool
to help you administrate your federated clusters.
-->
[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/) 是一个命令行工具,可以用来帮助用户管理联邦集群。

## Minikube

<!--

@@ -504,7 +504,6 @@ Systemd-resolved 会用一个 stub 文件来覆盖 `/etc/resolv.conf`,从而在
kubeadm(>= 1.11)会自动检测 `systemd-resolved` 并相应地更改 kubelet 的标志。

<!--
Kubernetes installs do not configure the nodes' `resolv.conf` files to use the
cluster DNS by default, because that process is inherently distribution-specific.
This should probably be implemented eventually.

@@ -522,9 +521,7 @@ If you are using Alpine version 3.3 or earlier as your base image, DNS may not
work properly owing to a known issue with Alpine.
Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.
-->

Kubernetes 的安装默认并不会配置节点的 `resolv.conf` 文件来使用集群 DNS,因为这个过程在不同发行版上本来就各不相同。这个问题应该最终会被解决。

Linux 的 libc 最多只支持 3 条 DNS `nameserver` 记录和 6 条 DNS `search` 记录,且这一限制难以突破([参见这个 2005 年的 bug](https://bugzilla.redhat.com/show_bug.cgi?id=168253))。Kubernetes 需要占用 1 条 `nameserver` 记录和 3 条 `search` 记录。这意味着如果本地安装已经使用了 3 条 `nameserver` 或超过 3 条 `search` 记录,其中一些配置将会丢失。一个不完整的变通办法是在节点上使用 `dnsmasq` 来提供更多的 `nameserver` 条目,但无法提供更多 `search` 条目。你也可以使用 kubelet 的 `--resolv-conf` 标志来解决这个问题。
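
下面用一小段 Python 勾勒上述 glibc 限制的核算逻辑(这只是为演示这几个限制值而作的草稿,并非 Kubernetes 自带工具;函数名 `remaining_slots` 为本文虚构):

```python
MAXNS = 3             # glibc 最多使用的 nameserver 条数
MAXDNSRCH = 6         # glibc 最多使用的 search 域个数
KUBE_NAMESERVERS = 1  # Kubernetes 占用 1 条 nameserver 记录
KUBE_SEARCH = 3       # Kubernetes 占用 3 条 search 记录

def remaining_slots(local_nameservers: int, local_search: int):
    """返回留给本地配置的 (nameserver, search) 剩余槽位数;负数表示会有配置丢失。"""
    return (MAXNS - KUBE_NAMESERVERS - local_nameservers,
            MAXDNSRCH - KUBE_SEARCH - local_search)

# 本地已有 3 条 nameserver 时,至少有 1 条会丢失
print(remaining_slots(3, 0))  # → (-1, 3)
```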
@@ -532,24 +529,6 @@
如果你使用 Alpine 3.3 或更早版本作为基础镜像,DNS 可能会因 Alpine 的一个已知问题而无法正常工作。请查看[这里](https://github.com/kubernetes/kubernetes/issues/30215)获取更多信息。

<!--
## Kubernetes Federation (Multiple Zone support)

Release 1.3 introduced Cluster Federation support for multi-site Kubernetes
installations. This required some minor (backward-compatible) changes to the
way the Kubernetes cluster DNS server processes DNS queries, to facilitate
the lookup of federated services (which span multiple Kubernetes clusters).
See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/)
for more details on Cluster Federation and multi-site support.
-->

## Kubernetes Federation(支持多区域部署)

1.3 版本为多站点 Kubernetes 安装引入了集群联邦支持。为了能够查找联邦服务(即跨多个 Kubernetes 集群的服务),集群 DNS 服务器处理 DNS 查询的方式需要做一些(向后兼容的)微小调整。参阅[集群联邦管理员指南](/docs/concepts/cluster-administration/federation/)获取有关集群联邦和多站点支持的更多信息。

<!--
|
||||
|
||||
## References
|
||||
|
||||
- [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/)
|
||||
|
@ -557,10 +536,7 @@ for more details on Cluster Federation and multi-site support.
|
|||
|
||||
## What's next
|
||||
- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
|
||||
|
||||
|
||||
|
||||
## -->
|
||||
-->
|
||||
|
||||
## 参考
|
||||
|
||||
|
@ -572,6 +548,3 @@ for more details on Cluster Federation and multi-site support.
|
|||
- [集群里自动伸缩 DNS Service](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,4 +0,0 @@
---
title: "联邦 - 在多个集群上运行一个应用"
weight: 120
---
@ -1,4 +0,0 @@
---
title: "管理联邦控制平面"
weight: 160
---
@ -1,148 +0,0 @@
---
title: 联邦 ConfigMap
content_type: task
---
<!--
---
title: Federated ConfigMap
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
<!--
This guide explains how to use ConfigMaps in a Federation control plane.

Federated ConfigMaps are very similar to the traditional [Kubernetes
ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
-->
本指南介绍如何在联邦控制平面中使用 ConfigMap。

联邦 ConfigMap 与传统的 [Kubernetes ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) 非常相似,并提供相同的功能。
在联邦控制平面中创建它们,可以确保它们在联邦的所有集群中保持同步。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
<!--
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) in particular.
-->
* 您还应当具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/),
  特别是 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) 方面的知识。

<!-- steps -->

<!--
## Creating a Federated ConfigMap

The API for Federated ConfigMap is 100% compatible with the
API for traditional Kubernetes ConfigMap. You can create a ConfigMap by sending
a request to the federation apiserver.

You can do that using [kubectl](/docs/user-guide/kubectl/) by running:

``` shell
kubectl --context=federation-cluster create -f myconfigmap.yaml
```

The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.

Once a Federated ConfigMap is created, the federation control plane will create
a matching ConfigMap in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:

``` shell
kubectl --context=gce-asia-east1a get configmap myconfigmap
```

The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.

These ConfigMaps in underlying clusters will match the Federated ConfigMap.
-->
## 创建联邦 ConfigMap

联邦 ConfigMap 的 API 与传统 Kubernetes ConfigMap 的 API 100% 兼容。您可以通过向联邦 apiserver 发送请求来创建 ConfigMap。
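下面是一个可以用作 `myconfigmap.yaml` 的最小清单示例(其中的名称与数据仅为示意):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  # 示例键值,请按需替换
  example.key: "example-value"
```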
您可以通过使用 [kubectl](/docs/user-guide/kubectl/) 运行下面的指令来创建联邦 ConfigMap:

``` shell
kubectl --context=federation-cluster create -f myconfigmap.yaml
```

`--context=federation-cluster` 参数告诉 kubectl 将请求提交到联邦 apiserver,而不是发送给某一个 Kubernetes 集群。

一旦联邦 ConfigMap 被创建,联邦控制平面就会在所有底层 Kubernetes 集群中创建匹配的 ConfigMap。
您可以通过逐一检查底层集群来进行验证,例如:

``` shell
kubectl --context=gce-asia-east1a get configmap myconfigmap
```

上面的命令假定您在客户端中为该区域的集群配置了一个名为 'gce-asia-east1a' 的上下文。

这些底层集群中的 ConfigMap 将与联邦 ConfigMap 相匹配。

<!--
## Updating a Federated ConfigMap

You can update a Federated ConfigMap as you would update a Kubernetes
ConfigMap; however, for a Federated ConfigMap, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated ConfigMap is
updated, it updates the corresponding ConfigMaps in all underlying clusters to
match it.
-->
## 更新联邦 ConfigMap

您可以像更新 Kubernetes ConfigMap 一样更新联邦 ConfigMap。
但是对于联邦 ConfigMap,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。
联邦控制平面会确保每当联邦 ConfigMap 被更新时,所有底层集群中对应的 ConfigMap 都会被更新,以与之保持一致。

<!--
## Deleting a Federated ConfigMap

You can delete a Federated ConfigMap as you would delete a Kubernetes
ConfigMap; however, for a Federated ConfigMap, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.

For example, you can do that using kubectl by running:

```shell
kubectl --context=federation-cluster delete configmap
```
-->
## 删除联邦 ConfigMap

您可以像删除 Kubernetes ConfigMap 一样删除联邦 ConfigMap。
但是,对于联邦 ConfigMap,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。
例如,您可以使用 kubectl 运行下面的命令来删除联邦 ConfigMap:

```shell
kubectl --context=federation-cluster delete configmap
```

{{< note >}}
<!--
Deleting a Federated ConfigMap does not delete the corresponding ConfigMaps from underlying clusters. You must delete the underlying ConfigMaps manually.
-->
删除联邦 ConfigMap 并不会删除底层集群中对应的 ConfigMap。您必须手动删除底层集群中的 ConfigMap。
{{< /note >}}
@ -1,136 +0,0 @@
---
title: 联邦 DaemonSet
content_type: task
---
<!--
---
title: Federated DaemonSet
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use DaemonSets in a federation control plane.

DaemonSets in the federation control plane ("Federated Daemonsets" in
this guide) are very similar to the traditional Kubernetes
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
-->
本指南说明了如何在联邦控制平面中使用 DaemonSet。

联邦控制平面中的 DaemonSet(在本指南中称为“联邦 DaemonSet”)与传统的 Kubernetes [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 非常类似,并提供相同的功能。在联邦控制平面中创建联邦 DaemonSet,可以确保它们同步到联邦的所有集群中。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
<!--
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [DaemonSets](/docs/concepts/workloads/controllers/daemonset/) in particular.
-->
* 您还应当具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/),特别是 [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 方面的知识。

<!-- steps -->

<!--
## Creating a Federated Daemonset

The API for Federated Daemonset is 100% compatible with the
API for traditional Kubernetes DaemonSet. You can create a DaemonSet by sending
a request to the federation apiserver.

You can do that using [kubectl](/docs/user-guide/kubectl/) by running:

``` shell
kubectl --context=federation-cluster create -f mydaemonset.yaml
```

The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.

Once a Federated Daemonset is created, the federation control plane will create
a matching DaemonSet in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:

``` shell
kubectl --context=gce-asia-east1a get daemonset mydaemonset
```

The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
-->
## 创建联邦 DaemonSet

联邦 DaemonSet 的 API 和传统的 Kubernetes DaemonSet API 是 100% 兼容的。您可以通过向联邦 apiserver 发送请求来创建一个 DaemonSet。
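例如,可以使用类似下面这样的 `mydaemonset.yaml` 清单(此处以 `apps/v1` API 为例,镜像与标签仅为示意):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mydaemonset
spec:
  selector:
    matchLabels:
      app: mydaemonset
  template:
    metadata:
      labels:
        app: mydaemonset
    spec:
      containers:
      - name: agent
        # 示意用的占位镜像
        image: registry.k8s.io/pause:3.9
```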
您可以通过使用 [kubectl](/docs/user-guide/kubectl/) 运行下面的指令来创建联邦 DaemonSet:

``` shell
kubectl --context=federation-cluster create -f mydaemonset.yaml
```

`--context=federation-cluster` 参数告诉 kubectl 将请求发送到联邦 apiserver,而不是某个 Kubernetes 集群。

一旦联邦 DaemonSet 被创建,联邦控制平面就会在所有底层 Kubernetes 集群中创建匹配的 DaemonSet。您可以通过逐一检查底层集群来进行验证,例如:

``` shell
kubectl --context=gce-asia-east1a get daemonset mydaemonset
```

上面的命令假定您在客户端中为该区域的集群配置了一个名为 'gce-asia-east1a' 的上下文。

<!--
## Updating a Federated Daemonset

You can update a Federated Daemonset as you would update a Kubernetes
DaemonSet; however, for a Federated Daemonset, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated Daemonset is
updated, it updates the corresponding DaemonSets in all underlying clusters to
match it.
-->
## 更新联邦 DaemonSet

您可以像更新 Kubernetes DaemonSet 一样更新联邦 DaemonSet。但是,对于联邦 DaemonSet,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。联邦控制平面会确保每当联邦 DaemonSet 被更新时,所有底层集群中对应的 DaemonSet 都会被更新,以与之保持一致。

<!--
## Deleting a Federated Daemonset

You can delete a Federated Daemonset as you would delete a Kubernetes
DaemonSet; however, for a Federated Daemonset, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.

For example, you can do that using kubectl by running:

```shell
kubectl --context=federation-cluster delete daemonset mydaemonset
```
-->
## 删除联邦 DaemonSet

您可以像删除 Kubernetes DaemonSet 一样删除联邦 DaemonSet。但是,对于联邦 DaemonSet,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。

例如,您可以使用 kubectl 运行下面的命令来删除联邦 DaemonSet:

```shell
kubectl --context=federation-cluster delete daemonset mydaemonset
```
@ -1,177 +0,0 @@
---
title: 联邦 Deployment
content_type: task
---
<!--
---
title: Federated Deployment
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use Deployments in the Federation control plane.

Deployments in the federation control plane (referred to as "Federated Deployments" in
this guide) are very similar to the traditional [Kubernetes
Deployment](/docs/concepts/workloads/controllers/deployment/) and provide the same functionality.
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.
-->
本指南说明了如何在联邦控制平面中使用 Deployment。

联邦控制平面中的 Deployment(在本指南中称为“联邦 Deployment”)与传统的 [Kubernetes Deployment](/docs/concepts/workloads/controllers/deployment/) 非常类似,并提供相同的功能。在联邦控制平面中创建联邦 Deployment,可以确保所需数量的副本存在于已注册的集群中。

{{< feature-state for_k8s_version="1.5" state="alpha" >}}

<!--
Some features
(such as full rollout compatibility) are still in development.
-->
一些特性(例如完整的滚动发布兼容性)仍在开发中。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
<!--
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Deployments](/docs/concepts/workloads/controllers/deployment/) in particular.
-->
* 您还应当具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/),特别是 [Deployment](/docs/concepts/workloads/controllers/deployment/) 方面的知识。

<!-- steps -->
<!--
## Creating a Federated Deployment

The API for Federated Deployment is compatible with the
API for traditional Kubernetes Deployment. You can create a Deployment by sending
a request to the federation apiserver.

You can do that using [kubectl](/docs/user-guide/kubectl/) by running:

``` shell
kubectl --context=federation-cluster create -f mydeployment.yaml
```

The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.

Once a Federated Deployment is created, the federation control plane will create
a Deployment in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:

``` shell
kubectl --context=gce-asia-east1a get deployment mydep
```

The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.

These Deployments in underlying clusters will match the federation Deployment
_except_ in the number of replicas and revision-related annotations.
Federation control plane ensures that the
sum of replicas in each cluster combined matches the desired number of replicas in the
Federated Deployment.
-->
## 创建联邦 Deployment

联邦 Deployment 的 API 和传统的 Kubernetes Deployment API 是兼容的。您可以通过向联邦 apiserver 发送请求来创建一个 Deployment。

您可以通过使用 [kubectl](/docs/user-guide/kubectl/) 运行下面的指令:

``` shell
kubectl --context=federation-cluster create -f mydeployment.yaml
```

`--context=federation-cluster` 参数告诉 kubectl 将请求发送到联邦 apiserver,而不是某个 Kubernetes 集群。

一旦联邦 Deployment 被创建,联邦控制平面会在所有底层 Kubernetes 集群中创建一个 Deployment。您可以通过逐一检查底层集群来进行验证,例如:

``` shell
kubectl --context=gce-asia-east1a get deployment mydep
```

上面的命令假定您在客户端中为该区域的集群配置了一个名为 'gce-asia-east1a' 的上下文。

底层集群中的这些 Deployment 将与联邦 Deployment 相匹配,_除了_副本数和修订版本相关的注解。联邦控制平面确保各集群中的副本数总和与联邦 Deployment 中期望的副本数相匹配。

<!--
### Spreading Replicas in Underlying Clusters

By default, replicas are spread equally in all the underlying clusters. For example:
if you have 3 registered clusters and you create a Federated Deployment with
`spec.replicas = 9`, then each Deployment in the 3 clusters will have
`spec.replicas=3`.
To modify the number of replicas in each cluster, you can specify
[FederatedReplicaSetPreference](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)
as an annotation with key `federation.kubernetes.io/deployment-preferences`
on Federated Deployment.
-->
### 在底层集群中分布副本

默认情况下,副本会被平均分布到所有的底层集群中。例如:如果您有 3 个已注册的集群,并且创建了一个副本数为 9(`spec.replicas = 9`)的联邦 Deployment,那么这 3 个集群中的每个 Deployment 都将有 3 个副本(`spec.replicas=3`)。
要修改每个集群中的副本数,您可以在联邦 Deployment 上以键为 `federation.kubernetes.io/deployment-preferences` 的注解形式指定 [FederatedReplicaSetPreference](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)。
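下面是一个示意性的注解写法(注解值的具体字段与取值以 FederatedReplicaSetPreference 的实际类型定义为准,集群名与数值均为假设):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydep
  annotations:
    # 假设的偏好设置:9 个副本按权重分布,gce-asia-east1a 至少 2 个
    federation.kubernetes.io/deployment-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "gce-asia-east1a": {"minReplicas": 2, "weight": 2},
          "gce-us-central1b": {"weight": 1}
        }
      }
spec:
  replicas: 9
```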
<!--
## Updating a Federated Deployment

You can update a Federated Deployment as you would update a Kubernetes
Deployment; however, for a Federated Deployment, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated Deployment is
updated, it updates the corresponding Deployments in all underlying clusters to
match it. So if the rolling update strategy was chosen then the underlying
cluster will do the rolling update independently and `maxSurge` and `maxUnavailable`
will apply only to individual clusters. This behavior may change in the future.

If your update includes a change in number of replicas, the federation
control plane will change the number of replicas in underlying clusters to
ensure that their sum remains equal to the number of desired replicas in
Federated Deployment.
-->
## 更新联邦 Deployment

您可以像更新 Kubernetes Deployment 一样更新联邦 Deployment。但是,对于联邦 Deployment,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。联邦控制平面会确保每当联邦 Deployment 被更新时,所有底层集群中对应的 Deployment 都会被更新,以与之保持一致。因此,如果(在联邦 Deployment 中)选择了滚动更新策略,那么各底层集群会独立地进行滚动更新,`maxSurge` 和 `maxUnavailable` 也只作用于单个集群。这种行为将来可能会改变。

如果您的更新包括副本数量的变化,联邦控制平面会调整底层集群中的副本数量,以确保它们的总和等于联邦 Deployment 中期望的副本数。

<!--
## Deleting a Federated Deployment

You can delete a Federated Deployment as you would delete a Kubernetes
Deployment; however, for a Federated Deployment, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.

For example, you can do that using kubectl by running:

```shell
kubectl --context=federation-cluster delete deployment mydep
```
-->
## 删除联邦 Deployment

您可以像删除 Kubernetes Deployment 一样删除联邦 Deployment。但是,对于联邦 Deployment,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。

例如,您可以使用 kubectl 运行下面的命令来删除联邦 Deployment:

```shell
kubectl --context=federation-cluster delete deployment mydep
```
@ -1,87 +0,0 @@
---
title: 联邦事件
content_type: concept
---

<!--
---
title: Federated Events
content_type: concept
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use events in federation control plane to help in debugging.
-->
本指南介绍如何在联邦控制平面中使用事件来帮助调试。

<!-- body -->

<!--
## Prerequisites
-->
## 先决条件

<!--
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/concepts/cluster-administration/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
-->
本指南假定您已经安装并运行了 Kubernetes 集群联邦。
如果没有,请转到[联邦管理指南](/docs/concepts/cluster-administration/federation/),了解如何启动集群联邦(或让集群管理员为您执行此操作)。
其他教程,例如 Kelsey Hightower 撰写的[这一篇](https://github.com/kelseyhightower/kubernetes-cluster-federation),也可以为您提供帮助。

<!--
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general.
-->
您还应当具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/)。

<!--
## View federation events
-->
## 查看联邦事件

<!--
Events in federation control plane (referred to as "federation events" in
this guide) are very similar to the traditional Kubernetes
Events providing the same functionality.
Federation Events are stored only in federation control plane and are not passed on to the underlying Kubernetes clusters.
-->
联邦控制平面中的事件(本指南中称为“联邦事件”)与提供相同功能的传统 Kubernetes 事件非常相似。
联邦事件仅存储在联邦控制平面中,不会传递给底层的 Kubernetes 集群。

<!--
Federation controllers create events as they process API resources to surface to the
user, the state that they are in.
You can get all events from federation apiserver by running:
-->
联邦控制器在处理 API 资源时会创建事件,以便向用户展示这些资源所处的状态。您可以通过运行以下命令从联邦 apiserver 获取所有事件:

```shell
kubectl --context=federation-cluster get events
```

<!--
The standard kubectl get, update, delete commands will all work.
-->
标准的 kubectl get、update、delete 命令都可以正常使用。
@ -1,191 +0,0 @@
---
title: 联邦 Job
content_type: task
---

<!--
---
title: Federated Jobs
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use jobs in the federation control plane.

Jobs in the federation control plane (referred to as "federated jobs" in
this guide) are similar to the traditional [Kubernetes
jobs](/docs/concepts/workloads/controllers/job/), and provide the same functionality.
Creating jobs in the federation control plane ensures that the desired number of
parallelism and completions exist across the registered clusters.
-->
本指南解释了如何在联邦控制平面中使用 job。

联邦控制平面中的 job(在本指南中称为“联邦 job”)类似于传统的 [Kubernetes job](/docs/concepts/workloads/controllers/job/),并提供相同的功能。
在联邦控制平面中创建 job,可以确保在已注册的集群中达到所需的并行度和完成数。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
* 您还需要具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/),特别是 [job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 方面的知识。

<!--
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) in particular.
-->

<!-- steps -->

<!--
## Creating a federated job
-->
## 创建联邦 job

<!--
The API for federated jobs is fully compatible with the
API for traditional Kubernetes jobs. You can create a job by sending
a request to the federation apiserver.

You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
-->
联邦 job 的 API 与传统 Kubernetes job 的 API 完全兼容。您可以通过向联邦 apiserver 发送请求来创建 job。

您可以通过使用 [kubectl](/docs/user-guide/kubectl/) 运行下面的指令:

``` shell
kubectl --context=federation-cluster create -f myjob.yaml
```

<!--
The `--context=federation-cluster` flag tells kubectl to submit the
request to the federation API server instead of sending it to a Kubernetes
cluster.
-->
`--context=federation-cluster` 参数告诉 kubectl 将请求提交到联邦 API 服务器,而不是发送到某个 Kubernetes 集群。

<!--
Once a federated job is created, the federation control plane creates
a job in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
-->
一旦联邦 job 被创建,联邦控制平面将在所有底层 Kubernetes 集群中创建一个 job。
您可以通过逐一检查底层集群来验证这一点,例如:

``` shell
kubectl --context=gce-asia-east1a get job myjob
```

<!--
The previous example assumes that you have a context named `gce-asia-east1a`
configured in your client for your cluster in that zone.
-->
上面的命令假定您在客户端中为该区域的集群配置了一个名为 `gce-asia-east1a` 的上下文。

<!--
The jobs in the underlying clusters match the federated job
except in the number of parallelism and completions. The federation control plane ensures that the
sum of the parallelism and completions in each cluster matches the desired number of parallelism and completions in the
federated job.
-->
底层集群中的 job 与联邦 job 相匹配,但并行度和完成数除外。
联邦控制平面确保各集群中的并行度与完成数之和与联邦 job 中所需的并行度和完成数相匹配。

<!--
### Spreading job tasks in underlying clusters
-->
### 在底层集群中分布 job 任务

<!--
By default, parallelism and completions are spread equally in all underlying clusters. For example:
if you have 3 registered clusters and you create a federated job with
`spec.parallelism = 9` and `spec.completions = 18`, then each job in the 3 clusters has
`spec.parallelism = 3` and `spec.completions = 6`.
To modify the number of parallelism and completions in each cluster, you can specify
[ReplicaAllocationPreferences](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)
as an annotation with key `federation.kubernetes.io/job-preferences`
on the federated job.
-->
默认情况下,并行度和完成数会被平均分布到所有底层集群中。例如:
如果您有 3 个已注册的集群,并创建了一个 `spec.parallelism = 9`、`spec.completions = 18` 的联邦 job,那么这 3 个集群中的每个 job 都会有 `spec.parallelism = 3` 和 `spec.completions = 6`。
要修改每个集群中的并行度和完成数,可以在联邦 job 上以键为 `federation.kubernetes.io/job-preferences` 的注解形式指定 [ReplicaAllocationPreferences](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)。
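下面是一个示意性的注解写法(注解值的具体字段与取值以 ReplicaAllocationPreferences 的实际类型定义为准,集群名与权重均为假设):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
  annotations:
    # 假设的偏好设置:按 2:1 的权重在两个集群间分配任务
    federation.kubernetes.io/job-preferences: |
      {
        "clusters": {
          "gce-asia-east1a": {"weight": 2},
          "gce-us-central1b": {"weight": 1}
        }
      }
```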
<!--
## Updating a federated job
-->
## 更新联邦 job

<!--
You can update a federated job as you would update a Kubernetes
job; however, for a federated job, you must send the request to
the federation API server instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the federated job is
updated, it updates the corresponding job in all underlying clusters to
match it.
-->
您可以像更新 Kubernetes job 一样更新联邦 job;但是,对于联邦 job,必须将请求发送到联邦 API 服务器,而不是某个特定的 Kubernetes 集群。
联邦控制平面确保每当联邦 job 被更新时,所有底层集群中对应的 job 都会被更新,以与之保持一致。

<!--
If your update includes a change in number of parallelism and completions, the federation
control plane changes the number of parallelism and completions in underlying clusters to
ensure that their sum remains equal to the number of desired parallelism and completions in
federated job.
-->
如果您的更新包含并行度和完成数的更改,联邦控制平面将调整底层集群中的并行度和完成数,
以确保它们的总和仍然等于联邦 job 中所需的并行度和完成数。

<!--
## Deleting a federated job
-->
## 删除联邦 job

<!--
You can delete a federated job as you would delete a Kubernetes
job; however, for a federated job, you must send the request to
the federation API server instead of sending it to a specific Kubernetes cluster.
-->
您可以像删除 Kubernetes job 一样删除联邦 job;但是,对于联邦 job,必须将请求发送到联邦 API 服务器,而不是某个特定的 Kubernetes 集群。

<!--
For example, with kubectl:
-->
例如,使用 kubectl:

```shell
kubectl --context=federation-cluster delete job myjob
```

{{< note >}}
<!--
Deleting a federated job will not delete the
corresponding jobs from underlying clusters.
You must delete the underlying jobs manually.
-->
删除联邦 job 并不会删除底层集群中对应的 job。
您必须手动删除底层集群中的 job。
{{< /note >}}
@ -1,158 +0,0 @@
---
title: 联邦命名空间
content_type: task
---

<!--
---
title: Federated Namespaces
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use Namespaces in Federation control plane.
-->
本指南介绍如何在联邦控制平面中使用命名空间。

<!--
Namespaces in federation control plane (referred to as "federated Namespaces" in
this guide) are very similar to the traditional [Kubernetes
Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) providing the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
-->
联邦控制平面中的命名空间(本指南中称为“联邦命名空间”)与提供相同功能的传统 [Kubernetes 命名空间](/docs/concepts/overview/working-with-objects/namespaces/)非常相似。
在联邦控制平面中创建它们,可以确保它们在联邦的所有集群之间保持同步。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
* 您还需要具备基本的 [Kubernetes 应用知识](/docs/tutorials/kubernetes-basics/),
  特别是[命名空间](/docs/concepts/overview/working-with-objects/namespaces/)方面的知识。

<!--
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) in particular.
-->

<!-- steps -->

<!--
## Creating a Federated Namespace
-->
## 创建联邦命名空间

<!--
The API for Federated Namespaces is 100% compatible with the
API for traditional Kubernetes Namespaces. You can create a Namespace by sending
a request to the federation apiserver.
-->
联邦命名空间的 API 与传统 Kubernetes 命名空间的 API 100% 兼容。您可以通过向联邦 apiserver 发送请求来创建命名空间。
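下面是一个可以用作 `myns.yaml` 的最小清单示例:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myns
```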
|
||||
|
||||
<!--
|
||||
You can do that using kubectl by running:
|
||||
-->
|
||||
您可以通过运行以下命令使用 kubectl 执行此操作:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster create -f myns.yaml
|
||||
```
|
||||

<!--
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
-->
`--context=federation-cluster` 参数通知 kubectl 将请求提交给联邦 apiserver,而不是将其发送到某个 Kubernetes 集群。

<!--
Once a federated Namespace is created, the federation control plane will create
a matching Namespace in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
-->
创建联邦命名空间后,联邦控制平面将在所有底层 Kubernetes 集群中创建匹配的命名空间。
您可以通过检查每个底层集群来验证这一点,例如:

```shell
kubectl --context=gce-asia-east1a get namespaces myns
```

<!--
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone. The name and
spec of the underlying Namespace will match those of
the Federated Namespace that you created above.
-->
以上假设您在客户端中为该区域中的集群配置了名为 “gce-asia-east1a” 的上下文。
底层命名空间的名称和规约将与您在上面创建的联邦命名空间的名称和规约相匹配。

<!--
## Updating a Federated Namespace
-->
## 更新联邦命名空间

<!--
You can update a federated Namespace as you would update a Kubernetes
Namespace, just send the request to federation apiserver instead of sending it
to a specific Kubernetes cluster.
Federation control plane will ensure that whenever the federated Namespace is
updated, it updates the corresponding Namespaces in all underlying clusters to
match it.
-->
您可以像更新 Kubernetes 命名空间一样更新联邦命名空间,只需将请求发送到联邦 apiserver,而不是发送到某个特定的 Kubernetes 集群。
联邦控制平面将确保每当联邦命名空间被更新时,所有底层集群中的相应命名空间都会被更新以与其保持一致。

<!--
## Deleting a Federated Namespace
-->
## 删除联邦命名空间

<!--
You can delete a federated Namespace as you would delete a Kubernetes
Namespace, just send the request to federation apiserver instead of sending it
to a specific Kubernetes cluster.
-->
您可以像删除 Kubernetes 命名空间一样删除联邦命名空间,只需将请求发送到联邦 apiserver,而不是发送到某个特定的 Kubernetes 集群。

<!--
For example, you can do that using kubectl by running:
-->
例如,您可以通过运行以下命令使用 kubectl 执行此操作:

```shell
kubectl --context=federation-cluster delete ns myns
```

<!--
As in Kubernetes, deleting a federated Namespace will delete all resources in that
Namespace from the federation control plane.
-->
与在 Kubernetes 中一样,删除联邦命名空间将从联邦控制平面中删除该命名空间中的所有资源。

{{< note >}}
<!--
At this point, deleting a federated Namespace will not delete the corresponding Namespace, or resources in those Namespaces, from underlying clusters. Users must delete them manually. We intend to fix this in the future.
-->
此时,删除联邦命名空间并不会从底层集群中删除相应的命名空间或这些命名空间中的资源,用户必须手动删除它们。我们打算在将来解决这个问题。
{{< /note >}}

@ -1,220 +0,0 @@
---
title: 联邦 ReplicaSet
content_type: task
---
<!--
---
title: Federated ReplicaSets
content_type: task
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use ReplicaSets in the Federation control plane.

ReplicaSets in the federation control plane (referred to as "federated ReplicaSets" in
this guide) are very similar to the traditional [Kubernetes
ReplicaSets](/docs/concepts/workloads/controllers/replicaset/), and provide the same functionality.
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.
-->
本指南阐述了如何在联邦控制平面中使用 ReplicaSet。

联邦控制平面中的 ReplicaSet(在本指南中称为“联邦 ReplicaSet”)和传统的
[Kubernetes ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 很相似,提供相同的功能。
在联邦控制平面中创建联邦 ReplicaSet 可以确保在所有注册的集群中存在预期数量的副本。

## {{% heading "prerequisites" %}}

* {{< include "federated-task-tutorial-prereqs.md" >}}
<!--
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) in particular.
-->
* 你还应该具备基本的 [Kubernetes 工作知识](/docs/tutorials/kubernetes-basics/),特别是 [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 相关的知识。

<!-- steps -->

<!--
## Creating a Federated ReplicaSet

The API for Federated ReplicaSet is 100% compatible with the
API for traditional Kubernetes ReplicaSet. You can create a ReplicaSet by sending
a request to the federation apiserver.

You can do that using [kubectl](/docs/user-guide/kubectl/) by running:

```shell
kubectl --context=federation-cluster create -f myrs.yaml
```

The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.

Once a federated ReplicaSet is created, the federation control plane will create
a ReplicaSet in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:

```shell
kubectl --context=gce-asia-east1a get rs myrs
```

The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.

The ReplicaSets in the underlying clusters will match the federation ReplicaSet
except in the number of replicas. The federation control plane will ensure that the
sum of the replicas in each cluster match the desired number of replicas in the
federation ReplicaSet.
-->
## 创建联邦 ReplicaSet

联邦 ReplicaSet 的 API 和传统的 Kubernetes ReplicaSet API 是 100% 兼容的。您可以通过向联邦 apiserver 发送请求来创建联邦 ReplicaSet。

您可以通过使用 [kubectl](/docs/user-guide/kubectl/) 运行下面的命令来创建联邦 ReplicaSet:

```shell
kubectl --context=federation-cluster create -f myrs.yaml
```
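上述命令假设当前目录下存在清单文件 `myrs.yaml`,本文档并未给出其内容。下面是一个最小的 ReplicaSet 清单示例,仅作示意;此处使用 `apps/v1` API 版本,在较旧的联邦环境中可能需要使用当时的 `extensions/v1beta1`:

```yaml
# myrs.yaml:一个最小的 ReplicaSet 清单(示意用)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs
spec:
  replicas: 9            # 联邦控制平面会将副本分布到各注册集群
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11.1-alpine
        ports:
        - containerPort: 80
```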

`--context=federation-cluster` 参数告诉 kubectl 将请求发送到联邦 apiserver,而不是某个 Kubernetes 集群。

一旦联邦 ReplicaSet 被创建,联邦控制平面就会在所有底层 Kubernetes 集群中创建一个 ReplicaSet。您可以通过检查每个底层集群来进行验证,例如:

```shell
kubectl --context=gce-asia-east1a get rs myrs
```

上面的命令假定您在客户端中配置了一个名为 'gce-asia-east1a' 的上下文,对应该区域中的集群。

底层集群中的 ReplicaSet 除副本数外,都与联邦 ReplicaSet 保持一致。联邦控制平面将确保各个集群中副本数之和等于联邦 ReplicaSet 中期望的副本数。

<!--
### Spreading Replicas in Underlying Clusters

By default, replicas are spread equally in all the underlying clusters. For example:
if you have 3 registered clusters and you create a federated ReplicaSet with
`spec.replicas = 9`, then each ReplicaSet in the 3 clusters will have
`spec.replicas=3`.
To modify the number of replicas in each cluster, you can add an annotation with
key `federation.kubernetes.io/replica-set-preferences` to the federated ReplicaSet.
The value of the annotation is a serialized JSON that contains fields shown in
the following example:

```
{
  "rebalance": true,
  "clusters": {
    "foo": {
      "minReplicas": 10,
      "maxReplicas": 50,
      "weight": 100
    },
    "bar": {
      "minReplicas": 10,
      "maxReplicas": 100,
      "weight": 200
    }
  }
}
```

The `rebalance` boolean field specifies whether replicas already scheduled and running
may be moved in order to match current state to the specified preferences.
The `clusters` object field contains a map where users can specify the constraints
for replica placement across the clusters (`foo` and `bar` in the example).
For each cluster, you can specify the minimum number of replicas that should be
assigned to it (default is zero), the maximum number of replicas the cluster can
accept (default is unbounded) and a number expressing the relative weight of
preferences to place additional replicas to that cluster.
-->
### 底层集群中副本的分布

默认情况下,副本在所有底层集群中是均匀分布的。例如:如果您注册了 3 个集群,并创建了一个 `spec.replicas = 9` 的联邦 ReplicaSet,那么这 3 个集群中每个 ReplicaSet 的副本数都会是 `spec.replicas = 3`。
要修改每个集群中的副本数,您可以在联邦 ReplicaSet 上添加一个键为 `federation.kubernetes.io/replica-set-preferences` 的注解。
注解的值是序列化的 JSON,其中包含以下示例中显示的字段:

```
{
  "rebalance": true,
  "clusters": {
    "foo": {
      "minReplicas": 10,
      "maxReplicas": 50,
      "weight": 100
    },
    "bar": {
      "minReplicas": 10,
      "maxReplicas": 100,
      "weight": 200
    }
  }
}
```

`rebalance` 布尔字段指定是否可以移动已调度且正在运行的副本,以使当前状态符合指定的偏好。
`clusters` 对象字段包含一个映射,用户可以在其中指定副本在各集群(示例中为 `foo` 和 `bar`)间放置的约束。
对于每个集群,您可以指定应分配给它的最小副本数(默认值为零)、该集群可以接受的最大副本数(默认为无限制),以及一个表示向该集群放置额外副本的相对权重的数字。
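将上述 JSON 作为注解值附加到联邦 ReplicaSet 上时,其清单大致如下。这只是一个示意片段,集群名 `foo`、`bar` 沿用上文示例中的假设:

```yaml
# 带有副本分布偏好注解的联邦 ReplicaSet 片段(示意用)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs
  annotations:
    # 注解值是序列化的 JSON,与上文示例相同
    federation.kubernetes.io/replica-set-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "foo": {"minReplicas": 10, "maxReplicas": 50, "weight": 100},
          "bar": {"minReplicas": 10, "maxReplicas": 100, "weight": 200}
        }
      }
spec:
  replicas: 30
```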

<!--
## Updating a Federated ReplicaSet

You can update a federated ReplicaSet as you would update a Kubernetes
ReplicaSet; however, for a federated ReplicaSet, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The Federation control plane ensures that whenever the federated ReplicaSet is
updated, it updates the corresponding ReplicaSet in all underlying clusters to
match it.
If your update includes a change in number of replicas, the federation
control plane will change the number of replicas in underlying clusters to
ensure that their sum remains equal to the number of desired replicas in
federated ReplicaSet.
-->
## 更新联邦 ReplicaSet

您可以像更新 Kubernetes ReplicaSet 一样更新联邦 ReplicaSet。但是对于联邦 ReplicaSet,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。联邦控制平面会确保每当联邦 ReplicaSet 被更新时,所有底层集群中对应的 ReplicaSet 都会被更新以与其保持一致。

如果您的更新包含副本数量的更改,联邦控制平面将会更改底层集群中的副本数,以确保它们的总数仍然等于联邦 ReplicaSet 期望的副本数。

<!--
## Deleting a Federated ReplicaSet

You can delete a federated ReplicaSet as you would delete a Kubernetes
ReplicaSet; however, for a federated ReplicaSet, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.

For example, you can do that using kubectl by running:

```shell
kubectl --context=federation-cluster delete rs myrs
```
-->
## 删除联邦 ReplicaSet

您可以像删除 Kubernetes ReplicaSet 一样删除联邦 ReplicaSet。但是对于联邦 ReplicaSet,您必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。

例如,您可以使用 kubectl 运行下面的命令来删除联邦 ReplicaSet:

```shell
kubectl --context=federation-cluster delete rs myrs
```

{{< note >}}
<!--
At this point, deleting a federated ReplicaSet will not delete the corresponding ReplicaSets from underlying clusters. You must delete the underlying ReplicaSets manually. We intend to fix this in the future.
-->
此时,删除联邦 ReplicaSet 并不会删除底层集群中对应的 ReplicaSet,您必须手动删除底层集群中的 ReplicaSet。我们打算在将来修复这个问题。
{{< /note >}}

@ -1,164 +0,0 @@
---
title: 联邦 Secret
content_type: concept
---
<!--
---
title: Federated Secrets
content_type: concept
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This guide explains how to use secrets in Federation control plane.

Secrets in federation control plane (referred to as "federated secrets" in
this guide) are very similar to the traditional [Kubernetes
Secrets](/docs/concepts/configuration/secret/) providing the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
-->
本指南解释了如何在联邦控制平面中使用 secret。

联邦控制平面中的 Secret(在本指南中称为“联邦 secret”)与提供相同功能的传统 [Kubernetes Secret](/docs/concepts/configuration/secret/) 非常相似。
在联邦控制平面中创建它们可以确保它们在联邦中的所有集群间保持同步。

<!-- body -->

<!--
## Prerequisites
-->
## 先决条件

<!--
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/admin/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
-->
本指南假设你有一个正在运行的 Kubernetes 集群联邦。
如果没有,请参阅[联邦管理指南](/docs/admin/federation/),了解如何启动联邦集群(或者让集群管理员为你完成)。
其他教程也可以为你提供帮助,例如 Kelsey Hightower 编写的[这篇教程](https://github.com/kelseyhightower/kubernetes-cluster-federation)。

<!--
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Secrets](/docs/concepts/configuration/secret/) in particular.
-->
你还应该具备基本的 [Kubernetes 工作知识](/docs/tutorials/kubernetes-basics/),
特别是 [Secret](/docs/concepts/configuration/secret/)。

<!--
## Creating a Federated Secret
-->
## 创建联邦 Secret

<!--
The API for Federated Secret is 100% compatible with the
API for traditional Kubernetes Secret. You can create a secret by sending
a request to the federation apiserver.
-->
联邦 Secret 的 API 与传统 Kubernetes Secret 的 API 100% 兼容。
您可以通过向联邦 apiserver 发送请求来创建一个 Secret。

<!--
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
-->
你可以使用 [kubectl](/docs/user-guide/kubectl/) 运行以下命令来完成:

```shell
kubectl --context=federation-cluster create -f mysecret.yaml
```
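上述命令假设当前目录下存在清单文件 `mysecret.yaml`,本文档并未给出其内容。下面是一个最小的 Secret 清单示例,仅作示意:

```yaml
# mysecret.yaml:一个最小的 Secret 清单(示意用)
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  # data 中的值必须是 base64 编码;"YWRtaW4=" 即 "admin"
  username: YWRtaW4=
```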

<!--
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
-->
`--context=federation-cluster` 参数通知 kubectl 将请求提交给联邦 apiserver,而不是将其发送到某个 Kubernetes 集群。

<!--
Once a federated secret is created, the federation control plane will create
a matching secret in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
-->
创建联邦 secret 后,联邦控制平面将在所有底层 Kubernetes 集群中创建匹配的 secret。
您可以通过检查每个底层集群来验证这一点,例如:

```shell
kubectl --context=gce-asia-east1a get secret mysecret
```

<!--
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.

These secrets in underlying clusters will match the federated secret.
-->
以上假设您在客户端中为该区域中的集群配置了名为 “gce-asia-east1a” 的上下文。

底层集群中的这些 secret 将与联邦 secret 保持一致。

<!--
## Updating a Federated Secret
-->
## 更新联邦 Secret

<!--
You can update a federated secret as you would update a Kubernetes
secret; however, for a federated secret, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The Federation control plane ensures that whenever the federated secret is
updated, it updates the corresponding secrets in all underlying clusters to
match it.
-->
您可以像更新 Kubernetes secret 一样更新联邦 secret,但是对于联邦 secret,必须将请求发送到联邦 apiserver,
而不是某个特定的 Kubernetes 集群。联邦控制平面将确保每当联邦 secret 被更新时,所有底层集群中的相应 secret 都会被更新以与其保持一致。

<!--
## Deleting a Federated Secret
-->
## 删除联邦 Secret

<!--
You can delete a federated secret as you would delete a Kubernetes
secret; however, for a federated secret, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
-->
你可以像删除 Kubernetes secret 一样删除联邦 secret;但是对于联邦 secret,
必须将请求发送到联邦 apiserver,而不是某个特定的 Kubernetes 集群。

<!--
For example, you can do that using kubectl by running:
-->
例如,您可以通过运行以下命令使用 kubectl 执行此操作:

```shell
kubectl --context=federation-cluster delete secret mysecret
```

{{< note >}}
<!--
At this point, deleting a federated secret will not delete the corresponding secrets from underlying clusters. You must delete the underlying secrets manually. We intend to fix this in the future.
-->
此时,删除联邦 secret 并不会从底层集群中删除相应的 secret,你必须手动删除底层集群中的 secret。我们打算在将来解决这个问题。
{{< /note >}}

@ -1,509 +0,0 @@
---
title: 使用联邦服务实现跨集群的服务发现
content_type: task
weight: 140
---
<!-- ---
title: Cross-cluster Service Discovery using Federated Services
reviewers:
- bprashanth
- quinton-hoole
content_type: task
weight: 140
--- -->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!-- This guide explains how to use Kubernetes Federated Services to deploy
a common Service across multiple Kubernetes clusters. This makes it
easy to achieve cross-cluster service discovery and availability zone
fault tolerance for your Kubernetes applications. -->
本指南说明了如何使用 Kubernetes 联邦服务跨多个 Kubernetes 集群部署同一个服务。
这样可以轻松地为 Kubernetes 应用程序实现跨集群的服务发现和可用区容错。

<!-- Federated Services are created in much that same way as traditional
[Kubernetes Services](/docs/concepts/services-networking/service/) by making an API
call which specifies the desired properties of your service. In the
case of Federated Services, this API call is directed to the
Federation API endpoint, rather than a Kubernetes cluster API
endpoint. The API for Federated Services is 100% compatible with the
API for traditional Kubernetes Services. -->
联邦服务的创建方式与传统的 [Kubernetes Service](/docs/concepts/services-networking/service/) 几乎相同,
即通过 API 调用来指定服务所需的属性。不同之处在于,对于联邦服务,此 API 调用被定向到联邦 API 接入点,
而不是某个 Kubernetes 集群的 API 接入点。联邦服务的 API 与传统 Kubernetes 服务的 API 是 100% 兼容的。

<!-- Once created, the Federated Service automatically: -->
创建后,联邦服务会自动:

<!-- 1. Creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
2. Monitors the health of those service "shards" (and the clusters in which they reside), and
3. Manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
availability zone or regional outages. -->
1. 在集群联邦的每个底层集群中创建匹配的 Kubernetes 服务,
2. 监视这些服务“分片”(及其所在集群)的运行状况,以及
3. 在公共 DNS 提供商(例如 Google Cloud DNS 或 AWS Route 53)中管理一组 DNS 记录,从而确保联邦服务的客户端始终可以无缝地定位到合适的健康服务接入点,即使在集群、可用区或区域发生故障的情况下也是如此。

<!-- Clients inside your federated Kubernetes clusters (that is Pods) will
automatically find the local shard of the Federated Service in their
cluster if it exists and is healthy, or the closest healthy shard in a
different cluster if it does not. -->
联邦 Kubernetes 集群内部的客户端(即 Pod)会自动找到其所在集群中联邦服务的本地分片(如果该分片存在且健康);
如果不存在,则会找到其他集群中距离最近的健康分片。

{{< toc >}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

<!-- ## Prerequisites -->
## 前提

<!-- This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/admin/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you. -->
本指南假设您已经安装并运行了 Kubernetes 集群联邦。如果没有,请参阅[联邦管理指南](/docs/admin/federation/)了解如何建立集群联邦(或让您的集群管理员为您执行此操作)。其他教程也可以为您提供帮助,例如 Kelsey Hightower 编写的[这篇教程](https://github.com/kelseyhightower/kubernetes-cluster-federation)。

<!-- You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general, and [Services](/docs/concepts/services-networking/service/) in particular. -->
您还应该具备基本的 [Kubernetes 工作知识](/docs/tutorials/kubernetes-basics/),特别是 [Service](/docs/concepts/services-networking/service/)。

<!-- ## Hybrid cloud capabilities -->
## 混合云功能

<!-- Federations of Kubernetes Clusters can include clusters running in
different cloud providers (such as Google Cloud or AWS), and on-premises
(such as on OpenStack). Simply create all of the clusters that you
require, in the appropriate cloud providers and/or locations, and
register each cluster's API endpoint and credentials with your
Federation API Server (See the
[federation admin guide](/docs/admin/federation/) for details). -->
Kubernetes 集群联邦可以包含运行在不同云提供商(例如 Google Cloud 或 AWS)以及本地环境(例如 OpenStack)中的集群。只需在合适的云提供商和/或位置创建所需的所有集群,并向您的联邦 API 服务器注册每个集群的 API 接入点和凭据即可(有关详细信息,请参阅[联邦管理指南](/docs/admin/federation/))。

<!-- Thereafter, your applications and services can span different clusters
and cloud providers as described in more detail below. -->
此后,您的应用程序和服务就可以跨越不同的集群和云提供商,详见下文。

<!-- ## Creating a federated service -->
## 创建联邦服务

<!-- This is done in the usual way, for example: -->
以常规方式创建即可,例如:

```shell
kubectl --context=federation-cluster create -f services/nginx.yaml
```
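上述命令假设存在清单文件 `services/nginx.yaml`,本文档并未给出其内容。下面是一个与本示例一致的 Service 清单示意,其中 `run=nginx` 标签与后文创建的后端 Pod 相对应:

```yaml
# services/nginx.yaml:LoadBalancer 类型的 Service(示意用)
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer   # 需要外部可见的 IP,以支持跨集群访问
  selector:
    run: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
```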
|
||||
<!-- The '--context=federation-cluster' flag tells kubectl to submit the
|
||||
request to the Federation API endpoint, with the appropriate
|
||||
credentials. If you have not yet configured such a context, visit the
|
||||
[federation admin guide](/docs/admin/federation/) or one of the
|
||||
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
|
||||
to find out how to do so. -->
|
||||
'--context=federation-cluster' 标志通知 kubectl 使用合适的凭据将请求提交到联合 API 接入点。如果您尚未配置此类上下文,请访问 [联合管理指南](/docs/admin/federation/)或者 [管理教程](https://github.com/kelseyhightower/kubernetes-cluster-federation)找出解决方案。
|
||||
|
||||
<!-- As described above, the Federated Service will automatically create
|
||||
and maintain matching Kubernetes services in all of the clusters
|
||||
underlying your federation. -->
|
||||
如上所述,联合服务将自动创建并在所有集群中维护匹配的 Kubernetes 服务以支持联合。
|
||||
|
||||
<!-- You can verify this by checking in each of the underlying clusters, for example: -->
|
||||
您可以通过核对每个基础集群的信息来验证这一点, 例如:
|
||||
|
||||
``` shell
|
||||
kubectl --context=gce-asia-east1a get services nginx
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
nginx ClusterIP 10.63.250.98 104.199.136.89 80/TCP 9m
|
||||
```
|
||||
|
||||
<!-- The above assumes that you have a context named 'gce-asia-east1a'
|
||||
configured in your client for your cluster in that zone. The name and
|
||||
namespace of the underlying services will automatically match those of
|
||||
the Federated Service that you created above (and if you happen to
|
||||
have had services of the same name and namespace already existing in
|
||||
any of those clusters, they will be automatically adopted by the
|
||||
Federation and updated to conform with the specification of your
|
||||
Federated Service - either way, the end result will be the same). -->
|
||||
以上假设您有一个名为 'gce-asia-east1a' 上下文在客户端中为该区域中的集群配置。基础服务的名称和命名空间将自动与您在上面创建的联合服务匹配(如果服务的名称和命名空间与集群中任意一个服务器的名称和命名空间相同,它们将被联合并更新为符合您的规范联合服务 - 无论哪种方式,最终结果都是相同的)。
|
||||
|
||||
<!-- The status of your Federated Service will automatically reflect the
|
||||
real-time status of the underlying Kubernetes services, for example: -->
|
||||
联合服务的状态将自动反映基础 Kubernetes 服务的实时状态,例如:
|
||||
|
||||
``` shell
|
||||
kubectl --context=federation-cluster describe services nginx
|
||||
```
|
||||
```
|
||||
Name: nginx
|
||||
Namespace: default
|
||||
Labels: run=nginx
|
||||
Annotations: <none>
|
||||
Selector: run=nginx
|
||||
Type: LoadBalancer
|
||||
IP: 10.63.250.98
|
||||
LoadBalancer Ingress: 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89, ...
|
||||
Port: http 80/TCP
|
||||
Endpoints: <none>
|
||||
Session Affinity: None
|
||||
Events: <none>
|
||||
```
|
||||
|
||||
<!-- {{< note >}}
|
||||
The 'LoadBalancer Ingress' addresses of your Federated Service
|
||||
correspond with the 'LoadBalancer Ingress' addresses of all of the
|
||||
underlying Kubernetes services (once these have been allocated - this
|
||||
may take a few seconds). For inter-cluster and inter-cloud-provider
|
||||
networking between service shards to work correctly, your services
|
||||
need to have an externally visible IP address. [Service Type:
|
||||
Loadbalancer](/docs/concepts/services-networking/service/#loadbalancer)
|
||||
is typically used for this, although other options
|
||||
(for example [External IPs](/docs/concepts/services-networking/service/#external-ips)) exist.
|
||||
{{< /note >}} -->
|
||||
{{< note >}}
|
||||
联合服务的 'LoadBalancer Ingress' 地址与所有基础 Kubernetes 服务的 'LoadBalancer Ingress' 地址相对应(一旦分配了这些地址,这可能需要几秒钟)。为了使服务分片之间的集群和云提供商之间的网络正常工作,您的服务需要具有一个外部可见的 IP 地址。[Service Type:Loadbalancer](/docs/concepts/services-networking/service/#loadbalancer)。尽管存在其他选项(例如 [外部 IP](/docs/concepts/services-networking/service/#external-ips)),但通常会使用 [Service 类型:Loadbalancer](/docs/concepts/services-networking/service/#loadbalancer)。
|
||||
{{< /note >}}
|
||||
|
||||
<!-- Note also that we have not yet provisioned any backend Pods to receive
|
||||
the network traffic directed to these addresses (that is 'Service
|
||||
Endpoints'), so the Federated Service does not yet consider these to
|
||||
be healthy service shards, and has accordingly not yet added their
|
||||
addresses to the DNS records for this Federated Service (more on this
|
||||
aspect later). -->
|
||||
还要注意,我们尚未设置任何后端 Pod 来接收定向到这些地址的网络流量(即 'Service Endpoints'),因此联合服务尚未将它们视为健康的服务分片,并且尚未将其地址添加到联合服务的 DNS 记录中(稍后在此方面进行介绍)。
|
||||
|
||||
<!-- ## Adding backend pods -->
|
||||
## 添加后端 pods
|
||||
|
||||
<!-- To render the underlying service shards healthy, we need to add
|
||||
backend Pods behind them. This is currently done directly against the
|
||||
API endpoints of the underlying clusters (although in future the
|
||||
Federation server will be able to do all this for you with a single
|
||||
command, to save you the trouble). For example, to create backend Pods
|
||||
in 13 underlying clusters: -->
|
||||
为了使基础服务分片健康,我们需要在它们后面添加后端 Pod。当前,这是直接针对基础集群 API 接入点完成的(尽管将来,联合服务将能够通过单个命令为您完成所有这些操作,从而省去了麻烦)。例如,在13个基础集群中创建后端 Pod:
|
||||
|
||||
``` shell
|
||||
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
|
||||
europe-west1-d europe-west1-c europe-west1-b \
|
||||
us-central1-f us-central1-a us-central1-b us-central1-c \
|
||||
us-east1-d us-east1-c us-east1-b
|
||||
do
|
||||
kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
|
||||
done
|
||||
```
|
||||
|
||||
<!-- Note that `kubectl run` automatically adds the `run=nginx` labels required to associate the backend pods with their services. -->
|
||||
注意,`kubectl run` 会自动添加 `run=nginx` 标签,这是将后端 pod 与其服务关联起来所必需的。
|
||||
|
||||
<!-- ## Verifying public DNS records -->
|
||||
## 验证公共 DNS 记录
|
||||
|
||||
<!-- Once the above Pods have successfully started and have begun listening
|
||||
for connections, Kubernetes will report them as healthy endpoints of
|
||||
the service in that cluster (through automatic health checks). The Cluster
|
||||
Federation will in turn consider each of these
|
||||
service 'shards' to be healthy, and place them in serving by
|
||||
automatically configuring corresponding public DNS records. You can
|
||||
use your preferred interface to your configured DNS provider to verify
|
||||
this. For example, if your Federation is configured to use Google
|
||||
Cloud DNS, and a managed DNS domain 'example.com': -->
|
||||
一旦上述 Pod 成功启动并开始侦听连接,Kubernetes 就会将它们报告为该集群中服务的正常接入点(通过自动运行状况检查)。反过来,联合集群会将这些服务 '分片' 中的每一个视为健康,并通过自动配置相应的公共 DNS 记录将其置于服务中。您可以使用首选接口访问已配置的 DNS 提供程序来进行验证。例如,如果您的联邦配置为使用 Google Cloud DNS 和托管 DNS 域名 'example.com'。
|
||||
|
||||
``` shell
|
||||
gcloud dns managed-zones describe example-dot-com
|
||||
```
|
||||
```
|
||||
creationTime: '2016-06-26T18:18:39.229Z'
|
||||
description: Example domain for Kubernetes Cluster Federation
|
||||
dnsName: example.com.
|
||||
id: '3229332181334243121'
|
||||
kind: dns#managedZone
|
||||
name: example-dot-com
|
||||
nameServers:
|
||||
- ns-cloud-a1.googledomains.com.
|
||||
- ns-cloud-a2.googledomains.com.
|
||||
- ns-cloud-a3.googledomains.com.
|
||||
- ns-cloud-a4.googledomains.com.
|
||||
```
|
||||
|
||||
```shell
|
||||
gcloud dns record-sets list --zone example-dot-com
|
||||
```
|
||||
```
|
||||
NAME TYPE TTL DATA
|
||||
example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
|
||||
example.com. OA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
|
||||
nginx.mynamespace.myfederation.svc.example.com. A 180 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89,...
|
||||
nginx.mynamespace.myfederation.svc.us-central1-a.example.com. A 180 104.197.247.191
|
||||
nginx.mynamespace.myfederation.svc.us-central1-b.example.com. A 180 104.197.244.180
|
||||
nginx.mynamespace.myfederation.svc.us-central1-c.example.com. A 180 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.us-central1-f.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.us-central1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.us-central1.example.com. A 180 104.197.247.191, 104.197.244.180, 104.197.245.170
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-a.example.com. A 180 130.211.57.243
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-b.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.asia-east1.example.com.
|
||||
nginx.mynamespace.myfederation.svc.asia-east1-c.example.com. A 180 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.211.57.243, 130.211.56.221
|
||||
nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com.
|
||||
nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
|
||||
... etc.
|
||||
```
|
||||
|
||||
<!-- {{< note >}}
|
||||
If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
|
||||
|
||||
``` shell
|
||||
aws route53 list-hosted-zones
|
||||
```
|
||||
and
|
||||
|
||||
``` shell
|
||||
aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
|
||||
```
|
||||
{{< /note >}} -->
|
||||
{{< note >}}
|
||||
如果您的联邦配置为使用 AWS Route53,则可以使用类似的 AWS 工具,例如:
|
||||
|
||||
``` shell
|
||||
aws route53 list-hosted-zones
|
||||
```
|
||||
和
|
||||
|
||||
``` shell
|
||||
aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
|
||||
```
|
||||
{{< /note >}}
|
||||
|
||||
<!-- Whatever DNS provider you use, any DNS query tool (for example 'dig'
or 'nslookup') will of course also allow you to see the records
created by the Federation for you. Note that you should either point
these tools directly at your DNS provider (such as `dig
@ns-cloud-e1.googledomains.com...`) or expect delays in the order of
your configured TTL (180 seconds, by default) before seeing updates,
due to caching by intermediate DNS servers. -->
无论使用哪种 DNS 提供商,任何 DNS 查询工具(例如 'dig' 或者 'nslookup')都可以让您查看联邦为您创建的记录。请注意,您应该将这些工具直接指向您的 DNS 提供商(例如 `dig @ns-cloud-e1.googledomains.com ...`),否则由于中间 DNS 服务器的缓存,在看到更新之前可能要等待大约所配置 TTL(默认为 180 秒)的时间。

<!-- ### Some notes about the above example -->
### 有关上述示例的一些注意事项

<!-- 1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
3. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (that is Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
4. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses (' nginx.mynamespace.myfederation.svc.example.com.'). -->
1. 请注意,每个至少有一个健康后端端点的服务分片都有一条普通('A')记录。例如,在 us-central1-a 中,104.197.247.191 是该区域中服务分片的外部 IP 地址;在 asia-east1-a 中,该地址是 130.211.56.221。
2. 同样,还有地域级的 'A' 记录,其中包括该地域中所有健康的分片,例如 'us-central1'。这些地域级记录对没有特定区域偏好的客户端很有用,也是下文所述自动就近访问和故障转移机制的基础。
3. 对于当前没有健康后端端点的区域,会使用 CNAME('Canonical Name')记录把这些查询别名(自动重定向)到下一个最近的健康区域。在此示例中,us-central1-f 中的服务分片当前没有健康的后端端点(即 Pods),因此创建了一条 CNAME 记录,把查询自动重定向到该地域中的其他分片(本例中为 us-central1)。
4. 类似地,如果所在地域中也不存在健康分片,则搜索会继续向外扩展。在 europe-west1-d 可用区中没有健康的后端,因此查询先被重定向到更大范围的 europe-west1 地域(该地域同样没有健康的后端),再被重定向到全局健康地址集('nginx.mynamespace.myfederation.svc.example.com.')。

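为说明解析库如何沿上述记录层次自动向下遍历,下面给出一个极简的 Python 示意(假设性示例:记录数据取自上文,仅在内存中查表,并非真实的 DNS 查询):

```python
# 简化示意:模拟 DNS 解析器沿 CNAME 链遍历,直到得到 'A' 记录。
# 记录取自上文示例;真实场景中这一遍历由 DNS 解析库自动完成。
RECORDS = {
    "nginx.mynamespace.myfederation.svc.us-central1-f.example.com.":
        ("CNAME", "nginx.mynamespace.myfederation.svc.us-central1.example.com."),
    "nginx.mynamespace.myfederation.svc.us-central1.example.com.":
        ("A", ["104.197.247.191", "104.197.244.180", "104.197.245.170"]),
}

def resolve(name, records, max_depth=10):
    """沿 CNAME 链查找,返回最终 'A' 记录中的 IP 地址列表。"""
    for _ in range(max_depth):
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value  # CNAME:改为查询别名所指向的名称
    raise RuntimeError("CNAME 链过长")

print(resolve("nginx.mynamespace.myfederation.svc.us-central1-f.example.com.", RECORDS))
```

可以看到,对没有健康端点的 us-central1-f 的查询,最终落在地域级 'A' 记录的三个 IP 上。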
<!-- The above set of DNS records is automatically kept in sync with the
current state of health of all service shards globally by the
Federated Service system. DNS resolver libraries (which are invoked by
all clients) automatically traverse the hierarchy of 'CNAME' and 'A'
records to return the correct set of healthy IP addresses. Clients can
then select any one of the returned addresses to initiate a network
connection (and fail over automatically to one of the other equivalent
addresses if required). -->
上述 DNS 记录集由联邦服务系统自动与全球所有服务分片的当前健康状况保持同步。DNS 解析库(由所有客户端调用)会自动遍历 'CNAME' 和 'A' 记录的层次结构,返回正确的健康 IP 地址集。然后,客户端可以任选一个返回的地址发起网络连接(并在需要时自动故障转移到其他等效地址之一)。

<!-- ## Discovering a federated service -->
## 发现联邦服务

<!-- ### From pods inside your federated clusters -->
### 从联邦集群内的 Pods 来发现

<!-- By default, Kubernetes clusters come pre-configured with a
cluster-local DNS server ('KubeDNS'), as well as an intelligently
constructed DNS search path which together ensure that DNS queries
like "myservice", "myservice.mynamespace",
"bobsservice.othernamespace" etc issued by your software running
inside Pods are automatically expanded and resolved correctly to the
appropriate service IP of services running in the local cluster. -->
默认情况下,Kubernetes 集群预先配置了集群本地的 DNS 服务器('KubeDNS')以及智能构造的 DNS 搜索路径,二者共同确保 Pods 内运行的软件发出的 "myservice"、"myservice.mynamespace"、"bobsservice.othernamespace" 之类的 DNS 查询会被自动展开,并正确解析为本地集群中所运行服务的相应服务 IP。

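搜索路径的展开过程可以用一个极简的 Python 示意来说明(假设性示例:搜索域列表是为演示而设的典型取值,实际取值由 Pod 的 resolv.conf 决定):

```python
# 简化示意:对非全限定名依次追加搜索域,生成候选查询名列表。
# 真实展开由系统解析库按 resolv.conf 的 search 配置完成。
SEARCH_DOMAINS = [
    "mynamespace.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
]

def expand(name, search_domains):
    """返回解析器将依次尝试的全限定候选名。"""
    if name.endswith("."):              # 已是全限定名,不再展开
        return [name]
    candidates = [f"{name}.{d}." for d in search_domains]
    candidates.append(name + ".")       # 最后尝试原名本身
    return candidates

print(expand("myservice", SEARCH_DOMAINS))
```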
<!-- With the introduction of Federated Services and Cross-Cluster Service
Discovery, this concept is extended to cover Kubernetes services
running in any other cluster across your Cluster Federation, globally.
To take advantage of this extended range, you use a slightly different
DNS name of the form ```"<servicename>.<namespace>.<federationname>"```
to resolve Federated Services. For example, you might use
`myservice.mynamespace.myfederation`. Using a different DNS name also
avoids having your existing applications accidentally traversing
cross-zone or cross-region networks and you incurring perhaps unwanted
network charges or latency, without you explicitly opting in to this
behavior. -->
随着联邦服务和跨集群服务发现的引入,这一概念被扩展到覆盖集群联邦中任何其他集群内运行的 Kubernetes 服务(全局范围)。要利用这一扩展范围,您需要使用形式稍有不同的 DNS 名称 ```"<servicename>.<namespace>.<federationname>"``` 来解析联邦服务,例如 `myservice.mynamespace.myfederation`。使用不同的 DNS 名称还可以避免现有应用程序在没有明确选择这种行为的情况下,意外地跨可用区或跨地域访问网络,从而产生不必要的网络费用或延迟。

<!-- So, using our NGINX example service above, and the Federated Service
DNS name form just described, let's consider an example: A Pod in a
cluster in the `us-central1-f` availability zone needs to contact our
NGINX service. Rather than use the service's traditional cluster-local
DNS name (`"nginx.mynamespace"`, which is automatically expanded
to `"nginx.mynamespace.svc.cluster.local"`) it can now use the
service's Federated DNS name, which is
`"nginx.mynamespace.myfederation"`. This will be automatically
expanded and resolved to the closest healthy shard of my NGINX
service, wherever in the world that may be. If a healthy shard exists
in the local cluster, that service's cluster-local (typically
10.x.y.z) IP address will be returned (by the cluster-local KubeDNS).
This is almost exactly equivalent to non-federated service resolution
(almost because KubeDNS actually returns both a CNAME and an A record
for local federated services, but applications will be oblivious
to this minor technical difference). -->
因此,使用上面的 NGINX 示例服务和刚才描述的联邦服务 DNS 名称形式,来考虑一个例子:`us-central1-f` 可用区某集群中的一个 Pod 需要访问我们的 NGINX 服务。它现在可以使用服务的联邦 DNS 名称 `"nginx.mynamespace.myfederation"`,而不是服务传统的集群本地 DNS 名称(`"nginx.mynamespace"`,它会自动展开为 `"nginx.mynamespace.svc.cluster.local"`)。联邦 DNS 名称会被自动展开并解析到 NGINX 服务最近的健康分片,无论它位于世界何处。如果本地集群中存在健康的分片,则(由集群本地的 KubeDNS)返回该服务的集群本地 IP 地址(通常为 10.x.y.z)。这几乎完全等同于非联邦服务的解析("几乎"是因为 KubeDNS 实际上会为本地联邦服务同时返回 CNAME 和 A 记录,但应用程序不会感知这一细微的技术差异)。

<!-- But if the service does not exist in the local cluster (or it exists
but has no healthy backend pods), the DNS query is automatically
expanded to ```"nginx.mynamespace.myfederation.svc.us-central1-f.example.com"```
(that is, logically "find the external IP of one of the shards closest to
my availability zone"). This expansion is performed automatically by
KubeDNS, which returns the associated CNAME record. This results in
automatic traversal of the hierarchy of DNS records in the above
example, and ends up at one of the external IPs of the Federated
Service in the local us-central1 region (that is 104.197.247.191,
104.197.244.180 or 104.197.245.170). -->
但是,如果该服务在本地集群中不存在(或者存在但没有健康的后端 Pod),DNS 查询会被自动展开为 ```"nginx.mynamespace.myfederation.svc.us-central1-f.example.com"```(逻辑上即 "找到离我的可用区最近的某个分片的外部 IP")。这一展开由 KubeDNS 自动执行,它返回相关联的 CNAME 记录。于是查询会自动遍历上例中 DNS 记录的层次结构,最终得到本地 us-central1 地域中联邦服务的某个外部 IP(即 104.197.247.191、104.197.244.180 或 104.197.245.170)。

<!-- It is of course possible to explicitly target service shards in
availability zones and regions other than the ones local to a Pod by
specifying the appropriate DNS names explicitly, and not relying on
automatic DNS expansion. For example,
"nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
resolve to all of the currently healthy service shards in Europe, even
if the Pod issuing the lookup is located in the U.S., and irrespective
of whether or not there are healthy shards of the service in the U.S.
This is useful for remote monitoring and other similar applications. -->
当然,也可以不依赖自动 DNS 展开,而是显式指定适当的 DNS 名称,来定位 Pod 所在可用区和地域之外的服务分片。例如,即使发出查询的 Pod 位于美国,"nginx.mynamespace.myfederation.svc.europe-west1.example.com" 也会解析为欧洲当前所有健康的服务分片,而无论美国是否有该服务的健康分片。这对于远程监控和其他类似应用很有用。

<!-- ### From other clients outside your federated clusters -->
### 来自联邦集群之外的其他客户端

<!-- Much of the above discussion applies equally to external clients,
except that the automatic DNS expansion described is no longer
possible. So external clients need to specify one of the fully
qualified DNS names of the Federated Service, be that a zonal,
regional or global name. For convenience reasons, it is often a good
idea to manually configure additional static CNAME records in your
service, for example: -->
上面的大部分讨论同样适用于外部客户端,只是前面描述的自动 DNS 展开不再可用。因此,外部客户端需要指定联邦服务的某个完全限定 DNS 名称,可以是可用区级、地域级或全局名称。为方便起见,通常最好手动配置额外的静态 CNAME 记录,例如:

``` shell
eu.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.europe-west1.example.com.
us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
```

<!-- That way your clients can always use the short form on the left, and
always be automatically routed to the closest healthy shard on their
home continent. All of the required failover is handled for you
automatically by Kubernetes Cluster Federation. Future releases will
improve upon this even further. -->
这样,您的客户端就总是可以使用左侧的简短形式,并总是被自动路由到其所在大洲最近的健康分片。所有必需的故障转移都由 Kubernetes 集群联邦自动为您处理。将来的版本会在此基础上进一步改进。

<!-- ## Handling failures of backend pods and whole clusters -->
## 处理后端 Pod 和整个集群的故障

<!-- Standard Kubernetes service cluster-IP's already ensure that
non-responsive individual Pod endpoints are automatically taken out of
service with low latency (a few seconds). In addition, as alluded
above, the Kubernetes Cluster Federation system automatically monitors
the health of clusters and the endpoints behind all of the shards of
your Federated Service, taking shards in and out of service as
required (for example, when all of the endpoints behind a service, or perhaps
the entire cluster or availability zone go down, or conversely recover
from an outage). Due to the latency inherent in DNS caching (the cache
timeout, or TTL for Federated Service DNS records is configured to 3
minutes, by default, but can be adjusted), it may take up to that long
for all clients to completely fail over to an alternative cluster in
the case of catastrophic failure. However, given the number of
discrete IP addresses which can be returned for each regional service
endpoint (such as us-central1 above, which has three alternatives)
many clients will fail over automatically to one of the alternative
IP's in less time than that given appropriate configuration. -->
标准的 Kubernetes 服务集群 IP 已经确保无响应的单个 Pod 端点会以较低的延迟(几秒钟)自动下线。此外,如上文所述,Kubernetes 集群联邦系统会自动监控集群以及联邦服务所有分片背后端点的健康状况,按需让分片上线或下线(例如,当某服务背后的所有端点、甚至整个集群或可用区宕机,或者反过来从故障中恢复时)。由于 DNS 缓存固有的延迟(联邦服务 DNS 记录的缓存超时即 TTL 默认配置为 3 分钟,但可以调整),在灾难性故障的情况下,所有客户端完全故障转移到备用集群最长可能需要这么长的时间。不过,鉴于每个地域级服务端点可以返回多个离散的 IP 地址(例如上面的 us-central1 有三个备选地址),在适当配置下,许多客户端会在短于该时间内自动故障转移到其他 IP 之一。

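上文提到的"客户端在地域级 'A' 记录返回的多个 IP 之间自动故障转移",可以用如下极简 Python 示意(假设性示例:`is_reachable` 是为演示而设的可达性判断,真实客户端由其网络库完成重试):

```python
# 简化示意:按顺序尝试 'A' 记录返回的各个地址,返回第一个可达的地址。
def connect_with_failover(addresses, is_reachable):
    """依次尝试各地址;全部失败则抛出异常。"""
    for addr in addresses:
        if is_reachable(addr):
            return addr
    raise ConnectionError("所有端点均不可达")

# 取自上文 us-central1 地域记录的三个备选地址
addrs = ["104.197.247.191", "104.197.244.180", "104.197.245.170"]
# 假设第一个端点已宕机
print(connect_with_failover(addrs, lambda a: a != "104.197.247.191"))
```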
<!-- discussion -->

<!-- ## Troubleshooting -->
## 故障排除

<!-- ### I cannot connect to my cluster federation API -->
### 我无法连接到集群联邦 API

<!-- Check that your -->
请检查:

<!-- 1. Client (typically kubectl) is correctly configured (including API endpoints and login credentials).
2. Cluster Federation API server is running and network-reachable. -->
1. 客户端(通常是 kubectl)是否已正确配置(包括 API 端点和登录凭据)。
2. 集群联邦 API 服务器是否正在运行并且网络可达。

<!-- See the [federation admin guide](/docs/admin/federation/) to learn
how to bring up a cluster federation correctly (or have your cluster administrator do this for you), and how to correctly configure your client. -->
请参阅[联邦管理指南](/docs/admin/federation/),了解如何正确启动集群联邦(或让您的集群管理员为您执行此操作),以及如何正确配置客户端。

<!-- ### I can create a federated service successfully against the cluster federation API, but no matching services are created in my underlying clusters -->
### 我可以通过集群联邦 API 成功创建联邦服务,但底层集群中没有创建对应的服务

<!-- Check that: -->
检查:

<!-- 1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`).
2. Your clusters are all 'Active'. This means that the cluster Federation system was able to connect and authenticate against the clusters' endpoints. If not, consult the logs of the federation-controller-manager pod to ascertain what the failure might be.
```
kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -o name)
```
3. That the login credentials provided to the Cluster Federation API for the clusters have the correct authorization and quota to create services in the relevant namespace in the clusters. Again you should see associated error messages providing more detail in the above log file if this is not the case.
4. Whether any other error is preventing the service creation operation from succeeding (look for `service-controller` errors in the output of `kubectl logs federation-controller-manager --namespace federation`). -->
1. 您的集群已在集群联邦 API 中正确注册(`kubectl describe clusters`)。
2. 您的集群都处于 'Active' 状态。这意味着集群联邦系统能够连接集群的端点并通过身份验证。如果不是,请查阅 federation-controller-manager Pod 的日志,以确定故障原因。
   ```
   kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -o name)
   ```
3. 为各集群提供给集群联邦 API 的登录凭据具有正确的授权和配额,可以在集群的相关命名空间中创建服务。如果不是这种情况,您应该能在上述日志文件中看到提供更多细节的相关错误消息。
4. 是否有其他错误阻止服务创建操作成功(请在 `kubectl logs federation-controller-manager --namespace federation` 的输出中查找 `service-controller` 错误)。

<!-- ### I can create a federated service successfully, but no matching DNS records are created in my DNS provider.

Check that: -->
### 我可以成功创建联邦服务,但我的 DNS 提供商中没有创建对应的 DNS 记录

检查:

<!-- 1. Your federation name, DNS provider, DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`).
3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing). -->
1. 您的联邦名称、DNS 提供商、DNS 域名已正确配置。请参阅[联邦管理指南](/docs/admin/federation/)或[教程](https://github.com/kelseyhightower/kubernetes-cluster-federation),了解如何配置集群联邦系统的 DNS 提供商(或让您的集群管理员为您执行此操作)。
2. 确认集群联邦的 service-controller 已成功连接到所选的 DNS 提供商并通过身份验证(在 `kubectl logs federation-controller-manager --namespace federation` 的输出中查找 `service-controller` 的错误或成功信息)。
3. 确认集群联邦的 service-controller 已在您的 DNS 提供商中成功创建 DNS 记录(否则会在其日志中输出错误,更详细地解释失败原因)。

<!-- ### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names

Check that: -->
### 我的 DNS 提供商中创建了对应的 DNS 记录,但客户端无法解析这些名称

检查:

<!-- 1. The DNS registrar that manages your federation DNS domain has been correctly configured to point to your configured DNS provider's nameservers. See for example [Google Domains Documentation](https://support.google.com/domains/answer/3290309?hl=en&ref_topic=3251230) and [Google Cloud DNS Documentation](https://cloud.google.com/dns/update-name-servers), or equivalent guidance from your domain registrar and DNS provider. -->
1. 管理联邦 DNS 域名的 DNS 注册商已正确配置为指向所配置 DNS 提供商的名称服务器。例如,参见 [Google Domains 文档](https://support.google.com/domains/answer/3290309?hl=en&ref_topic=3251230)和 [Google Cloud DNS 文档](https://cloud.google.com/dns/update-name-servers),或您的域名注册商和 DNS 提供商的等效指南。

<!-- ### This troubleshooting guide did not help me solve my problem -->
### 此故障排除指南没有帮助我解决问题

<!-- 1. Please use one of our [support channels](/docs/tasks/debug-application-cluster/troubleshooting/) to seek assistance. -->
1. 请使用我们的[支持渠道](/docs/tasks/debug-application-cluster/troubleshooting/)之一寻求帮助。

<!-- ## For more information -->
## 更多信息

<!-- * [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/multicluster/federation.md) details use cases that motivated this work. -->
* [联邦提案](https://git.k8s.io/community/contributors/design-proposals/multicluster/federation.md)详细介绍了推动这项工作的用例。

---
title: 将 CoreDNS 设置为联邦集群的 DNS 提供者
content_type: tutorial
weight: 130
---

<!--
---
title: Set up CoreDNS as DNS provider for Cluster Federation
content_type: tutorial
weight: 130
---
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This page shows how to configure and deploy CoreDNS to be used as the
DNS provider for Cluster Federation.
-->
此页面显示如何配置和部署 CoreDNS,将其用作联邦集群的 DNS 提供者。

## {{% heading "objectives" %}}

<!--
* Configure and deploy CoreDNS server
* Bring up federation with CoreDNS as dns provider
* Setup CoreDNS server in nameserver lookup chain
-->
* 配置和部署 CoreDNS 服务器
* 使用 CoreDNS 作为 DNS 提供者启动联邦
* 在 nameserver 查找链中设置 CoreDNS 服务器

## {{% heading "prerequisites" %}}

<!--
* You need to have a running Kubernetes cluster (which is
referenced as host cluster). Please see one of the
[getting started](/docs/setup/) guides for
installation instructions for your platform.
* Support for `LoadBalancer` services in member clusters of federation is
mandatory to enable `CoreDNS` for service discovery across federated clusters.
-->
* 你需要有一个正在运行的 Kubernetes 集群(称为宿主集群)。请参阅[入门指南](/docs/setup/),了解你所用平台的安装说明。
* 联邦的成员集群必须支持 `LoadBalancer` 服务,才能启用 `CoreDNS` 进行跨联邦集群的服务发现。

<!-- lessoncontent -->

<!--
## Deploying CoreDNS and etcd charts
-->
## 部署 CoreDNS 和 etcd Chart

<!--
CoreDNS can be deployed in various configurations. Explained below is a
reference and can be tweaked to suit the needs of the platform and the
cluster federation.
-->
CoreDNS 可以以多种配置方式部署。下面介绍的是一种参考配置,可以根据平台和集群联邦的需要进行调整。

<!--
To deploy CoreDNS, we shall make use of helm charts. CoreDNS will be
deployed with [etcd](https://coreos.com/etcd) as the backend and should
be pre-installed. etcd can also be deployed using helm charts. Shown
below are the instructions to deploy etcd.
-->
为了部署 CoreDNS,我们将使用 Helm Chart。CoreDNS 将以 [etcd](https://coreos.com/etcd) 作为后端进行部署,etcd 应当预先安装。etcd 也可以使用 Helm Chart 部署。下面是部署 etcd 的命令。

    helm install --namespace my-namespace --name etcd-operator stable/etcd-operator
    helm upgrade --namespace my-namespace --set cluster.enabled=true etcd-operator stable/etcd-operator

<!--
*Note: etcd default deployment configurations can be overridden, suiting the
host cluster.*
-->
*注意:可以覆盖 etcd 的默认部署配置,以适应宿主集群。*

<!--
After deployment succeeds, etcd can be accessed with the
[http://etcd-cluster.my-namespace:2379](http://etcd-cluster.my-namespace:2379) endpoint within the host cluster.
-->
部署成功后,可以在宿主集群内通过 [http://etcd-cluster.my-namespace:2379](http://etcd-cluster.my-namespace:2379) 端点访问 etcd。

<!--
The CoreDNS default configuration should be customized to suit the federation.
Shown below is the Values.yaml, which overrides the default
configuration parameters on the CoreDNS chart.
-->
应当定制 CoreDNS 的默认配置以适应联邦。
下面是 Values.yaml,它覆盖了 CoreDNS Chart 的默认配置参数。

```yaml
isClusterService: false
serviceType: "LoadBalancer"
plugins:
  kubernetes:
    enabled: false
  etcd:
    enabled: true
    zones:
    - "example.com."
    endpoint: "http://etcd-cluster.my-namespace:2379"
```

<!--
The above configuration file needs some explanation:
-->
以上配置文件需要一些说明:

<!--
- `isClusterService` specifies whether CoreDNS should be deployed as a
cluster-service, which is the default. You need to set it to false, so
that CoreDNS is deployed as a Kubernetes application service.
- `serviceType` specifies the type of Kubernetes service to be created
for CoreDNS. You need to choose either "LoadBalancer" or "NodePort" to
make the CoreDNS service accessible outside the Kubernetes cluster.
- Disable `plugins.kubernetes`, which is enabled by default by
setting `plugins.kubernetes.enabled` to false.
- Enable `plugins.etcd` by setting `plugins.etcd.enabled` to
true.
- Configure the DNS zone (federation domain) for which CoreDNS is
authoritative by setting `plugins.etcd.zones` as shown above.
- Configure the etcd endpoint which was deployed earlier by setting
`plugins.etcd.endpoint`
-->
- `isClusterService` 指定是否将 CoreDNS 部署为集群服务(默认行为)。
  你需要将其设置为 false,以便将 CoreDNS 部署为 Kubernetes 应用服务。
- `serviceType` 指定为 CoreDNS 创建的 Kubernetes 服务的类型。
  你需要选择 "LoadBalancer" 或 "NodePort",以便在 Kubernetes 集群之外访问 CoreDNS 服务。
- 通过将 `plugins.kubernetes.enabled` 设置为 false,禁用默认启用的 `plugins.kubernetes`。
- 通过将 `plugins.etcd.enabled` 设置为 true,启用 `plugins.etcd`。
- 如上所示,通过设置 `plugins.etcd.zones`,配置 CoreDNS 对其具有权威的 DNS 域(联邦域)。
- 通过设置 `plugins.etcd.endpoint`,配置前面部署的 etcd 端点。

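作为参考,CoreDNS 的 etcd 插件沿用 skydns 的键约定:将域名各段倒序后拼接为 etcd 键路径。下面是一个仅演示键构造方式的 Python 示意(假设性示例,不执行真实的 etcd 读写):

```python
# 简化示意:按 skydns 约定,把 DNS 名称转换为 etcd 键路径。
# 前缀 "/skydns" 是该插件的常见默认值;实际取值以插件配置为准。
def etcd_key(domain, prefix="/skydns"):
    labels = domain.rstrip(".").split(".")
    return prefix + "/" + "/".join(reversed(labels))

print(etcd_key("nginx.mynamespace.myfederation.example.com."))
```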
<!--
Now deploy CoreDNS by running

    helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns

Verify that both etcd and CoreDNS pods are running as expected.
-->
现在运行以下命令部署 CoreDNS:

    helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns

验证 etcd 和 CoreDNS 的 Pod 都按预期运行。

<!--
## Deploying Federation with CoreDNS as DNS provider
-->
## 使用 CoreDNS 作为 DNS 提供者部署联邦

<!--
The Federation control plane can be deployed using `kubefed init`. CoreDNS
can be chosen as the DNS provider by specifying two additional parameters.
-->
可以使用 `kubefed init` 部署联邦控制平面。通过指定两个附加参数,可以选择 CoreDNS 作为 DNS 提供者。

    --dns-provider=coredns
    --dns-provider-config=coredns-provider.conf

<!--
coredns-provider.conf has below format:
-->
coredns-provider.conf 的格式如下:

    [Global]
    etcd-endpoints = http://etcd-cluster.my-namespace:2379
    zones = example.com.
    coredns-endpoints = <coredns-server-ip>:<port>

<!--
- `etcd-endpoints` is the endpoint to access etcd.
- `zones` is the federation domain for which CoreDNS is authoritative and is same as --dns-zone-name flag of `kubefed init`.
- `coredns-endpoints` is the endpoint to access CoreDNS server. This is an optional parameter introduced from v1.7 onwards.
-->
- `etcd-endpoints` 是访问 etcd 的端点。
- `zones` 是 CoreDNS 对其具有权威的联邦域,与 `kubefed init` 的 --dns-zone-name 参数相同。
- `coredns-endpoints` 是访问 CoreDNS 服务器的端点。这是从 v1.7 开始引入的可选参数。

{{< note >}}
<!--
`plugins.etcd.zones` in the CoreDNS configuration and the `--dns-zone-name` flag to `kubefed init` should match.
-->
CoreDNS 配置中的 `plugins.etcd.zones` 和 `kubefed init` 的 `--dns-zone-name` 参数应该匹配。
{{< /note >}}

<!--
## Setup CoreDNS server in nameserver resolv.conf chain
-->
## 在 nameserver resolv.conf 链中设置 CoreDNS 服务器

{{< note >}}
<!--
The following section applies only to versions prior to v1.7
and will be automatically taken care of if the `coredns-endpoints`
parameter is configured in `coredns-provider.conf` as described in
section above.
-->
以下部分仅适用于 v1.7 之前的版本;如果按上节所述在 `coredns-provider.conf` 中配置了 `coredns-endpoints` 参数,则会自动处理。
{{< /note >}}

<!--
Once the federation control plane is deployed and federated clusters
are joined to the federation, you need to add the CoreDNS server to the
pod's nameserver resolv.conf chain in all the federated clusters as this
self hosted CoreDNS server is not discoverable publicly. This can be
achieved by adding the below line to `dnsmasq` container's arg in
`kube-dns` deployment.
-->
一旦部署了联邦控制平面并将联邦集群加入联邦,你需要将 CoreDNS 服务器添加到所有联邦集群中 Pod 的 nameserver resolv.conf 链,因为这个自托管的 CoreDNS 服务器无法被公开发现。这可以通过在 `kube-dns` Deployment 中,将下面这行添加到 `dnsmasq` 容器的参数来实现。

    --server=/example.com./<CoreDNS endpoint>

<!--
Replace `example.com` above with federation domain.
-->
将上面的 `example.com` 替换为联邦域。

<!--
Now the federated cluster is ready for cross-cluster service discovery!
-->
现在联邦集群已经为跨集群服务发现做好了准备!

---
title: 在联邦中设置放置策略
content_type: task
---

<!--
title: Set up placement policies in Federation
content_type: task
-->

<!-- overview -->

{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}

<!--
This page shows how to enforce policy-based placement decisions over Federated
resources using an external policy engine.
-->
此页面显示如何使用外部策略引擎对联邦资源强制执行基于策略的放置决策。

## {{% heading "prerequisites" %}}

<!--
You need to have a running Kubernetes cluster (which is referenced as host
cluster). Please see one of the [getting started](/docs/setup/)
guides for installation instructions for your platform.
-->
您需要一个正在运行的 Kubernetes 集群(称为宿主集群)。有关您所用平台的安装说明,请参阅[入门](/docs/setup/)指南。

<!-- steps -->

<!--
## Deploying Federation and configuring an external policy engine
-->
## 部署联邦并配置外部策略引擎

<!--
The Federation control plane can be deployed using `kubefed init`.
-->
可以使用 `kubefed init` 部署联邦控制平面。

<!--
After deploying the Federation control plane, you must configure an Admission
Controller in the Federation API server that enforces placement decisions
received from the external policy engine.
-->
部署联邦控制平面之后,您必须在联邦 API 服务器中配置一个准入控制器,由它强制执行从外部策略引擎接收到的放置决策。

    kubectl create -f scheduling-policy-admission.yaml

<!--
Shown below is an example ConfigMap for the Admission Controller:
-->
下面是准入控制器的 ConfigMap 示例:

{{< codenew file="federation/scheduling-policy-admission.yaml" >}}

<!--
The ConfigMap contains three files:
-->
ConfigMap 包含三个文件:

<!--
* `config.yml` specifies the location of the `SchedulingPolicy` Admission
Controller config file.
* `scheduling-policy-config.yml` specifies the location of the kubeconfig file
required to contact the external policy engine. This file can also include a
`retryBackoff` value that controls the initial retry backoff delay in
milliseconds.
* `opa-kubeconfig` is a standard kubeconfig containing the URL and credentials
needed to contact the external policy engine.
-->
* `config.yml` 指定 `SchedulingPolicy` 准入控制器配置文件的位置。
* `scheduling-policy-config.yml` 指定联系外部策略引擎所需的 kubeconfig 文件的位置。
  该文件还可以包含一个 `retryBackoff` 值,以毫秒为单位控制初始重试退避延迟。
* `opa-kubeconfig` 是一个标准的 kubeconfig,包含联系外部策略引擎所需的 URL 和凭证。

<!--
Edit the Federation API server deployment to enable the `SchedulingPolicy`
Admission Controller.
-->
编辑联邦 API 服务器的 Deployment,以启用 `SchedulingPolicy` 准入控制器。

    kubectl -n federation-system edit deployment federation-apiserver

<!--
Update the Federation API server command line arguments to enable the Admission
Controller and mount the ConfigMap into the container. If there's an existing
`--enable-admission-plugins` flag, append `,SchedulingPolicy` instead of adding
another line.
-->
更新联邦 API 服务器的命令行参数以启用准入控制器,并将 ConfigMap 挂载到容器中。
如果已存在 `--enable-admission-plugins` 参数,则在其后追加 `,SchedulingPolicy`,而不是另起一行。

    --enable-admission-plugins=SchedulingPolicy
    --admission-control-config-file=/etc/kubernetes/admission/config.yml

<!--
Add the following volume to the Federation API server pod:
-->
将以下卷添加到联邦 API 服务器的 Pod:

    - name: admission-config
      configMap:
        name: admission

<!--
Add the following volume mount the Federation API server `apiserver` container:
-->
在联邦 API 服务器的 `apiserver` 容器中添加以下卷挂载:

    volumeMounts:
    - name: admission-config
      mountPath: /etc/kubernetes/admission

<!--
## Deploying an external policy engine
-->
## 部署外部策略引擎

<!--
The [Open Policy Agent (OPA)](http://openpolicyagent.org) is an open source,
general-purpose policy engine that you can use to enforce policy-based placement
decisions in the Federation control plane.
-->
[Open Policy Agent(OPA)](http://openpolicyagent.org)是一个开源的通用策略引擎,您可以使用它在联邦控制平面中强制执行基于策略的放置决策。

<!--
Create a Service in the host cluster to contact the external policy engine:
-->
在宿主集群中创建一个 Service,用于联系外部策略引擎:

    kubectl create -f policy-engine-service.yaml

<!--
Shown below is an example Service for OPA.
-->
下面是 OPA 的示例 Service。

{{< codenew file="federation/policy-engine-service.yaml" >}}

<!--
Create a Deployment in the host cluster with the Federation control plane:
-->
在运行联邦控制平面的宿主集群中创建 Deployment:

    kubectl create -f policy-engine-deployment.yaml

<!--
Shown below is an example Deployment for OPA.
-->
下面是 OPA 的示例 Deployment。

{{< codenew file="federation/policy-engine-deployment.yaml" >}}

<!--
## Configuring placement policies via ConfigMaps
-->
## 通过 ConfigMap 配置放置策略

<!--
The external policy engine will discover placement policies created in the
`kube-federation-scheduling-policy` namespace in the Federation API server.
-->
外部策略引擎将发现在 Federation API 服务器的 `kube-federation-scheduling-policy`
命名空间中创建的放置策略。

<!--
Create the namespace if it does not already exist:
-->
如果该命名空间尚不存在,请创建它:

```shell
kubectl --context=federation create namespace kube-federation-scheduling-policy
```

<!--
Configure a sample policy to test the external policy engine:
-->
配置一个示例策略来测试外部策略引擎:

```
# OPA supports a high-level declarative language named Rego for authoring and
# enforcing policies. For more information on Rego, visit
# http://openpolicyagent.org.

# Rego policies are namespaced by the "package" directive.
package kubernetes.placement

# Imports provide aliases for data inside the policy engine. In this case, the
# policy simply refers to "clusters" below.
import data.kubernetes.clusters

# The "annotations" rule generates a JSON object containing the key
# "federation.kubernetes.io/replica-set-preferences" mapped to <preferences>.
# The preferences value is generated dynamically by OPA when it evaluates the
# rule.
#
# The SchedulingPolicy Admission Controller running inside the Federation API
# server will merge these annotations into incoming Federated resources. By
# setting replica-set-preferences, we can control the placement of Federated
# ReplicaSets.
#
# Rules are defined to generate JSON values (booleans, strings, objects, etc.)
# When OPA evaluates a rule, it generates a value IF all of the expressions in
# the body evaluate successfully. All rules can be understood intuitively as
# <head> if <body> where <body> is true if <expr-1> AND <expr-2> AND ...
# <expr-N> is true (for some set of data.)
annotations["federation.kubernetes.io/replica-set-preferences"] = preferences {
    input.kind = "ReplicaSet"
    value = {"clusters": cluster_map, "rebalance": true}
    json.marshal(value, preferences)
}

# This "annotations" rule generates a value for the "federation.alpha.kubernetes.io/cluster-selector"
# annotation.
#
# In English, the policy asserts that resources in the "production" namespace
# that are not annotated with "criticality=low" MUST be placed on clusters
# labelled with "on-premises=true".
annotations["federation.alpha.kubernetes.io/cluster-selector"] = selector {
    input.metadata.namespace = "production"
    not input.metadata.annotations.criticality = "low"
    json.marshal([{
        "operator": "=",
        "key": "on-premises",
        "values": "[true]",
    }], selector)
}

# Generates a set of cluster names that satisfy the incoming Federated
# ReplicaSet's requirements. In this case, just PCI compliance.
replica_set_clusters[cluster_name] {
    clusters[cluster_name]
    not insufficient_pci[cluster_name]
}

# Generates a set of clusters that must not be used for Federated ReplicaSets
# that request PCI compliance.
insufficient_pci[cluster_name] {
    clusters[cluster_name]
    input.metadata.annotations["requires-pci"] = "true"
    not pci_clusters[cluster_name]
}

# Generates a set of clusters that are PCI certified. In this case, we assume
# clusters are annotated to indicate if they have passed PCI compliance audits.
pci_clusters[cluster_name] {
    clusters[cluster_name].metadata.annotations["pci-certified"] = "true"
}

# Helper rule to generate a mapping of desired clusters to weights. In this
# case, weights are static.
cluster_map[cluster_name] = {"weight": 1} {
    replica_set_clusters[cluster_name]
}
```

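为了帮助理解上述 Rego 策略中 `replica_set_clusters`、`insufficient_pci` 和
`pci_clusters` 规则的集群过滤逻辑,下面用纯 Python 做一个粗略的模拟。
这是一个假设性示例,函数名和数据结构均为示例虚构,并非联邦控制平面的实际实现:

```python
# 假设性示例:模拟 Rego 策略中基于 PCI 合规注解过滤集群的逻辑。

def pci_clusters(clusters):
    """返回带有 pci-certified=true 注解(即通过 PCI 合规审计)的集群名称集合。"""
    return {
        name for name, c in clusters.items()
        if c.get("metadata", {}).get("annotations", {}).get("pci-certified") == "true"
    }

def replica_set_clusters(input_rs, clusters):
    """返回满足传入 Federated ReplicaSet 要求(这里只考虑 PCI)的集群集合。"""
    requires_pci = (
        input_rs.get("metadata", {}).get("annotations", {}).get("requires-pci") == "true"
    )
    certified = pci_clusters(clusters)
    # 对应 Rego 中的 "not insufficient_pci[cluster_name]"
    return {
        name for name in clusters
        if not (requires_pci and name not in certified)
    }

def cluster_map(input_rs, clusters):
    """为每个候选集群生成静态权重,对应 Rego 中的 cluster_map 规则。"""
    return {name: {"weight": 1} for name in replica_set_clusters(input_rs, clusters)}

clusters = {
    "cluster-1": {"metadata": {"annotations": {"pci-certified": "true"}}},
    "cluster-2": {"metadata": {"annotations": {}}},
}
rs = {"kind": "ReplicaSet", "metadata": {"annotations": {"requires-pci": "true"}}}
print(cluster_map(rs, clusters))  # {'cluster-1': {'weight': 1}}
```

如示例所示,只有声明 `requires-pci=true` 的 ReplicaSet 才会过滤掉未经认证的集群;
未声明该注解的资源可以放置到所有集群。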
<!--
Shown below is the command to create the sample policy:
-->
下面是创建示例策略的命令:

```shell
kubectl --context=federation -n kube-federation-scheduling-policy create configmap scheduling-policy --from-file=policy.rego
```

<!--
This sample policy illustrates a few key ideas:
-->
这个示例策略说明了几个关键思想:

<!--
* Placement policies can refer to any field in Federated resources.
* Placement policies can leverage external context (for example, Cluster
  metadata) to make decisions.
* Administrative policy can be managed centrally.
* Policies can define simple interfaces (such as the `requires-pci` annotation) to
  avoid duplicating logic in manifests.
-->
* 放置策略可以引用联邦资源中的任何字段。
* 放置策略可以利用外部上下文(例如集群元数据)来做出决策。
* 管理性策略可以集中管理。
* 策略可以定义简单的接口(例如 `requires-pci` 注解),以避免在清单中重复逻辑。

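前文提到,SchedulingPolicy 准入控制器会把策略引擎生成的注解合并进传入的联邦资源。
下面的 Python 片段示意这一合并过程。这是一个假设性示例:
函数 `merge_policy_annotations` 为本示例虚构,仅用于说明思路,并非真实实现:

```python
import json

# 假设性示例:示意准入控制器把策略引擎生成的注解
# 合并进传入的 Federated 资源(真实实现细节以源码为准)。
def merge_policy_annotations(resource, policy_annotations):
    annotations = resource.setdefault("metadata", {}).setdefault("annotations", {})
    annotations.update(policy_annotations)
    return resource

rs = {"kind": "ReplicaSet", "metadata": {"name": "nginx-pci"}}
# cluster-selector 注解的值本身是一个 JSON 字符串
selector = json.dumps([{"operator": "=", "key": "on-premises", "values": ["true"]}])
merged = merge_policy_annotations(
    rs, {"federation.alpha.kubernetes.io/cluster-selector": selector}
)
print(merged["metadata"]["annotations"])
```

注意注解值是序列化后的 JSON 字符串而不是嵌套对象,这与 Rego 策略中
`json.marshal` 的用法一致。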
<!--
## Testing placement policies
-->
## 测试放置策略

<!--
Annotate one of the clusters to indicate that it is PCI certified.
-->
为其中一个集群添加注解,表明它已通过 PCI 认证。

```shell
kubectl --context=federation annotate clusters cluster-name-1 pci-certified=true
```

<!--
Deploy a Federated ReplicaSet to test the placement policy.
-->
部署一个 Federated ReplicaSet 来测试放置策略。

{{< codenew file="federation/replicaset-example-policy.yaml" >}}

<!--
Shown below is the command to deploy a ReplicaSet that *does* match the policy.
-->
下面是部署一个确实与该策略匹配的 ReplicaSet 的命令。

```shell
kubectl --context=federation create -f replicaset-example-policy.yaml
```

<!--
Inspect the ReplicaSet to confirm the appropriate annotations have been applied:
-->
检查该 ReplicaSet,确认已经应用了适当的注解:

```shell
kubectl --context=federation get rs nginx-pci -o jsonpath='{.metadata.annotations}'
```

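上述命令输出的注解是一个键值映射,其中 `replica-set-preferences`
的值本身是 JSON 字符串。下面的 Python 片段示意如何解析这样的注解值。
这是一个假设性示例,其中的示例数据仅用于说明解析方式,实际值取决于你的策略与集群:

```python
import json

# 假设性示例数据:模拟 kubectl 返回的注解映射,
# 实际内容由策略引擎根据你的策略和集群动态生成。
annotations = {
    "federation.kubernetes.io/replica-set-preferences":
        '{"clusters": {"cluster-name-1": {"weight": 1}}, "rebalance": true}',
}

# 注解值是 JSON 字符串,需要再做一次反序列化
prefs = json.loads(
    annotations["federation.kubernetes.io/replica-set-preferences"]
)
print(prefs["rebalance"])        # True
print(list(prefs["clusters"]))   # ['cluster-name-1']
```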