move docs to karmada website

Signed-off-by: Poor12 <shentiecheng@huawei.com>
Poor12 2022-08-04 14:49:02 +08:00
parent a0a8b69001
commit c5bdb4df92
81 changed files with 14 additions and 8602 deletions

@@ -22,13 +22,13 @@ kubectl karmada promote deployment nginx -n default -c cluster1
 Benefiting from the Kubernetes native API support, Karmada can easily integrate the single cluster ecosystem for multi-cluster,
 multi-cloud purpose. The following components have been verified by the Karmada community:
-- argo-cd: refer to [working with argo-cd](../working-with-argocd.md)
+- argo-cd: refer to [working with argo-cd](https://github.com/karmada-io/website/blob/main/docs/userguide/cicd/working-with-argocd.md)
 - Flux: refer to [propagating helm charts with flux](https://github.com/karmada-io/karmada/issues/861#issuecomment-998540302)
-- Istio: refer to [working with Istio](../working-with-istio-on-flat-network.md)
+- Istio: refer to [working with Istio](https://github.com/karmada-io/website/blob/main/docs/userguide/service/working-with-istio-on-flat-network.md)
-- Filebeat: refer to [working with Filebeat](../working-with-filebeat.md)
+- Filebeat: refer to [working with Filebeat](https://github.com/karmada-io/website/blob/main/docs/administrator/monitoring/working-with-filebeat.md)
-- Submariner: refer to [working with Submariner](../working-with-submariner.md)
+- Submariner: refer to [working with Submariner](https://github.com/karmada-io/website/blob/main/docs/userguide/network/working-with-submariner.md)
-- Velero: refer to [working with Velero](../working-with-velero.md)
+- Velero: refer to [working with Velero](https://github.com/karmada-io/website/blob/main/docs/administrator/backup/working-with-velero.md)
-- Prometheus: refer to [working with Prometheus](../working-with-prometheus.md)
+- Prometheus: refer to [working with Prometheus](https://github.com/karmada-io/website/blob/main/docs/administrator/monitoring/working-with-prometheus.md)
 ## OverridePolicy Improvements
@@ -38,7 +38,7 @@ able to define override policies with a single policy for specified workloads.
 ## Karmada Installation Improvements
 Introduced `init` command to `Karmada CLI`. Users are now able to install Karmada by a single command.
-Please refer to [Installing Karmada](../installation/installation.md) for more details.
+Please refer to [Installing Karmada](https://github.com/karmada-io/website/blob/main/docs/installation/installation.md) for more details.
 ## Configuring Karmada Controllers

@@ -42,7 +42,7 @@ Introduced `AggregateStatus` support for the `Resource Interpreter Webhook` fram
 Introduced `InterpreterOperationInterpretDependency` support for the `Resource Interpreter Webhook` framework,
 which enables propagating workload's dependencies automatically.
-Refer to [Customizing Resource Interpreter](../userguide/customizing-resource-interpreter.md) for more details.
+Refer to [Customizing Resource Interpreter](https://github.com/karmada-io/website/blob/main/docs/userguide/globalview/customizing-resource-interpreter.md) for more details.
 # Other Notable Changes

@@ -20,7 +20,7 @@ A new component `karmada-descheduler` was introduced, for rebalancing the schedu
 One example use case is: it helps evict pending replicas (Pods) from resource-starved clusters so that `karmada-scheduler`
 can "reschedule" these replicas (Pods) to a cluster with sufficient resources.
-For more details please refer to [Descheduler user guide.](../../docs/descheduler.md)
+For more details please refer to [Descheduler user guide.](https://github.com/karmada-io/website/blob/main/docs/userguide/scheduling/descheduler.md)
 ##### 2. Multi region HA support
@@ -101,15 +101,15 @@ Introduced `InterpretStatus` for the `Resource Interpreter Webhook` framework, w
 Karmada can thereby learn how to collect status for your resources, especially custom resources. For example, a custom resource
 may have many status fields and Karmada can collect only those you want.
-Refer to [Customizing Resource Interpreter](../../docs/userguide/customizing-resource-interpreter.md) for more details.
+Refer to [Customizing Resource Interpreter](https://github.com/karmada-io/website/blob/main/docs/userguide/globalview/customizing-resource-interpreter.md) for more details.
 #### Integrating verification with the ecosystem
 Benefiting from the Kubernetes native APIs, Karmada can easily integrate the Kubernetes ecosystem. The following components are verified by the Karmada community:
-- `Kyverno`: policy engine. Refer to [working with kyverno](../../docs/working-with-kyverno.md) for more details.
+- `Kyverno`: policy engine. Refer to [working with kyverno](https://github.com/karmada-io/website/blob/main/docs/userguide/security-governance/working-with-kyverno.md) for more details.
-- `Gatekeeper`: another policy engine. Refer to [working with gatekeeper](../../docs/working-with-gatekeeper.md) for more details.
+- `Gatekeeper`: another policy engine. Refer to [working with gatekeeper](https://github.com/karmada-io/website/blob/main/docs/userguide/security-governance/working-with-gatekeeper.md) for more details.
-- `fluxcd`: GitOps tooling for helm chart. Refer to [working with fluxcd](../../docs/working-with-flux.md) for more details.
+- `fluxcd`: GitOps tooling for helm chart. Refer to [working with fluxcd](https://github.com/karmada-io/website/blob/main/docs/userguide/cicd/working-with-flux.md) for more details.
 ### Other Notable Changes

@@ -1,44 +1 @@
-# Karmada
+Karmada documentation is all hosted on [karmada-io/website](https://github.com/karmada-io/website).
-## Overview
-## Quick Start
-## Installation
-Refer to [Installing Karmada](./installation/installation.md).
-## Concepts
-## User Guide
-- [Cluster Registration](./userguide/cluster-registration.md)
-- [Resource Propagating](./userguide/resource-propagating.md)
-- [Cluster Failover](./userguide/failover.md)
-- [Aggregated Kubernetes API Endpoint](./userguide/aggregated-api-endpoint.md)
-- [Customizing Resource Interpreter](./userguide/customizing-resource-interpreter.md)
-- [Configuring Controllers](./userguide/configure-controllers.md)
-## Best Practices
-## Adoptions
-User cases in production.
-- Karmada at [VIPKID](https://www.vipkid.com/)
-  * [English](./adoptions/vipkid-en.md)
-  * [中文](./adoptions/vipkid-zh.md)
-## Developer Guide
-## Contributors
-- [GitHub workflow](./contributors/guide/github-workflow.md)
-- [Cherry Pick Overview](./contributors/devel/cherry-picks.md)
-## Reference
-## Troubleshooting
-Refer to [Troubleshooting](./troubleshooting.md)
-## Frequently Asked Questions
-Refer to [FAQ](./frequently-asked-questions.md).

@ -1,146 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [VIPKID: Building a PaaS Platform with Karmada to Run Containers](#vipkid-building-a-paas-platform-with-karmada-to-run-containers)
- [Background](#background)
- [Born Multi-Cloud and Cross-Region](#born-multi-cloud-and-cross-region)
- [Multi-Cluster Policy](#multi-cluster-policy)
- [Cluster Disaster Recovery](#cluster-disaster-recovery)
- [Challenges and Pain Points](#challenges-and-pain-points)
- [Running the Same Application in Different Clusters](#running-the-same-application-in-different-clusters)
- [Quickly Migrating Applications upon Faults](#quickly-migrating-applications-upon-faults)
- [Why Karmada](#why-karmada)
- [Any Solutions Available?](#any-solutions-available)
- [Karmada, the Solution of Choice](#karmada-the-solution-of-choice)
- [Karmada at VIPKID](#karmada-at-vipkid)
- [Containerization Based on Karmada](#containerization-based-on-karmada)
- [Benefits](#benefits)
- [Gains](#gains)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# VIPKID: Building a PaaS Platform with Karmada to Run Containers
Author: Ci Yiheng, Backend R&D Expert, VIPKID
## Background
VIPKID is an online English education platform with more than 80,000 teachers and 1 million trainees.
It has delivered 150 million training sessions across countries and regions. To provide better services,
VIPKID deploys applications by region and close to teachers and trainees. Therefore,
VIPKID purchased dozens of clusters from multiple cloud providers around the world to build its internal infrastructure.
## Born Multi-Cloud and Cross-Region
VIPKID provides services internationally: native English-speaking teachers abroad teach students in China, while learners overseas study with teachers based in China.
To provide optimal online class experience, VIPKID sets up a low-latency network and deploys computing services close to teachers and trainees separately.
Such deployment depends on resources from multiple public cloud vendors. Managing multi-cloud resources has long become a part of VIPKID's IaaS operations.
### Multi-Cluster Policy
We first tried the single-cluster mode to containerize our platform because it is simple and low-cost. We dropped it after evaluating the network quality and the infrastructure (network and storage) solutions across clouds and regions, along with our project schedule. There were two major reasons:
1) Network latency and stability between clouds cannot be guaranteed.
2) Different vendors have different solutions for container networking and storage.
Costs would be high if we wanted to resolve these problems. Finally, we decided to configure Kubernetes clusters by cloud vendor and region. That's why we have so many clusters.
### Cluster Disaster Recovery
DR (Disaster Recovery) is much easier for containers than for VMs. Kubernetes provides DR solutions for pods and nodes, but not for a whole cluster. Thanks to our microservice reconstruction, we can quickly create a cluster or scale an existing one to transfer computing services.
## Challenges and Pain Points
### Running the Same Application in Different Clusters
During deployment, we found that the workloads of the same application vary greatly across clusters in terms of images, startup parameters (configurations), and release versions. In the early stage, we wanted our developers to manage applications directly on our own PaaS platform. However, the increasing customization made it more and more difficult to abstract the differences.
We had to turn to our O&M team, but even they failed in some complex scenarios. This is not DevOps: it neither reduces costs nor increases efficiency.
### Quickly Migrating Applications upon Faults
Fault migration can be focused on applications or clusters. The application-centric approach focuses on the
self-healing of key applications and the overall load in multi-cluster mode.
The cluster-centric approach focuses on the disasters (such as network faults) that may impact all clusters or on the
delivery requirements when creating new clusters. You need to set different policies for these approaches.
**Application-centric: Dynamic Migration**
Flexibly deploying an application in multiple clusters can ensure its stability. For example, if an instance in a cluster is faulty and cannot be quickly recovered, a new instance needs to be created automatically in another cluster of the same vendor or region based on the preset policy.
**Cluster-centric: Quick Cluster Startup**
Commonly, we start a new cluster to replace the unavailable one or to deliver services which depend on a specific cloud vendor or region. It would be best if clusters can be started as fast as pods.
## Why Karmada
### Any Solutions Available?
Service systems evolve fast, and the lines between their modules become clearer over time. To address the pain points, you need to abstract, decouple, and refactor your systems to some extent.
For us, service requirements were deeply coupled with cluster resources. We wanted to decouple them via multi-cluster management. Specifically, use the self-developed platform to manage the application lifecycle, and use a system to manage operation instructions on cluster resources.
We probed into the open source communities to find products that support multi-cluster management. However, most products either serve as a platform like ours or manage resources by cluster.
We wanted to manage multiple Kubernetes clusters like one single, large cluster. In this way, a workload can be regarded as an independent application (or a version of an application) instead of a replica of an application in multiple clusters.
We also wanted to lower the access costs as much as possible. We surveyed and evaluated many solutions in the communities and decided on Karmada.
### Karmada, the Solution of Choice
Karmada has the following advantages:
1) Karmada allows us to manage multiple clusters like one single cluster and manage resources in an application-centric approach. In addition, almost all configuration differences can be independently declared through Override policies in Karmada, which is simple, intuitive, and easy to manage.
2) Karmada uses native Kubernetes APIs. We need no adaption and the access cost is low. Karmada also manifests configurations through CRDs. It dynamically turns distribution and differentiated configurations into Propagation and Override policies and delivers them to the Karmada control plane.
3) Karmada sits under the open governance of a neutral community. The community welcomes open discussions on requirements and ideas and we got technically improved while contributing to the community.
## Karmada at VIPKID
Our platform caters to all container-based deployments, covering stateful or stateless applications, hybrid deployment of online and offline jobs, AI, and big data services. This platform does not rely on any public cloud. Therefore, we cannot use any encapsulated products of cloud vendors.
We use the internal IaaS platform to create and scale out clusters, configure VPCs, subnets, and security groups of different vendors. In this way, vendor differences become the least of worries for our PaaS platform.
In addition, we provide GitOps for developers to manage system applications and components. This is more user-friendly and efficient for skilled developers.
### Containerization Based on Karmada
At the beginning, we designed a component (cluster aggregation API) in the platform to interact with Kubernetes clusters. We retained the native Kubernetes APIs and added some cluster-related information.
However, there were complex problems during the implementation. For example, as the PaaS system needed to render declarations of different resources to multiple clusters, the applications we maintained in different clusters remained unrelated and fragmented. We made much effort to solve these problems, even after CRDs were introduced. The system still needed to keep track of the details of each cluster, which goes against what the cluster aggregation API is supposed to do.
When there are a large number of clusters that go online and offline frequently, we need to change the configurations in batches for applications in the GitOps model to ensure normal cluster running. However, GitOps did not cope with the increasing complexity as expected.
The following figure shows the differences before and after we used Karmada.
![Karmada at VIPKID](../images/adoptions-vipkid-architecture-en.png)
**After Karmada is introduced, the multi-cluster aggregation layer is truly unified.** We can manage resources by application on the Karmada control plane. We only need to interact with Karmada, not the clusters, which simplifies containerized application management and enables our PaaS platform to fully focus on service requirements.
With Karmada integrated into GitOps, system components can be easily released and upgraded in each cluster, exponentially more efficient than before.
## Benefits
Managing Kubernetes resources by application simplifies the platform and greatly improves utilization. Here are the improvements brought by Karmada.
**1) Higher deployment efficiency**
Previously, we needed to send deployment instructions to each cluster and monitor the deployment status, which required us to continuously check resources and handle exceptions. Now, application statuses are automatically collected and monitored by Karmada.
**2) Differentiated control on applications**
Adopting DevOps means developers can easily manage the lifecycle of applications.
We leverage Karmada Override policies to directly interconnect with application profiles such as environment variables, startup parameters, and image repositories so that developers can better control the differences of applications in different clusters.
**3) Quick cluster startup and adaptation to GitOps**
Basic services (system and common services) are configured for all clusters in Karmada Propagation policies and managed by Karmada when a new cluster is created. These basic services can be delivered along with the cluster, requiring no manual initialization and greatly shortening the delivery process.
Most basic services are managed by the GitOps system, which is convenient and intuitive.
**4) Short reconstruction period and no impact on services**
Thanks to the support of native Kubernetes APIs, we can quickly integrate Karmada into our platform.
We use Karmada the way we use Kubernetes. The only thing we need to configure is Propagation policies,
which can be customized by resource name, resource type, or LabelSelector.
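As a rough illustration (not VIPKID's actual configuration; all names are hypothetical), a PropagationPolicy that selects a workload by resource name and distributes it to two member clusters could look like this:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: web-propagation              # hypothetical policy name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: web                      # selected by name; a labelSelector could be used instead
  placement:
    clusterAffinity:
      clusterNames:
        - cluster-a                  # hypothetical member clusters
        - cluster-b
```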
## Gains
Since February 2021, three of us have become contributors to the Karmada community.
We have witnessed the releases of Karmada from version 0.5.0 to 1.0.0. Writing code that satisfies everyone is challenging.
We have learned a lot from the community during the practice, and we always welcome more of you to join us.

@ -1,111 +0,0 @@
# VIPKID: Implementing a Container PaaS Platform Based on Karmada
This article comes from the hands-on practice of the online education platform VIPKID in designing its container architecture. It analyzes VIPKID's containerization journey in depth, covering the business background, the challenges, the technology selection, the before-and-after comparison of introducing Karmada, and the benefits gained.
## Business Background
VIPKID's business covers dozens of countries and regions, with more than 80,000 contracted teachers serving 1 million trainees worldwide and 150 million classes delivered in total. To better serve teachers and trainees, applications are deployed close to their users by region: teacher-facing services are deployed near the teachers, and trainee-facing services are deployed on the trainee side. To this end, VIPKID purchased dozens of clusters from multiple cloud providers around the world to build its internal infrastructure.
### Multi-Cloud and Multi-Region by Necessity
VIPKID's business exchanges and complements educational resources at home and abroad, which includes bringing North American teachers to students in China and providing education services from China to learners overseas.
To deliver a high-quality class experience, besides building a stable, low-latency interconnection network, the different kinds of computing services all have to be deployed close to their users. For example, the services that the teacher client depends on are preferentially deployed overseas, while those that the parent client depends on are preferentially deployed in China.
Therefore, VIPKID uses network and computing resources from multiple public cloud vendors at home and abroad. Managing multi-cloud resources became part of VIPKID's IaaS management system long ago.
### Kubernetes Multi-Cluster Strategy
When designing VIPKID's container architecture, our first choice was the single-cluster mode because of its simple structure and low management cost. However, after evaluating the network quality between clouds and regions, the infrastructure (network and storage) solutions, and our project schedule, we had to abandon the idea. There were two major reasons:
1) Network latency and stability between clouds cannot be guaranteed.
2) Container networking and storage solutions differ among cloud vendors.
Solving these problems would have been costly. In the end we configured Kubernetes clusters by cloud vendor and region, which is why we have so many clusters.
### Cluster Disaster Recovery
Disaster recovery is much friendlier for containers than for traditional VMs. Kubernetes handles most Pod- and Node-level cases, but the failure of a single cluster is still ours to handle. Since VIPKID had already completed its microservice transformation early on, we can quickly transfer computing services by rapidly creating a new cluster or scaling out an existing one.
## Business Challenges and Pain Points
### How to Treat the Same Application in Different Clusters?
During multi-cloud deployment we ran into a complicated problem: the workloads of the same application are almost always different across clusters, for example in images, startup parameters (configurations), and sometimes even release versions. In the early stage we relied on our own container PaaS platform to manage these differences, but as the differentiated requirements and scenarios kept growing, abstracting the differences became harder and harder.
Our original intention was to let developers manage their applications directly on our PaaS platform, but the complex differences became ever harder to manage and we eventually had to rely on the O&M team for help. Moreover, in some complex scenarios even the O&M team could not manage things quickly and accurately. This deviated from the DevOps philosophy, increasing management costs and reducing efficiency.
### How to Complete Fault Migration Quickly?
For fault migration, let me describe it from two perspectives: application and cluster. The application perspective focuses on the self-healing of key applications and whether the overall load can be guaranteed across multiple clusters. The cluster perspective focuses more on cluster-level disasters (such as network faults) or the demand for delivering new clusters. The response strategies differ accordingly.
**Application perspective: dynamic migration of applications**
To guarantee the stability of key applications, the deployment of an application across clusters should be adjustable flexibly. For example, if the instance of a key application in cluster A fails and cannot be recovered quickly, a new instance needs to be created in a cluster of the same vendor or region according to a pre-defined policy, and all of this should happen automatically.
**Cluster perspective: how can a new cluster become ready quickly?**
Creating new clusters is very common in our business scenarios. For example, when a Kubernetes cluster becomes unavailable, we expect to fix the situation quickly by spinning up a new cluster; likewise, when the business needs a new cloud vendor or region, we need to deliver cluster resources quickly. We hope a cluster can be started as fast as a Pod.
## Why Karmada
### Don't Reinvent the Wheel: Look to the Open Source Community
For the pain points listed above, merely meeting temporary needs is far from enough. As a system evolves rapidly it must be abstracted and decoupled appropriately, and as the roles of its modules become clearer, it also needs to be refactored accordingly.
For our container PaaS platform, business requirements and cluster resources were becoming more and more tightly coupled. We drew the decoupling line at multi-cluster management: our self-developed platform manages the application lifecycle, and another system manages the operation instructions on cluster resources.
With the requirements clarified, we started searching the open source communities for such products. However, the open source products we found were all at the platform layer, that is, they solved the problem in a way similar to our self-developed platform, and most of them operated from the cluster perspective: resources were first split apart along the cluster dimension, which did not meet our demand for an application-centric view.
The application-centric view can be understood as managing multiple Kubernetes clusters as one large cluster, so that a workload can be regarded as one application (or one version of an application) instead of multiple workloads of the same application scattered across clusters.
Another principle was to keep the integration cost as low as possible. We surveyed multiple solutions in the open source communities and, after an overall evaluation, found that Karmada fit our needs well.
### Karmada It Is
After trying out Karmada, we found the following advantages:
**1) Karmada truly enables managing multiple clusters from the perspective of a single Kubernetes cluster**, letting us manage resources across clusters in an application-centric way. In addition, with Karmada's OverridePolicy design, almost all differences can be declared independently, which is simple, intuitive, and easy to manage, and matches how our internal application profiles describe an application's differences across clusters.
**2) Karmada uses the native Kubernetes APIs entirely**, so we can keep using them as before and the subsequent integration cost stays low. Karmada's CRDs are also relatively easy to understand: our platform's service profile module can easily render the distribution and differentiated configurations dynamically into Propagation and Override policies and deliver them to the Karmada control plane.
**3) The open community governance model** is what our team values most. While trying out Karmada, our understanding of requirements and the proposed solutions, from either our side or the community's, could all be discussed openly in the community. Meanwhile, our team's overall technical capability improved significantly while contributing code.
## Karmada at VIPKID
Our container platform carries all of the company's containerized deployment needs, including stateful and stateless applications, hybrid online and offline jobs, AI, and big data. We also require that the design and implementation of the PaaS platform have no dependency on any single public cloud, so we cannot use products encapsulated by cloud vendors.
We rely on the internal IaaS platform to manage the various kinds of infrastructure from multiple cloud vendors, including creating and scaling Kubernetes clusters and configuring VPCs, subnets, and security groups. This matters because it lets us standardize the Kubernetes clusters across vendors, so the upper-layer PaaS platform hardly needs to care about vendor-level differences.
In addition, for system-level applications and components we created another management channel for developers: GitOps. It is more friendly for advanced developers and more efficient for installing and deploying system application components.
### Containerization Based on Karmada
At the beginning of the platform implementation, we split out a dedicated component (the "cluster aggregation API" on the left of the architecture figure below) to interact with the Kubernetes clusters. It exposed the native Kubernetes APIs upward and attached some cluster-related information.
But during implementation, the "container application management" system had to perform many operations to adapt to the complexity of multiple clusters. For example, although the PaaS system sees one application, it has to render different complete resource declarations to different clusters, so when actually maintaining a multi-cluster application we were still dealing with unrelated, fragmented pieces, because we could not solve this problem at the lower layer. Solving problems like this still consumed quite a lot of the team's resources, especially after CRDs were introduced, when the same kind of problem had to be solved again and again. Moreover, the system could not avoid caring about the details inside each cluster, which went against the original intention of the "cluster aggregation API" component.
In addition, GitOps is also strongly tied to clusters. With a large number of clusters that frequently go online and offline, keeping things running normally required batch changes to the GitOps application configurations, and the added complexity meant the overall effect did not meet expectations.
The following figure compares VIPKID's architecture before and after introducing Karmada:
![Karmada at VIPKID](../images/adoptions-vipkid-architecture-zh.png "Karmada at VIPKID")
**After Karmada was introduced, the multi-cluster aggregation layer was truly unified.** We can manage resources by application on the Karmada control plane; in most cases we do not need to go deep into the managed clusters but only interact with Karmada. This greatly simplifies our "container application management" system. Our PaaS platform can now fully focus on business requirements, and Karmada's capabilities already satisfy our various needs.
After the GitOps system adopted Karmada, system-level components can also be easily released and upgraded in each cluster. We not only enjoy the convenience of GitOps itself, but also gain a multiplied efficiency boost from combining GitOps with Karmada.
## Benefits
Managing Kubernetes resources by application reduces platform complexity and greatly improves efficiency. The following describes the changes brought by Karmada in terms of our PaaS platform's features.
**1) Significantly faster deployment of multi-cluster applications:** Previously we had to send deployment instructions to every cluster and then monitor whether the deployment status was abnormal, which meant continuously checking resource statuses across clusters and handling each exception differently. The process was tedious and slow. With Karmada, the application status in every cluster is collected and aggregated automatically, so we can perceive application status through Karmada.
**2) Differentiated control of applications can be opened to developers:** The most important part of the DevOps culture is that developers can fully participate and conveniently manage the whole application lifecycle. We make full use of Karmada's Override policies, connecting them directly to the application profiles, so developers can clearly understand and control the differences of an application across clusters. Environment variables, startup parameters, and image registries are supported so far.
**3) Quick cluster startup and adaptation to GitOps:** Basic services (system-level and common services) are configured in Karmada Propagation policies to target all clusters. Once a new cluster is created, it joins Karmada for management and these basic services are delivered along with the cluster, saving us the initialization work and greatly shortening the delivery process. Most basic services are managed by our GitOps system, which is more convenient and intuitive than configuring cluster by cluster as before.
**4) Short platform reconstruction period with no impact on business:** Thanks to Karmada's native Kubernetes APIs, we spent very little time integrating Karmada. With Karmada, we really can keep using Kubernetes the way we always have. The only thing to consider is customizing Propagation policies, which can be declared by resource name, by resource type, or by LabelSelector, and that is extremely convenient.
### Gains from Participating in the Open Source Project
Since we first encountered the Karmada project in February 2021, three members of our team have become contributors to the Karmada community, participating in and witnessing the release of many features from version 0.5.0 to 1.0.0. Karmada has also witnessed the growth of our team.
Writing code for your own needs is easy; discussing your needs with others, scoping and trading off everyone's needs, choosing a solution that works for all, and then turning it into code is much harder. Our team has gained and grown a lot during this time, and we are proud to take part in building the Karmada project. We hope more developers will join the Karmada community and make its ecosystem even more prosperous!

@ -1,47 +0,0 @@
# bash auto-completion on Linux
## Introduction
The karmadactl completion script for Bash can be generated with the command `karmadactl completion bash`. Sourcing the completion script in your shell enables karmadactl autocompletion.
However, the completion script depends on [bash-completion](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
## Install bash-completion
bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.
```bash
source /usr/share/bash-completion/bash_completion
```
Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
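For example, assuming you sourced bash-completion from `~/.bashrc` as described above:

```bash
# reload the shell configuration, then check that the completion function is defined
source ~/.bashrc
type _init_completion
```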
## Enable karmadactl autocompletion
You now need to ensure that the karmadactl completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
- Source the completion script in your ~/.bashrc file:
```bash
echo 'source <(karmadactl completion bash)' >>~/.bashrc
```
- Add the completion script to the /etc/bash_completion.d directory:
```bash
karmadactl completion bash >/etc/bash_completion.d/karmadactl
```
If you have an alias for karmadactl, you can extend shell completion to work with that alias:
```bash
echo 'alias km=karmadactl' >>~/.bashrc
echo 'complete -F __start_karmadactl km' >>~/.bashrc
```
> **Note:** bash-completion sources all completion scripts in /etc/bash_completion.d.
Both approaches are equivalent. After reloading your shell, karmadactl autocompletion should be working.
## Enable kubectl-karmada autocompletion
Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).
We will update the documentation as soon as it does.

@ -1,119 +0,0 @@
# Overview
This document explains how cherry picks are managed on release branches within
the `karmada-io/karmada` repository.
A common use case for this task is backporting PRs from master to release
branches.
> This doc is lifted from [Kubernetes cherry-pick](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md).
- [Prerequisites](#prerequisites)
- [What Kind of PRs are Good for Cherry Picks](#what-kind-of-prs-are-good-for-cherry-picks)
- [Initiate a Cherry Pick](#initiate-a-cherry-pick)
- [Cherry Pick Review](#cherry-pick-review)
- [Troubleshooting Cherry Picks](#troubleshooting-cherry-picks)
- [Cherry Picks for Unsupported Releases](#cherry-picks-for-unsupported-releases)
## Prerequisites
- A pull request merged against the `master` branch.
- The release branch exists (example: [`release-1.0`](https://github.com/karmada-io/karmada/tree/release-1.0))
- The normal git and GitHub configured shell environment for pushing to your
karmada `origin` fork on GitHub and making a pull request against a
configured remote `upstream` that tracks
`https://github.com/karmada-io/karmada.git`, including `GITHUB_USER`.
- Have GitHub CLI (`gh`) installed following [installation instructions](https://github.com/cli/cli#installation).
- A GitHub personal access token which has the "repo" and "read:org" permissions.
Permissions are required for [gh auth login](https://cli.github.com/manual/gh_auth_login)
and not used for anything unrelated to cherry-pick creation process
(creating a branch and initiating PR).
## What Kind of PRs are Good for Cherry Picks
Compared to the normal master branch's merge volume across time,
the release branches see one or two orders of magnitude less PRs.
This is because there is an order or two of magnitude higher scrutiny.
Again, the emphasis is on critical bug fixes, e.g.,
- Loss of data
- Memory corruption
- Panic, crash, hang
- Security
A bugfix for a functional issue (not a data loss or security issue) that only
affects an alpha feature does not qualify as a critical bug fix.
If you are proposing a cherry pick and it is not a clear and obvious critical
bug fix, please reconsider. If upon reflection you wish to continue, bolster
your case by supplementing your PR with e.g.,
- A GitHub issue detailing the problem
- Scope of the change
- Risks of adding a change
- Risks of associated regression
- Testing performed, test cases added
- Key stakeholder reviewers/approvers attesting to their confidence in the
change being a required backport
It is critical that our full community is actively engaged on enhancements in
the project. If a released feature was not enabled on a particular provider's
platform, this is a community miss that needs to be resolved in the `master`
branch for subsequent releases. Such enabling will not be backported to the
patch release branches.
## Initiate a Cherry Pick
- Run the [cherry pick script][cherry-pick-script]
This example applies a master branch PR #1206 to the remote branch
`upstream/release-1.0`:
```shell
hack/cherry_pick_pull.sh upstream/release-1.0 1206
```
- Be aware the cherry pick script assumes you have a git remote called
`upstream` that points at the Karmada github org.
- You will need to run the cherry pick script separately for each patch
release you want to cherry pick to. Cherry picks should be applied to all
active release branches where the fix is applicable.
- If `GITHUB_TOKEN` is not set you will be asked for your github password:
provide the github [personal access token](https://github.com/settings/tokens) rather than your actual github
password. If you can securely set the environment variable `GITHUB_TOKEN`
to your personal access token then you can avoid an interactive prompt.
Refer [https://github.com/github/hub/issues/2655#issuecomment-735836048](https://github.com/github/hub/issues/2655#issuecomment-735836048)
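For example, building on the last point above, you could export the variable (the token value here is a hypothetical placeholder) so the script does not prompt interactively:

```shell
# export your GitHub personal access token (placeholder value) so the
# cherry pick script can authenticate without an interactive prompt
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxx
hack/cherry_pick_pull.sh upstream/release-1.0 1206
```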
## Cherry Pick Review
As with any other PR, code OWNERS review (`/lgtm`) and approve (`/approve`) on
cherry pick PRs as they deem appropriate.
The same release note requirements apply as normal pull requests, except the
release note stanza will auto-populate from the master branch pull request from
which the cherry pick originated.
## Troubleshooting Cherry Picks
Contributors may encounter some of the following difficulties when initiating a
cherry pick.
- A cherry pick PR does not apply cleanly against an old release branch. In
that case, you will need to manually fix conflicts.
- The cherry pick PR includes code that does not pass CI tests. In such a case
you will have to fetch the auto-generated branch from your fork, amend the
problematic commit and force push to the auto-generated branch.
Alternatively, you can create a new PR, which is noisier.
## Cherry Picks for Unsupported Releases
Which releases the community supports and patches still needs to be discussed.
[cherry-pick-script]: https://github.com/karmada-io/karmada/blob/master/hack/cherry_pick_pull.sh

@ -1,114 +0,0 @@
# Overview
This document explains how lifted code is managed.
A common use case for this task is a developer lifting code from other repositories to the `pkg/util/lifted` directory.
- [Steps of lifting code](#steps-of-lifting-code)
- [How to write lifted comments](#how-to-write-lifted-comments)
- [Examples](#examples)
## Steps of lifting code
- Copy code from another repository and save it to a go file under `pkg/util/lifted`.
- Optionally change the lifted code.
- Add lifted comments for the code [as guided](#how-to-write-lifted-comments).
- Run `hack/update-lifted.sh` to update the lifted doc `pkg/util/lifted/doc.go`.
## How to write lifted comments
Lifted comments shall be placed just before the lifted code (could be a func, type, var or const). Only empty lines and comments are allowed between lifted comments and lifted code.
Lifted comments are composed of one or more comment lines, each in the format of `+lifted:KEY[=VALUE]`. The value is optional for some keys.
Valid keys are as follows:
- source:
Key `source` is required. Its value indicates where the code is lifted from.
- changed:
Key `changed` is optional. It indicates whether the code is changed. Value is optional (`true` or `false`, defaults to `true`). Not adding this key or setting it to `false` means no code change.
## Examples
### Lifting function
Lift function `IsQuotaHugePageResourceName` to `corehelpers.go`:
```go
// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61
// IsQuotaHugePageResourceName returns true if the resource name has the quota
// related huge page resource prefix.
func IsQuotaHugePageResourceName(name corev1.ResourceName) bool {
return strings.HasPrefix(string(name), corev1.ResourceHugePagesPrefix) || strings.HasPrefix(string(name), corev1.ResourceRequestsHugePagesPrefix)
}
```
Added in `doc.go`:
```markdown
| lifted file | source file | const/var/type/func | changed |
|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
| corehelpers.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61 | func IsQuotaHugePageResourceName | N |
```
### Changed lifting function
Lift and change function `GetNewReplicaSet` to `deployment.go`
```go
// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544
// +lifted:changed
// GetNewReplicaSet returns a replica set that matches the intent of the given deployment; get ReplicaSetList from client interface.
// Returns nil if the new replica set doesn't exist yet.
func GetNewReplicaSet(deployment *appsv1.Deployment, f ReplicaSetListFunc) (*appsv1.ReplicaSet, error) {
rsList, err := ListReplicaSetsByDeployment(deployment, f)
if err != nil {
return nil, err
}
return FindNewReplicaSet(deployment, rsList), nil
}
```
Added in `doc.go`:
```markdown
| lifted file | source file | const/var/type/func | changed |
|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
| deployment.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544 | func GetNewReplicaSet | Y |
```
### Lifting const
Lift const `isNegativeErrorMsg` to `corevalidation.go`:
```go
// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59
const isNegativeErrorMsg string = apimachineryvalidation.IsNegativeErrorMsg
```
Added in `doc.go`:
```markdown
| lifted file | source file | const/var/type/func | changed |
|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
| corevalidation.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59 | const isNegativeErrorMsg | N |
```
### Lifting type
Lift type `Visitor` to `visitpod.go`:
```go
// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83
// Visitor is called with each object name, and returns true if visiting should continue
type Visitor func(name string) (shouldContinue bool)
```
Added in `doc.go`:
```markdown
| lifted file | source file | const/var/type/func | changed |
|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
| visitpod.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83 | type Visitor | N |
```

@ -1,61 +0,0 @@
# Profiling Karmada
## Enable profiling
To profile Karmada components running inside a Kubernetes pod, set the `--enable-pprof` flag to true in the YAML of the Karmada components (see the sketch after the flag description below).
The default profiling address is 127.0.0.1:6060, and it can be configured via `--profiling-bind-address`.
The components built from the Karmada source code support the flags above, including `karmada-agent`, `karmada-aggregated-apiserver`, `karmada-controller-manager`, `karmada-descheduler`, `karmada-search`, `karmada-scheduler`, `karmada-scheduler-estimator`, and `karmada-webhook`.
```
--enable-pprof
Enable profiling via web interface host:port/debug/pprof/.
--profiling-bind-address string
The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
```
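Enabling it for a component such as `karmada-controller-manager` then means adding these flags to the container command in its deployment YAML, roughly like this (a sketch; all other fields are omitted):

```yaml
containers:
  - name: karmada-controller-manager
    command:
      - /bin/karmada-controller-manager          # existing entrypoint (path may differ)
      - --enable-pprof=true
      - --profiling-bind-address=127.0.0.1:6060
```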
## Expose the endpoint at the local port
You can reach the application in the pod by port forwarding with kubectl, for example:
```shell
$ kubectl -n karmada-system get pod
NAME READY STATUS RESTARTS AGE
karmada-controller-manager-7567b44b67-8kt59 1/1 Running 0 19s
...
```
```shell
$ kubectl -n karmada-system port-forward karmada-controller-manager-7567b44b67-8kt59 6060
Forwarding from 127.0.0.1:6060 -> 6060
Forwarding from [::1]:6060 -> 6060
```
The HTTP endpoint will now be available as a local port.
## Generate the data
You can then generate the file for the memory profile with curl and pipe the data to a file:
```shell
$ curl http://localhost:6060/debug/pprof/heap > heap.pprof
```
Generate the file for the CPU profile with curl and pipe the data to a file (7200 seconds is two hours):
```shell
curl "http://localhost:6060/debug/pprof/profile?seconds=7200" > cpu.pprof
```
## Analyze the data
To analyze the data:
```shell
go tool pprof heap.pprof
```
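Inside the interactive pprof session you can then inspect the profile, for example (the `web` command additionally requires Graphviz):

```shell
(pprof) top 10   # show the 10 functions consuming the most memory
(pprof) web      # render the profile as a call graph in the browser
```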
## Read more about profiling
1. [Profiling Golang Programs on Kubernetes](https://danlimerick.wordpress.com/2017/01/24/profiling-golang-programs-on-kubernetes/)
2. [Official Go blog](https://blog.golang.org/pprof)

@ -1,279 +0,0 @@
---
title: "GitHub Workflow"
weight: 6
description: |
An overview of the GitHub workflow used by the Karmada project. It includes
some tips and suggestions on things such as keeping your local environment in
sync with upstream and commit hygiene.
---
> This doc is lifted from [Kubernetes github-workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md).
![Git workflow](git_workflow.png)
### 1 Fork in the cloud
1. Visit https://github.com/karmada-io/karmada
2. Click `Fork` button (top right) to establish a cloud-based fork.
### 2 Clone fork to local storage
Per Go's [workspace instructions][go-workspace], place Karmada's code on your
`GOPATH` using the following cloning procedure.
[go-workspace]: https://golang.org/doc/code.html#Workspaces
Define a local working directory:
```sh
# If your GOPATH has multiple paths, pick
# just one and use it instead of $GOPATH here.
# You must follow exactly this pattern,
# neither `$GOPATH/src/github.com/${your github profile name}/`
# nor any other pattern will work.
export working_dir="$(go env GOPATH)/src/github.com/karmada-io"
```
Set `user` to match your github profile name:
```sh
export user={your github profile name}
```
Both `$working_dir` and `$user` are mentioned in the figure above.
Create your clone:
```sh
mkdir -p $working_dir
cd $working_dir
git clone https://github.com/$user/karmada.git
# or: git clone git@github.com:$user/karmada.git
cd $working_dir/karmada
git remote add upstream https://github.com/karmada-io/karmada.git
# or: git remote add upstream git@github.com:karmada-io/karmada.git
# Never push to upstream master
git remote set-url --push upstream no_push
# Confirm that your remotes make sense:
git remote -v
```
### 3 Branch
Get your local master up to date:
```sh
# Depending on which repository you are working from,
# the default branch may be called 'main' instead of 'master'.
cd $working_dir/karmada
git fetch upstream
git checkout master
git rebase upstream/master
```
Branch from it:
```sh
git checkout -b myfeature
```
Then edit code on the `myfeature` branch.
### 4 Keep your branch in sync
```sh
# Depending on which repository you are working from,
# the default branch may be called 'main' instead of 'master'.
# While on your myfeature branch
git fetch upstream
git rebase upstream/master
```
Please don't use `git pull` instead of the above `fetch` / `rebase`. `git pull`
does a merge, which leaves merge commits. These make the commit history messy
and violate the principle that commits ought to be individually understandable
and useful (see below). You can also consider changing your `.git/config` file via
`git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`.
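For example, either of the following avoids merge commits on `git pull`:

```sh
# have git set up newly created branches to rebase when pulling
git config branch.autoSetupRebase always
# or rebase explicitly for a single pull
git pull --rebase upstream master
```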
### 5 Commit
Commit your changes.
```sh
git commit --signoff
```
Likely you go back and edit/build/test some more then `commit --amend`
in a few cycles.
### 6 Push
When ready to review (or just to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
git push -f ${your_remote_name} myfeature
```
### 7 Create a pull request
1. Visit your fork at `https://github.com/$user/karmada`
2. Click the `Compare & Pull Request` button next to your `myfeature` branch.
_If you have upstream write access_, please refrain from using the GitHub UI for
creating PRs, because GitHub will create the PR branch inside the main
repository rather than inside your fork.
#### Get a code review
Once your pull request has been opened it will be assigned to one or more
reviewers. Those reviewers will do a thorough code review, looking for
correctness, bugs, opportunities for improvement, documentation and comments,
and style.
Commit changes made in response to review comments to the same branch on your
fork.
Very small PRs are easy to review. Very large PRs are very difficult to review.
#### Squash commits
After a review, prepare your PR for merging by squashing your commits.
All commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process.
Before merging a PR, squash the following kinds of commits:
- Fixes/review feedback
- Typos
- Merges and rebases
- Work in progress
Aim to have every commit in a PR compile and pass tests independently if you can, but it's not a requirement. In particular, `merge` commits must be removed, as they will not pass tests.
To squash your commits, perform an [interactive
rebase](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History):
1. Check your git branch:
```
git status
```
Output is similar to:
```
On branch your-contribution
Your branch is up to date with 'origin/your-contribution'.
```
2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using `HEAD~<n>`, where `<n>` represents the number of commits to include in the rebase.
```
git rebase -i HEAD~3
```
Output is similar to:
```
pick 2ebe926 Original commit
pick 31f33e9 Address feedback
pick b0315fe Second unit of work
# Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
...
```
3. Use a command line text editor to change the word `pick` to `squash` for the commits you want to squash, then save your changes and continue the rebase:
```
pick 2ebe926 Original commit
squash 31f33e9 Address feedback
pick b0315fe Second unit of work
...
```
Output (after saving changes) is similar to:
```
[detached HEAD 61fdded] Second unit of work
Date: Thu Mar 5 19:01:32 2020 +0100
2 files changed, 15 insertions(+), 1 deletion(-)
...
Successfully rebased and updated refs/heads/master.
```
4. Force push your changes to your remote branch:
```
git push --force
```
For mass automated fixups (e.g. automated doc formatting), use one or more
commits for the changes to tooling and a final commit to apply the fixup en
masse. This makes reviews easier.
### Merging a commit
Once you've received review and approval and your commits are squashed, your PR is ready for merging.
Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven't squashed your commits, they may ask you to do so before approving a PR.
### Reverting a commit
In case you wish to revert a commit, use the following instructions.
_If you have upstream write access_, please refrain from using the
`Revert` button in the GitHub UI for creating the PR, because GitHub
will create the PR branch inside the main repository rather than inside your fork.
- Create a branch and sync it with upstream.
```sh
# Depending on which repository you are working from,
# the default branch may be called 'main' instead of 'master'.
# create a branch
git checkout -b myrevert
# sync the branch with upstream
git fetch upstream
git rebase upstream/master
```
- If the commit you wish to revert is a:<br>
- **merge commit:**
```sh
# SHA is the hash of the merge commit you wish to revert
git revert -m 1 SHA
```
- **single commit:**
```sh
# SHA is the hash of the single commit you wish to revert
git revert SHA
```
- This will create a new commit reverting the changes. Push this new commit to your remote.
```sh
git push ${your_remote_name} myrevert
```
- [Create a Pull Request](#7-create-a-pull-request) using this branch.

@ -1,148 +0,0 @@
# Descheduler
Users can divide the replicas of a workload among member clusters according to the clusters' available resources.
However, the scheduler's decisions are influenced by its view of Karmada at the point in time when a new `ResourceBinding`
appears for scheduling. As Karmada multi-clusters are very dynamic and their state changes over time, there may be a need
to move already-running replicas to other clusters because a cluster has run short of resources. This may happen when
some nodes of a cluster fail and the cluster no longer has enough resources to accommodate their pods, or when the estimators
have some estimation deviation, which is inevitable.
The karmada-descheduler detects all deployments periodically, every 2 minutes by default. In every period, it finds out
how many unschedulable replicas a deployment has in the target scheduled clusters by calling the karmada-scheduler-estimator. Then
it evicts them by decreasing the corresponding replicas in `spec.clusters`, which triggers karmada-scheduler to do a 'Scale Schedule' based on the current
situation. Note that it takes effect only when the replica scheduling strategy is dynamic division.
## Prerequisites
### Karmada has been installed
We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
### Member cluster component is ready
Ensure that all member clusters have joined Karmada and their corresponding karmada-scheduler-estimator is installed into karmada-host.
Check member clusters using the following command:
```bash
# check whether member clusters have joined
$ kubectl get cluster
NAME VERSION MODE READY AGE
member1 v1.19.1 Push True 11m
member2 v1.19.1 Push True 11m
member3 v1.19.1 Pull True 5m12s
# check whether the karmada-scheduler-estimator of a member cluster has been working well
$ kubectl --context karmada-host -n karmada-system get pod | grep estimator
karmada-scheduler-estimator-member1-696b54fd56-xt789 1/1 Running 0 77s
karmada-scheduler-estimator-member2-774fb84c5d-md4wt 1/1 Running 0 75s
karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9 1/1 Running 0 72s
```
- If a cluster has not joined, use `hack/deploy-agent-and-estimator.sh` to deploy both karmada-agent and karmada-scheduler-estimator.
- If the clusters have joined, use `hack/deploy-scheduler-estimator.sh` to only deploy karmada-scheduler-estimator.
### Scheduler option '--enable-scheduler-estimator'
After all member clusters have joined and estimators are all ready, specify the option `--enable-scheduler-estimator=true` to enable scheduler estimator.
```bash
# edit the deployment of karmada-scheduler
$ kubectl --context karmada-host -n karmada-system edit deployments.apps karmada-scheduler
```
Add the option `--enable-scheduler-estimator=true` into the command of container `karmada-scheduler`.
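The container command would then look roughly like this (a sketch; other flags and fields are omitted):

```yaml
# excerpt of the karmada-scheduler deployment after the edit
containers:
  - name: karmada-scheduler
    command:
      - /bin/karmada-scheduler             # existing entrypoint (path may differ)
      - --enable-scheduler-estimator=true  # newly added option
```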
### Descheduler has been installed
Ensure that the karmada-descheduler has been installed onto karmada-host.
```bash
$ kubectl --context karmada-host -n karmada-system get pod | grep karmada-descheduler
karmada-descheduler-658648d5b-c22qf 1/1 Running 0 80s
```
## Example
Let's simulate a replica scheduling failure in a member cluster due to lack of resources.
First we create a deployment with 3 replicas and divide them into 3 member clusters.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
replicaScheduling:
replicaDivisionPreference: Weighted
replicaSchedulingType: Divided
weightPreference:
dynamicWeight: AvailableReplicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
cpu: "2"
```
It is possible for these 3 replicas to be evenly divided into 3 member clusters, that is, one replica in each cluster.
Now we make all nodes in member1 unschedulable and evict the replica.
```bash
# mark node "member1-control-plane" as unschedulable in cluster member1
$ kubectl --context member1 cordon member1-control-plane
# delete the pod in cluster member1
$ kubectl --context member1 delete pod -l app=nginx
```
A new pod will be created and cannot be scheduled by `kube-scheduler` due to lack of resources.
```bash
# the state of pod in cluster member1 is pending
$ kubectl --context member1 get pod
NAME READY STATUS RESTARTS AGE
nginx-68b895fcbd-fccg4 0/1 Pending 0 80s
```
After about 5 to 7 minutes, the pod in member1 will be evicted and scheduled to other available clusters.
```bash
# get the pod in cluster member1
$ kubectl --context member1 get pod
No resources found in default namespace.
# get a list of pods in cluster member2
$ kubectl --context member2 get pod
NAME READY STATUS RESTARTS AGE
nginx-68b895fcbd-dgd4x 1/1 Running 0 6m3s
nginx-68b895fcbd-nwgjn 1/1 Running 0 4s
```

@ -1,40 +0,0 @@
# FAQ(Frequently Asked Questions)
## What is the difference between PropagationPolicy and ClusterPropagationPolicy?
The `PropagationPolicy` is a namespace-scoped resource type, which means objects of this type must reside in a namespace.
The `ClusterPropagationPolicy` is a cluster-scoped resource type, which means objects of this type don't have a namespace.
Both of them are used to hold the propagation declaration, but they have different capabilities:
- PropagationPolicy: can only represent the propagation policy for the resources in the same namespace.
- ClusterPropagationPolicy: can represent the propagation policy for all resources including namespace-scoped and cluster-scoped resources.
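To make the difference above concrete, here is a minimal sketch (the resource names are hypothetical and the propagation details are omitted):

```yaml
# namespace-scoped: lives in a namespace and can only select resources
# from that same namespace
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-policy
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
---
# cluster-scoped: has no namespace and can also select cluster-scoped
# resources such as a Namespace
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: namespace-policy
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: Namespace
      name: test
```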
## What is the difference between 'Push' and 'Pull' mode of a cluster?
Please refer to [Overview of Push and Pull](./userguide/cluster-registration.md#overview-of-cluster-mode).
## Why Karmada requires `kube-controller-manager`?
`kube-controller-manager` is composed of a bunch of controllers, Karmada inherits some controllers from it
to keep a consistent user experience and behavior.
It's worth noting that not all controllers are needed by Karmada, for the recommended controllers please
refer to [Recommended Controllers](./userguide/configure-controllers.md#recommended-controllers).
## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?
The quick answer is `yes`. In that case, you can save the effort to deploy
[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
can be inherited seamlessly. We do have some users using Karmada in this way.
There are some things you should consider before doing so:
- This approach hasn't been fully tested by the Karmada community, and there is no plan for that yet.
- This approach will increase the computation costs of the Karmada system. For example, after you apply a `resource template`, say a `Deployment`, `kube-controller-manager` will create `Pods` for it and keep updating its status; the Karmada system will reconcile these changes too, so there might be conflicts.
TODO: Link to adoption use case once it gets on board.

(The remaining file changes delete binary image files and draw.io diagram sources; their contents are not shown.)

Binary file not shown.

Before

Width:  |  Height:  |  Size: 76 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 23 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 48 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

Before

Width:  |  Height:  |  Size: 47 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 44 KiB

View File

@ -1 +0,0 @@
<mxfile host="Electron" modified="2020-12-11T11:45:24.481Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/12.9.13 Chrome/80.0.3987.163 Electron/8.2.1 Safari/537.36" etag="lXAAPX5WP69rXVahZDO2" version="12.9.13" type="device"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7VxZd5s6EP41fnQONovtxzhr971pn3oUkLFuBKJCduz++kogmUXEwQk2OLfpOQ0atIDmm08zI5GeeRasriiI5u+IB3FvaHirnnneGw4Hxtjhv4RkLSUT20olPkWelGWCL+gPVE2ldIE8GBcqMkIwQ1FR6JIwhC4ryACl5L5YbUZwcdQI+FATfHEB1qU3yGPzVDoejjL5NUT+XI08cCbpnQCoyvJN4jnwyH1OZF70zDNKCEuvgtUZxGL21LzcvFrf4Ld3ztXrT/Fv8G365uv77/20s8tdmmxegcKQPbnraThw/qynF6+Xljn7Ou1/Ort+0x/IvmO2VhMGPT5/skgomxOfhABfZNIpJYvQg6Jbg5eyOm8JibhwwIX/QcbWEgxgwQgXzVmA5d2a76OejSyoC7e9hIQVoD5kW+pJxYoXzGFDztYVJAFkdM0rUIgBQ8sigIDEob+pJ5ueUgrWuQoRQSGLcz1/FAJeQdrUBk/SovqTcUlxpQbmeHsDfpE+gyrlXiYTJWjYBRnpmEuAF3IiTj2PCy6WQmll0GSQEPq9nyMGv0Qg0do9J5ai+jG4hXgK3Ds/aXZGMKH8VkhCga4ZwliJekNzZot/XB4zSu5g7o6T/IgWJGQ5efqzM8yWkDK42goMddcpKWRgWFJyn1HMQOl5nqMXVa8KTTl9PkFdhqaTAxgyn1i6/iHan9iq+FN2lxTOV4XSOl/6CCniLw+pFDbMCuZRsYJtHQUrmBornEPMddgMMXTf9K3yrHfA9K1WLH+FWM7weeln7k5m9qKwzhX2bPROTaMfNG30z9KgfVTcfdyaNzuleUfj01fhjNCAz1IZEikJqhhmUFQmj00iUS9Y+SKOO5lhcu/OAWUnfMIDFAImKLKSf/dFlWZ5RRtsaDFHleNDMuVIm+02DK9h4I/b8nOepYqxpopvkQeaciSOPcIwyzFiB9yMge77vQDrmdR1GIadMp+JposbQu8+LSAvNrRweIhCl/3iRgkOvXKY5dDGqQC/cVDw2y8R/ArUj6Pf6hT61XPn3Sbu56TLB78Knrd6HIzVx+O2Sd08qrRR0/C36sLf6RT8TT03+75SjW+FG1SceoCRH/Jrl0+hCMWmAr7IBfhU3giQ56VahjH6A26T/sTkywwW79ye9uzzSnVsxZhmKJtNHzlKL7+vUmVAfeNkVLAgyQO7peayVJqqQmazGDJNL03ky1pZJo7Ovkbdsi/d0/3ZZftSq2Ej9mWO7K4blcJVOQuNQr9nXmqaeqr/C10UIxIe2vnVnASjbSdhoGeprkEoUDw0PkOOMReJawfzB5neUn7li6uPlLgwjp/ni3U/Vtd2A0cVaa7DKszW/eM2lqGml5NRzeXE6li0oqcdN+az2T37nxiLtn/WvrGYx5gVfnybw27aBmqeZnHGJUWldi2b7cHD1jPJ3Vff44cX7Lp67lhoqufJMAFe9ibiQeTbdzJHo63mTuuJd+dFruZm3Q1be9wthOvOMA/sCIVpuDXDyGXPXMYPhm27Kq9+2LMrL3JTyay7q2QNu4VtfVvpdoGwoO8pCj0RXx8HsK3WY2bVcQHYaYTsoaUKka/JvYAJ4f+VJ1okMtL6fPxcEyVd4LIEIyVBs7zGDIApBJ6YL7hCMV+D+SsZC7XdnlR25yD0oXeSGzPrTh/gVXGAkLBi5y4fMO2c7dTlZxiQpWhHaMSfKD/Ilk64UJ+N3JSdJurKHpfTNAMo5CxtJAcFegIZzu+FOMg/vQM0AB44QVwpl+l7eP1bDqzTtBoIBKiTISLCyV5MawgCGCeoH16K60019USy74c0WrIpjnpWylI9HMA9EPKpPCSGM7YtC1llsUU6PfjZh1HJmM2KENGyKox5Y+HNW7PugqUbjiI/yKN4ca6krMV9ndfSMsx7Y9U6itjbNvxk3Ee/rt6t3A8/vr/BEfU/eK/7VaTalrdQWrMbdB+suvsmzqhNb8HS0/CeSmeBIuV20WnQUlHtR3p2nVRU6J2Kz+IEA2AQx8jtVR7MVtfJvt/mqO4Dx3M3+4WF3cJs8/CB/UIPxPONGlvzoY2aPnROrXaFVpXsmQkx7fDScFIvIfb4Fx5mGXjp3Owttea8yODMrntgViGwI8GZrSc6OQGgoHd0KTWr9WNPth7obo/OtJl+LDzLhx/Jd5Mg5FGSchgfCDc8ylEU/os22ow2qlIHldHG/vK9+vGunb2AwsrcoEvwiEPQFlXbdam6JR/AtJ/qA5S50zQO7APoTn4FxV3yiBXz9ZsmxCXpKLyNoxwNccoKcyxZxX8RJRHwE5JM2a4vUjWUYCy+h6rgtVKf/+huZ7obVKzFlR9l7S234uhBz8vPrViTkh4qt2Iayq3wYvZXSlJiyP7Yi3nxFw==</diagram></mxfile>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 57 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 309 KiB

View File

@ -1,69 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Installing Karmada on Cluster from Source](#installing-karmada-on-cluster-from-source)
- [Select a way to expose karmada-apiserver](#select-a-way-to-expose-karmada-apiserver)
  - [1. expose by `HostNetwork` type](#1-expose-by-hostnetwork-type)
  - [2. expose by service with `LoadBalancer` type](#2-expose-by-service-with-loadbalancer-type)
- [Install](#install)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Installing Karmada on Cluster from Source
This document describes how you can use the `hack/remote-up-karmada.sh` script to install Karmada on
your clusters based on the codebase.
## Select a way to expose karmada-apiserver
The `hack/remote-up-karmada.sh` will install `karmada-apiserver` and provide two ways to expose the server:
### 1. expose by `HostNetwork` type
By default, the `hack/remote-up-karmada.sh` will expose `karmada-apiserver` by `HostNetwork`.
No extra operations are needed with this type.
### 2. expose by service with `LoadBalancer` type
If you don't want to use `HostNetwork`, you can ask `hack/remote-up-karmada.sh` to expose `karmada-apiserver`
by a service of type `LoadBalancer`, which *requires that your cluster has a load balancer deployed*.
All you need to do is set an environment variable:
```bash
export LOAD_BALANCER=true
```
## Install
From the `root` directory of the `karmada` repo, install Karmada with the following command:
```bash
hack/remote-up-karmada.sh <kubeconfig> <context_name>
```
- `kubeconfig` is the kubeconfig of the cluster you want to install Karmada to
- `context_name` is the name of the context in `kubeconfig`
For example:
```bash
hack/remote-up-karmada.sh $HOME/.kube/config mycluster
```
If everything goes well, at the end of the script output, you will see similar messages as follows:
```
------------------------------------------------------------------------------------------------------
█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.
Kubeconfig for karmada in file: /root/.kube/karmada.config, so you can run:
export KUBECONFIG="/root/.kube/karmada.config"
Or use kubectl with --kubeconfig=/root/.kube/karmada.config
Please use 'kubectl config use-context karmada-apiserver' to switch the cluster of karmada control plane
And use 'kubectl config use-context your-host' for debugging karmada installation
```

File diff suppressed because it is too large.

View File

@ -1,51 +0,0 @@
# kubectl-karmada Installation
You can install the `kubectl-karmada` plug-in in any of the following ways:
- Download from the release.
- Install using Krew.
- Build from source code.
## Prerequisites
### kubectl
`kubectl` is the Kubernetes command-line tool that lets you control Kubernetes clusters.
For installation instructions see [installing kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
## Download from the release
Karmada has provided `kubectl-karmada` plug-in downloads since v0.9.0. Choose the plug-in version that fits your operating system from the [karmada releases](https://github.com/karmada-io/karmada/releases).
Take v0.9.0 for linux-amd64 as an example:
```bash
wget https://github.com/karmada-io/karmada/releases/download/v0.9.0/kubectl-karmada-linux-amd64.tar.gz
tar -zxf kubectl-karmada-linux-amd64.tar.gz
```
Next, move the `kubectl-karmada` executable to a directory on your `PATH`; see [Installing kubectl plugins](https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#installing-kubectl-plugins).
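For example, on Linux that could look like the following (the target directory `/usr/local/bin` is only an assumption; any directory on your `PATH` works):
```bash
# make the plugin executable and place it on the PATH
chmod +x kubectl-karmada
sudo mv kubectl-karmada /usr/local/bin/kubectl-karmada

# kubectl discovers plugins by name; this should now list kubectl-karmada
kubectl plugin list
```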
## Install using Krew
Krew is the plugin manager for the `kubectl` command-line tool.
[Install and set up](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew on your machine.
Then install `kubectl-karmada` plug-in:
```bash
kubectl krew install karmada
```
You can refer to [Quickstart of Krew](https://krew.sigs.k8s.io/docs/user-guide/quickstart/) for more information.
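To confirm the installation, you can list the plugins that Krew manages (a quick optional check):
```bash
kubectl krew list | grep -i karmada
```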
## Build from source code
Clone the karmada repo and run `make` from the repository root:
```bash
make kubectl-karmada
```
Next, move the `kubectl-karmada` executable, built into the `_output` folder in the project root, to a directory on your `PATH`.
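A sketch of that last step (the exact path under `_output` depends on your OS and architecture, so treat it as an assumption):
```bash
# adjust linux/amd64 to match your platform
sudo install -m 0755 _output/bin/linux/amd64/kubectl-karmada /usr/local/bin/kubectl-karmada
```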

View File

@ -1,239 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Installing Karmada](#installing-karmada)
- [Prerequisites](#prerequisites)
- [Karmada kubectl plugin](#karmada-kubectl-plugin)
- [Install Karmada by Karmada command-line tool](#install-karmada-by-karmada-command-line-tool)
- [Install Karmada on your own cluster](#install-karmada-on-your-own-cluster)
- [Offline installation](#offline-installation)
- [Deploy HA](#deploy-ha)
- [Install Karmada in Kind cluster](#install-karmada-in-kind-cluster)
- [Install Karmada by Helm Chart Deployment](#install-karmada-by-helm-chart-deployment)
- [Install Karmada by binary](#install-karmada-by-binary)
- [Install Karmada from source](#install-karmada-from-source)
- [Install Karmada for development environment](#install-karmada-for-development-environment)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Installing Karmada
## Prerequisites
### Karmada kubectl plugin
`kubectl-karmada` is the Karmada command-line tool that lets you control the Karmada control plane. It is delivered as
a [kubectl plugin][1].
For installation instructions see [installing kubectl-karmada](./install-kubectl-karmada.md).
## Install Karmada by Karmada command-line tool
### Install Karmada on your own cluster
This assumes you have placed your cluster's `kubeconfig` file at `$HOME/.kube/config` or specified its path
with the `KUBECONFIG` environment variable. Otherwise, pass the configuration file to the following commands
with the `--kubeconfig` flag.
> Note: The `init` command is available from v1.0.
Run the following command to install:
```bash
kubectl karmada init
```
It might take about 5 minutes, and if everything goes well, you will see output similar to:
```
I1216 07:37:45.862959 4256 cert.go:230] Generate ca certificate success.
I1216 07:37:46.000798 4256 cert.go:230] Generate etcd-server certificate success.
...
...
------------------------------------------------------------------------------------------------------
█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.
Register Kubernetes cluster to Karmada control plane.
Register cluster with 'Push' mode
Step 1: Use karmadactl join to register the cluster to Karmada control panel. --cluster-kubeconfig is members kubeconfig.
(In karmada)~# MEMBER_CLUSTER_NAME=`cat ~/.kube/config | grep current-context | sed 's/: /\n/g'| sed '1d'`
(In karmada)~# karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config
Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
Register cluster with 'Pull' mode
Step 1: Send karmada kubeconfig and karmada-agent.yaml to member kubernetes
(In karmada)~# scp /etc/karmada/karmada-apiserver.config /etc/karmada/karmada-agent.yaml {member kubernetes}:~
Step 2: Create karmada kubeconfig secret
Notice:
Cross-network, need to change the config server address.
(In member kubernetes)~# kubectl create ns karmada-system
(In member kubernetes)~# kubectl create secret generic karmada-kubeconfig --from-file=karmada-kubeconfig=/root/karmada-apiserver.config -n karmada-system
Step 3: Create karmada agent
(In member kubernetes)~# MEMBER_CLUSTER_NAME="demo"
(In member kubernetes)~# sed -i "s/{member_cluster_name}/${MEMBER_CLUSTER_NAME}/g" karmada-agent.yaml
(In member kubernetes)~# kubectl apply -f karmada-agent.yaml
Step 4: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
```
The Karmada components are installed in the `karmada-system` namespace by default; you can list them with:
```bash
kubectl get deployments -n karmada-system
NAME READY UP-TO-DATE AVAILABLE AGE
karmada-aggregated-apiserver 1/1 1 1 102s
karmada-apiserver 1/1 1 1 2m34s
karmada-controller-manager 1/1 1 1 116s
karmada-scheduler 1/1 1 1 119s
karmada-webhook 1/1 1 1 113s
kube-controller-manager 1/1 1 1 2m3s
```
And `karmada-etcd` is installed as a `StatefulSet`; get it with:
```bash
kubectl get statefulsets -n karmada-system
NAME READY AGE
etcd 1/1 28m
```
The Karmada configuration file is created at `/etc/karmada/karmada-apiserver.config` by default.
#### Offline installation
When installing Karmada, `kubectl karmada init` downloads the APIs (CRDs) from the Karmada official release page
(e.g. `https://github.com/karmada-io/karmada/releases/tag/v0.10.1`) and pulls images from the official registry by default.
If you want to install Karmada offline, you need to specify the CRD tar file as well as the images.
Use `--crds` flag to specify the CRD file. e.g.
```bash
kubectl karmada init --crds /$HOME/crds.tar.gz
```
The images of the Karmada components can also be specified; take `karmada-controller-manager` as an example:
```bash
kubectl karmada init --karmada-controller-manager-image=example.registry.com/library/karmada-controller-manager:1.0
```
#### Deploy HA
Use the `--karmada-apiserver-replicas` and `--etcd-replicas` flags to specify the number of replicas (defaults to `1`).
```bash
kubectl karmada init --karmada-apiserver-replicas 3 --etcd-replicas 3
```
### Install Karmada in Kind cluster
> kind is a tool for running local Kubernetes clusters using Docker container "nodes".
> It was primarily designed for testing Kubernetes itself, not for production.
Create a cluster named `host` by `hack/create-cluster.sh`:
```bash
hack/create-cluster.sh host $HOME/.kube/host.config
```
Install Karmada v1.2.0 with the `kubectl karmada init` command:
```bash
kubectl karmada init --crds https://github.com/karmada-io/karmada/releases/download/v1.2.0/crds.tar.gz --kubeconfig=$HOME/.kube/host.config
```
Check installed components:
```bash
kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m55s
karmada-aggregated-apiserver-84b45bf9b-n5gnk 1/1 Running 0 109s
karmada-apiserver-6dc4cf6964-cz4jh 1/1 Running 0 2m40s
karmada-controller-manager-556cf896bc-79sxz 1/1 Running 0 2m3s
karmada-scheduler-7b9d8b5764-6n48j 1/1 Running 0 2m6s
karmada-webhook-7cf7986866-m75jw 1/1 Running 0 2m
kube-controller-manager-85c789dcfc-k89f8 1/1 Running 0 2m10s
```
## Install Karmada by Helm Chart Deployment
Please refer to [installing by Helm](https://github.com/karmada-io/karmada/tree/master/charts/karmada).
## Install Karmada by binary
Please refer to [installing by binary](https://github.com/karmada-io/karmada/blob/master/docs/installation/binary-install.md).
## Install Karmada from source
Please refer to [installing from source](./fromsource.md).
[1]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
## Install Karmada for development environment
If you want to try Karmada, we recommend building a development environment with
`hack/local-up-karmada.sh`, which will do the following tasks for you:
- Start a Kubernetes cluster by [kind](https://kind.sigs.k8s.io/) to run the Karmada control plane, aka. the `host cluster`.
- Build Karmada control plane components based on a current codebase.
- Deploy Karmada control plane components on the `host cluster`.
- Create member clusters and join Karmada.
**1. Clone Karmada repo to your machine:**
```
git clone https://github.com/karmada-io/karmada
```
or use your fork repo by replacing your `GitHub ID`:
```
git clone https://github.com/<GitHub ID>/karmada
```
**2. Change to the karmada directory:**
```
cd karmada
```
**3. Deploy and run Karmada control plane:**
run the following script:
```
hack/local-up-karmada.sh
```
This script performs the tasks described at the beginning of this section.
If everything goes well, at the end of the script output, you will see similar messages as follows:
```
Local Karmada is running.
To start using your Karmada environment, run:
export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
To manage your member clusters, run:
export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
```
**4. Check registered cluster**
```
kubectl get clusters --kubeconfig=/$HOME/.kube/karmada.config
```
You will get similar output as follows:
```
NAME VERSION MODE READY AGE
member1 v1.23.4 Push True 7m38s
member2 v1.23.4 Push True 7m35s
member3 v1.23.4 Pull True 7m27s
```
Three clusters named `member1`, `member2`, and `member3` have registered, using `Push` or `Pull` mode.

View File

@ -1,289 +0,0 @@
# Multi-cluster Ingress
Users can use [MultiClusterIngress API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/networking/v1alpha1/ingress_types.go) provided in Karmada to import external traffic to services in the member clusters.
## Prerequisites
### Karmada has been installed
We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
### Cluster Network
Currently, we need to use the [Multi-cluster Service](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed) feature to import external traffic.
So we need to ensure that the container networks between the **host cluster** and member clusters are connected. The **host cluster** indicates the cluster where the **Karmada Control Plane** is deployed.
- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks between the **host cluster**, `member1` and `member2` are connected.
- You can use `Submariner` or other related open source projects to connect networks between clusters.
## Example
### Step 1: Deploy ingress-nginx on the host cluster
We use [multi-cluster-ingress-nginx](https://github.com/karmada-io/multi-cluster-ingress-nginx) for the demonstration. We've made some changes based on the latest version (controller-v1.1.1) of [ingress-nginx](https://github.com/kubernetes/ingress-nginx).
#### Download code
```shell
# for HTTPS
git clone https://github.com/karmada-io/multi-cluster-ingress-nginx.git
# for SSH
git clone git@github.com:karmada-io/multi-cluster-ingress-nginx.git
```
#### Build and deploy ingress-nginx
Use the existing `karmada-host` kind cluster to build and deploy the ingress controller.
```shell
export KUBECONFIG=~/.kube/karmada.config
export KIND_CLUSTER_NAME=karmada-host
kubectl config use-context karmada-host
cd multi-cluster-ingress-nginx
make dev-env
```
#### Apply kubeconfig secret
Create a secret that contains the `karmada-apiserver` authentication credential:
```shell
# get the 'karmada-apiserver' kubeconfig information and direct it to file /tmp/kubeconfig.yaml
kubectl -n karmada-system get secret kubeconfig --template={{.data.kubeconfig}} | base64 -d > /tmp/kubeconfig.yaml
# create secret with name 'kubeconfig' from file /tmp/kubeconfig.yaml
kubectl -n ingress-nginx create secret generic kubeconfig --from-file=kubeconfig=/tmp/kubeconfig.yaml
```
#### Edit ingress-nginx-controller deployment
We want `nginx-ingress-controller` to access `karmada-apiserver` to listen for changes in resources (such as MultiClusterIngress, EndpointSlice, and Service). Therefore, we need to mount the authentication credential of `karmada-apiserver` into the `nginx-ingress-controller`.
```shell
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
```
Edit as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
spec:
containers:
- args:
- /nginx-ingress-controller
- --karmada-kubeconfig=/etc/kubeconfig # new line
...
volumeMounts:
...
- mountPath: /etc/kubeconfig # new line
name: kubeconfig # new line
subPath: kubeconfig # new line
volumes:
...
- name: kubeconfig # new line
secret: # new line
secretName: kubeconfig # new line
```
### Step 2: Use the MCS feature to discover the service
#### Install ServiceExport and ServiceImport CRDs
Refer to [here](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed).
#### Deploy web on member1 cluster
deploy.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: hello-app
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: web
spec:
ports:
- port: 81
targetPort: 8080
selector:
app: web
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: mcs-workload
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: web
- apiVersion: v1
kind: Service
name: web
placement:
clusterAffinity:
clusterNames:
- member1
```
</details>
```shell
kubectl --context karmada-apiserver apply -f deploy.yaml
```
#### Export web service from member1 cluster
service_export.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
name: web
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: web-export-policy
spec:
resourceSelectors:
- apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
name: web
placement:
clusterAffinity:
clusterNames:
- member1
```
</details>
```shell
kubectl --context karmada-apiserver apply -f service_export.yaml
```
#### Import web service to member2 cluster
service_import.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
name: web
spec:
type: ClusterSetIP
ports:
- port: 81
protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: web-import-policy
spec:
resourceSelectors:
- apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
name: web
placement:
clusterAffinity:
clusterNames:
- member2
```
</details>
```shell
kubectl --context karmada-apiserver apply -f service_import.yaml
```
### Step 3: Deploy MultiClusterIngress on the Karmada control plane
mci-web.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: networking.karmada.io/v1alpha1
kind: MultiClusterIngress
metadata:
name: demo-localhost
namespace: default
spec:
ingressClassName: nginx
rules:
- host: demo.localdev.me
http:
paths:
- backend:
service:
name: web
port:
number: 81
path: /web
pathType: Prefix
```
</details>
```shell
kubectl --context karmada-apiserver apply -f mci-web.yaml
```
### Step 4: Local testing
Let's forward a local port to the ingress controller:
```shell
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
```
At this point, if you access http://demo.localdev.me:8080/web/, you should see an HTML page telling you:
```html
Hello, world!
Version: 1.0.0
Hostname: web-xxx-xxx
```
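You can also check from the command line; this assumes `demo.localdev.me` resolves to `127.0.0.1` (the usual behavior for `*.localdev.me`) and that the port-forward above is still running:
```shell
curl http://demo.localdev.me:8080/web/
```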

View File

@ -1,202 +0,0 @@
# Multi-cluster Service Discovery
Users are able to **export** and **import** services between clusters with [Multi-cluster Service APIs](https://github.com/kubernetes-sigs/mcs-api).
## Prerequisites
### Karmada has been installed
We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
### Member Cluster Network
Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of `member1` and `member2` will be connected.
- You can use `Submariner` or other related open source projects to connect networks between member clusters.
### The ServiceExport and ServiceImport CRDs have been installed
We need to install the ServiceExport and ServiceImport CRDs in the member clusters.
After ServiceExport and ServiceImport have been installed on the **Karmada Control Plane**, we can create a `ClusterPropagationPolicy` to propagate those two CRDs to the member clusters.
```yaml
# propagate ServiceExport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
name: serviceexport-policy
spec:
resourceSelectors:
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: serviceexports.multicluster.x-k8s.io
placement:
clusterAffinity:
clusterNames:
- member1
- member2
---
# propagate ServiceImport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
name: serviceimport-policy
spec:
resourceSelectors:
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: serviceimports.multicluster.x-k8s.io
placement:
clusterAffinity:
clusterNames:
- member1
- member2
```
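If the two policies above are saved to a file, say `mcs-crds-propagation.yaml` (the file name, kubeconfig path, and context below are just the defaults produced by `hack/local-up-karmada.sh`), they can be applied to the Karmada control plane with:
```shell
kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver apply -f mcs-crds-propagation.yaml
```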
## Example
### Step 1: Deploy service on the `member1` cluster
We need to deploy a service on the `member1` cluster for discovery.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: serve
spec:
replicas: 1
selector:
matchLabels:
app: serve
template:
metadata:
labels:
app: serve
spec:
containers:
- name: serve
image: jeremyot/serve:0a40de8
args:
- "--message='hello from cluster member1 (Node: {{env \"NODE_NAME\"}} Pod: {{env \"POD_NAME\"}} Address: {{addr}})'"
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
name: serve
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: mcs-workload
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: serve
- apiVersion: v1
kind: Service
name: serve
placement:
clusterAffinity:
clusterNames:
- member1
```
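This manifest is applied to the Karmada control plane; for example, assuming it is saved as `serve.yaml` and you are using the kubeconfig and context created by `hack/local-up-karmada.sh`:
```shell
kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver apply -f serve.yaml
```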
### Step 2: Export service to the `member2` cluster
- Create a `ServiceExport` object on **Karmada Control Plane**, and then create a `PropagationPolicy` to propagate the `ServiceExport` object to the `member1` cluster.
```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
name: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: serve-export-policy
spec:
resourceSelectors:
- apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
name: serve
placement:
clusterAffinity:
clusterNames:
- member1
```
- Create a `ServiceImport` object on **Karmada Control Plane**, and then create a `PropagationPolicy` to propagate the `ServiceImport` object to the `member2` cluster.
```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
name: serve
spec:
type: ClusterSetIP
ports:
- port: 80
protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: serve-import-policy
spec:
resourceSelectors:
- apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
name: serve
placement:
clusterAffinity:
clusterNames:
- member2
```
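Both manifests above are applied to the Karmada control plane as well, for example (file names are illustrative):
```shell
kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver apply -f serve-export.yaml
kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver apply -f serve-import.yaml
```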
### Step 3: Access the service from `member2` cluster
After the above steps, we can find the **derived service**, which has the `derived-` prefix, on the `member2` cluster. We can then access the **derived service** to reach the service on the `member1` cluster.
```shell
# get the services in cluster member2, and we can find the service with the name 'derived-serve'
$ kubectl --kubeconfig ~/.kube/members.config --context member2 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
derived-serve ClusterIP 10.13.205.2 <none> 80/TCP 81s
kubernetes ClusterIP 10.13.0.1 <none> 443/TCP 15m
```
Start a pod `request` on the `member2` cluster to access the ClusterIP of the **derived service**:
```shell
$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration={duration-time} --address={ClusterIP of derived service}
```
For example, to access the service continuously for 3s, with ClusterIP `10.13.205.2`:
```shell
# access the service of derived-serve, and the pod in member1 cluster returns a response
$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration=3s --address=10.13.205.2
If you don't see a command prompt, try pressing enter.
2022/07/24 15:13:08 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)'
2022/07/24 15:13:09 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)'
pod "request" deleted
```


View File

@ -1,29 +0,0 @@
# Karmada Object association mapping
## Review
![](./images/object-association-map.png)
This picture was made with draw.io. If you need to update the **Review**, use the file [object-association-map.drawio](./object-association-map.drawio).
## Label/Annotation information table
> Note:
> These labels and annotations are managed by Karmada. Please do not modify them.
| Object | Tag | KeyName | Usage |
| ---------------------- | ---------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| ResourceTemplate | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | The labels can be used to determine whether the current resource template is claimed by PropagationPolicy. |
| | label | clusterpropagationpolicy.karmada.io/name | The label can be used to determine whether the current resource template is claimed by ClusterPropagationPolicy. |
| ResourceBinding | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | Through those two labels, logic can find the associated ResourceBinding from the PropagationPolicy or trace it back from the ResourceBinding to the corresponding PropagationPolicy. |
| | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ResourceBinding from the ClusterPropagationPolicy or trace it back from the ResourceBinding to the corresponding ClusterPropagationPolicy. |
| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. |
| ClusterResourceBinding | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ClusterResourceBinding from the ClusterPropagationPolicy or trace it back from the ClusterResourceBinding to the corresponding ClusterPropagationPolicy. |
| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. |
| Work | label | resourcebinding.karmada.io/namespace resourcebinding.karmada.io/name | Through those two labels, logic can find the associated WorkList from the ResourceBinding or trace it back from the Work to the corresponding ResourceBinding. |
| | label | clusterresourcebinding.karmada.io/name | Through the label, logic can find the associated WorkList from the ClusterResourceBinding or trace it back from the Work to the corresponding ClusterResourceBinding. |
| | label | propagation.karmada.io/instruction | Valid values include: `suppressed`, which indicates that the resource should not be propagated. |
| | label | endpointslice.karmada.io/namespace endpointslice.karmada.io/name | Those labels are added to work object, which is report by member cluster, to specify service associated with EndpointSlice. |
| | annotation | policy.karmada.io/applied-overrides | Record override items, the overrides items should be sorted alphabetically in ascending order by OverridePolicy's name. |
| | annotation | policy.karmada.io/applied-cluster-overrides | Record override items, the overrides items should be sorted alphabetically in ascending order by ClusterOverridePolicy's name. |
| Workload | label | work.karmada.io/namespace work.karmada.io/name | The labels can be used to determine whether the current workload is managed by karmada. Through those labels, logic can find the associated Work or trace it back from the Work to the corresponding Workload. |
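As an illustration of how these labels can be used in practice, the following sketch traces a hypothetical `PropagationPolicy` named `nginx-propagation` in the `default` namespace to its `ResourceBinding`, and inspects a propagated workload on a member cluster (the policy, workload, and namespace names are assumptions):
```bash
# on the Karmada control plane: find the ResourceBindings claimed by the policy
kubectl get resourcebindings -n default \
  -l propagationpolicy.karmada.io/namespace=default,propagationpolicy.karmada.io/name=nginx-propagation

# on a member cluster: the work.karmada.io/* labels show which Work object manages the workload
kubectl get deployment nginx -n default --show-labels
```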

View File

@ -1,10 +0,0 @@
# Reserved Namespaces
> Note: Avoid creating namespaces with the prefixes `kube-` and `karmada-`, since they are reserved for Kubernetes
> and Karmada system namespaces.
> For now, resources under the following namespaces will not be propagated:
- namespaces with the prefix `kube-` (including but not limited to `kube-system`, `kube-public`, `kube-node-lease`)
- karmada-system
- karmada-cluster
- karmada-es-*

View File

@ -1,151 +0,0 @@
# Cluster Accurate Scheduler Estimator
Users can divide the replicas of a workload among different clusters according to the available resources of the member clusters. When some clusters lack resources, the scheduler, by calling the karmada-scheduler-estimator, will not assign excessive replicas to those clusters.
## Prerequisites
### Karmada has been installed
We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
### Member cluster component is ready
Ensure that all member clusters have been joined and that their corresponding karmada-scheduler-estimators are installed in karmada-host.
You can check with the following commands:
```bash
# check whether the member cluster has been joined
$ kubectl get cluster
NAME VERSION MODE READY AGE
member1 v1.19.1 Push True 11m
member2 v1.19.1 Push True 11m
member3 v1.19.1 Pull True 5m12s
# check whether the karmada-scheduler-estimator of a member cluster has been working well
$ kubectl --context karmada-host get pod -n karmada-system | grep estimator
karmada-scheduler-estimator-member1-696b54fd56-xt789 1/1 Running 0 77s
karmada-scheduler-estimator-member2-774fb84c5d-md4wt 1/1 Running 0 75s
karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9 1/1 Running 0 72s
```
- If the cluster has not been joined, you can use `hack/deploy-agent-and-estimator.sh` to deploy both the karmada-agent and the karmada-scheduler-estimator.
- If the cluster has already been joined, you can use `hack/deploy-scheduler-estimator.sh` to deploy only the karmada-scheduler-estimator.
### Scheduler option '--enable-scheduler-estimator'
After all member clusters have been joined and all estimators are ready, specify the option `--enable-scheduler-estimator=true` to enable the scheduler estimator.
```bash
# edit the deployment of karmada-scheduler
$ kubectl --context karmada-host edit -n karmada-system deployments.apps karmada-scheduler
```
Then add the option `--enable-scheduler-estimator=true` to the command of the `karmada-scheduler` container.
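If you prefer a non-interactive edit, a JSON-patch sketch that appends the flag is shown below; it assumes `karmada-scheduler` is the first container in the pod template:
```bash
kubectl --context karmada-host -n karmada-system patch deployment karmada-scheduler --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--enable-scheduler-estimator=true"}]'
```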
## Example
Now we can divide the replicas among the member clusters. Note that `propagationPolicy.spec.replicaScheduling.replicaSchedulingType` must be `Divided` and `propagationPolicy.spec.replicaScheduling.replicaDivisionPreference` must be `Aggregated`. The scheduler will try to divide the replicas aggregately based on all available resources of the member clusters.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: aggregated-policy
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
replicaScheduling:
replicaSchedulingType: Divided
replicaDivisionPreference: Aggregated
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
name: web-1
resources:
requests:
cpu: "1"
memory: 2Gi
```
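Both manifests are applied against the Karmada control plane, e.g. with your current context pointing at `karmada-apiserver` (the file names are illustrative):
```bash
kubectl apply -f aggregated-policy.yaml -f nginx-deployment.yaml
```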
You will find all replicas have been assigned to as few clusters as possible.
```
$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 5/5 5 5 2m16s
$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters
NAME CLUSTER
nginx-deployment [map[name:member1 replicas:5] map[name:member2] map[name:member3]]
```
After that, we change the resource request of the deployment to a large value and try again.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
name: web-1
resources:
requests:
cpu: "100"
memory: 200Gi
```
Since no node in the member clusters has that much CPU and memory, the workload fails to be scheduled.
```bash
$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/5 0 0 2m20s
$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters
NAME CLUSTER
nginx-deployment <none>
```

View File

@ -1,15 +0,0 @@
# Troubleshooting
## I can't access some resources when installing Karmada
- Pulling images from Google Container Registry (k8s.gcr.io)
You may run the following commands to change the image registry in mainland China:
```shell
sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-etcd.yaml
sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-apiserver.yaml
sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/kube-controller-manager.yaml
```
- Downloading Go packages in mainland China: run the following command before installing
```shell
export GOPROXY=https://goproxy.cn
```

View File

@ -1,83 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Upgrading Instruction](#upgrading-instruction)
- [Overview](#overview)
- [Regular Upgrading Process](#regular-upgrading-process)
- [Upgrading APIs](#upgrading-apis)
- [Manual Upgrade API](#manual-upgrade-api)
- [Upgrading Components](#upgrading-components)
- [Details Upgrading Instruction](#details-upgrading-instruction)
- [v0.8 to v0.9](#v08-to-v09)
- [v0.9 to v0.10](#v09-to-v010)
- [v0.10 to v1.0](#v010-to-v10)
- [v1.0 to v1.1](#v10-to-v11)
- [v1.1 to v1.2](#v11-to-v12)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Upgrading Instruction
## Overview
Karmada uses [semantic versioning](https://semver.org/), with each version in the format v`MAJOR`.`MINOR`.`PATCH`:
- The `PATCH` release does not introduce breaking changes.
- The `MINOR` release might introduce minor breaking changes with a workaround.
- The `MAJOR` release might introduce backward-incompatible behavior changes.
## Regular Upgrading Process
### Upgrading APIs
For releases that introduce API changes, the Karmada APIs (CRDs) that the Karmada components rely on must be upgraded to stay consistent.
Karmada CRD is composed of two parts:
- bases: The CRD definition generated via API structs.
- patches: conversion settings for the CRD.
In order to support multiple versions of custom resources, the `patches` should be injected into the `bases`.
To achieve this, we introduced a `kustomization.yaml` configuration and use `kubectl kustomize` to build the final CRDs.
The `bases`, `patches`, and `kustomization.yaml` are now located in the `charts/karmada/_crds` directory of the repo.
#### Manual Upgrade API
**Step 1: Get the Webhook CA certificate**
The CA certificate will be injected into `patches` before building the final CRD.
We can retrieve it from the `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` configurations, e.g:
```bash
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io mutating-config
```
Copy the `ca_string` from the yaml path `webhooks.name[x].clientConfig.caBundle`, then replace the `{{caBundle}}` from
the yaml files in `patches`. e.g:
```bash
sed -i'' -e "s/{{caBundle}}/${ca_string}/g" ./"charts/karmada/_crds/patches/webhook_in_resourcebindings.yaml"
sed -i'' -e "s/{{caBundle}}/${ca_string}/g" ./"charts/karmada/_crds/patches/webhook_in_clusterresourcebindings.yaml"
```
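The `${ca_string}` used above can be captured without copying it by hand; a sketch, assuming the configuration is named `mutating-config` and its first webhook entry holds the bundle you need:
```bash
ca_string=$(kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io mutating-config \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}')
```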
**Step 2: Build the final CRD**
Generate the final CRD with the `kubectl kustomize` command, e.g.:
```bash
kubectl kustomize ./charts/karmada/_crds
```
Or apply it to `karmada-apiserver` directly with:
```bash
kubectl apply -k ./charts/karmada/_crds
```
### Upgrading Components
Upgrading the components consists of updating the image versions and possibly changing command-line arguments.
> For the argument changes please refer to `Details Upgrading Instruction` below.
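For the image part, one way to bump a component is `kubectl set image`; the deployment name, container name, registry, and tag below are illustrative, so substitute the component and release you are upgrading to:
```bash
kubectl -n karmada-system set image deployment/karmada-controller-manager \
  karmada-controller-manager=docker.io/karmada/karmada-controller-manager:v1.2.2
```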
## Details Upgrading Instruction
The following instructions are for minor version upgrades; skipping minor versions is not recommended.
It is also recommended to use the latest patch version when upgrading; for example, if you are upgrading from
v1.1.x to v1.2.x and the available patch versions are v1.2.0, v1.2.1, and v1.2.2, select v1.2.2.
### [v0.8 to v0.9](./v0.8-v0.9.md)
### [v0.9 to v0.10](./v0.9-v0.10.md)
### [v0.10 to v1.0](./v0.10-v1.0.md)
### [v1.0 to v1.1](./v1.0-v1.1.md)
### [v1.1 to v1.2](./v1.1-v1.2.md)

View File

@ -1,224 +0,0 @@
# v0.10 to v1.0
Follow the [Regular Upgrading Process](./README.md).
## Upgrading Notable Changes
### Introduced `karmada-aggregated-apiserver` component
In releases before v1.0.0, we used a CRD to extend the
[Cluster API](https://github.com/karmada-io/karmada/tree/24f586062e0cd7c9d8e6911e52ce399106f489aa/pkg/apis/cluster);
starting with v1.0.0, we use
[API Aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA) to
extend it.
Based on the above change, perform the following operations during the upgrade:
#### Step 1: Stop `karmada-apiserver`
You can stop `karmada-apiserver` by scaling its replicas to `0`.
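For instance, assuming `karmada-apiserver` runs as a Deployment in the `karmada-system` namespace of your host cluster:
```bash
# scale down for the migration; restore the original replica count at the end of Step 3
kubectl -n karmada-system scale deployment karmada-apiserver --replicas=0
```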
#### Step 2: Remove Cluster CRD from ETCD
Remove the `Cluster CRD` from ETCD directly by running the following command.
```
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
--key="/etc/kubernetes/pki/etcd/karmada.key" \
--cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io
```
> Note: This command only removes the `CRD` resource; the `CR`s (Cluster objects) are not changed.
> That's the reason why we don't remove the CRD through `karmada-apiserver`.
#### Step 3: Prepare the certificate for the `karmada-aggregated-apiserver`
To avoid [CA Reusage and Conflicts](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts),
create a CA signer and sign a certificate to enable the aggregation layer.
Update `karmada-cert-secret` secret in `karmada-system` namespace:
```diff
apiVersion: v1
kind: Secret
metadata:
name: karmada-cert-secret
namespace: karmada-system
type: Opaque
data:
...
+ front-proxy-ca.crt: |
+ {{front_proxy_ca_crt}}
+ front-proxy-client.crt: |
+ {{front_proxy_client_crt}}
+ front-proxy-client.key: |
+ {{front_proxy_client_key}}
```
Then update `karmada-apiserver` deployment's container command:
```diff
- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt
- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
```
After the update, restore the replicas of `karmada-apiserver` instances.
#### Step 4: Deploy `karmada-aggregated-apiserver`:
Deploy a `karmada-aggregated-apiserver` instance to your `host cluster` with the following manifests:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: karmada-aggregated-apiserver
namespace: karmada-system
labels:
app: karmada-aggregated-apiserver
apiserver: "true"
spec:
selector:
matchLabels:
app: karmada-aggregated-apiserver
apiserver: "true"
replicas: 1
template:
metadata:
labels:
app: karmada-aggregated-apiserver
apiserver: "true"
spec:
automountServiceAccountToken: false
containers:
- name: karmada-aggregated-apiserver
image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0
imagePullPolicy: IfNotPresent
volumeMounts:
- name: k8s-certs
mountPath: /etc/kubernetes/pki
readOnly: true
- name: kubeconfig
subPath: kubeconfig
mountPath: /etc/kubeconfig
command:
- /bin/karmada-aggregated-apiserver
- --kubeconfig=/etc/kubeconfig
- --authentication-kubeconfig=/etc/kubeconfig
- --authorization-kubeconfig=/etc/kubeconfig
- --karmada-config=/etc/kubeconfig
- --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379
- --etcd-cafile=/etc/kubernetes/pki/server-ca.crt
- --etcd-certfile=/etc/kubernetes/pki/karmada.crt
- --etcd-keyfile=/etc/kubernetes/pki/karmada.key
- --tls-cert-file=/etc/kubernetes/pki/karmada.crt
- --tls-private-key-file=/etc/kubernetes/pki/karmada.key
- --audit-log-path=-
- --feature-gates=APIPriorityAndFairness=false
- --audit-log-maxage=0
- --audit-log-maxbackup=0
resources:
requests:
cpu: 100m
volumes:
- name: k8s-certs
secret:
secretName: karmada-cert-secret
- name: kubeconfig
secret:
secretName: kubeconfig
---
apiVersion: v1
kind: Service
metadata:
name: karmada-aggregated-apiserver
namespace: karmada-system
labels:
app: karmada-aggregated-apiserver
apiserver: "true"
spec:
ports:
- port: 443
protocol: TCP
targetPort: 443
selector:
app: karmada-aggregated-apiserver
```
</details>
Then, deploy the `APIService` to `karmada-apiserver` with the following manifests.
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1alpha1.cluster.karmada.io
labels:
app: karmada-aggregated-apiserver
apiserver: "true"
spec:
insecureSkipTLSVerify: true
group: cluster.karmada.io
groupPriorityMinimum: 2000
service:
name: karmada-aggregated-apiserver
namespace: karmada-system
version: v1alpha1
versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
name: karmada-aggregated-apiserver
namespace: karmada-system
spec:
type: ExternalName
externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local
```
</details>
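As a sanity check, you may verify that the new `APIService` becomes available in `karmada-apiserver` (the kubeconfig path below assumes a default installation):
```shell
# The APIService should report Available=True once karmada-aggregated-apiserver is reachable.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get apiservice v1alpha1.cluster.karmada.io
```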
#### Step 5: Check cluster status
If everything goes well, you can see all your clusters just as before the upgrade.
```
kubectl get clusters
```
### `karmada-agent` requires an extra `impersonate` verb
In order to proxy users' requests, the `karmada-agent` now requires an extra `impersonate` verb.
Please check the `ClusterRole` configuration or apply the following manifest.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: karmada-agent
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ['*']
- nonResourceURLs: ['*']
verbs: ["get"]
```
### MCS feature now supports `Kubernetes v1.21+`
Since the `discovery.k8s.io/v1beta1` version of `EndpointSlices` has been deprecated in favor of `discovery.k8s.io/v1` in
[Kubernetes v1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md), Karmada adopted
this change in release v1.0.0.
Now the [MCS](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md) feature requires
the member cluster version to be no less than v1.21.
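A quick, hedged way to confirm that a member cluster already serves the required API group version:
```shell
# The MCS feature relies on discovery.k8s.io/v1, which is available since Kubernetes v1.21.
kubectl --kubeconfig <member-cluster-kubeconfig> api-versions | grep '^discovery.k8s.io/v1$'
```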

View File

@ -1,6 +0,0 @@
# v0.8 to v0.9
Nothing special other than the [Regular Upgrading Process](./README.md).
## Upgrading Notable Changes
Please refer to [v0.9.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.9.0) for more details.

View File

@ -1,12 +0,0 @@
# v0.9 to v0.10
Follow the [Regular Upgrading Process](./README.md).
## Upgrading Notable Changes
### karmada-scheduler
The `--failover` flag has been removed and replaced by `--feature-gates`.
If you enabled the failover feature via `--failover`, you should now change it to `--feature-gates=Failover=true`.
Please refer to [v0.10.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.10.0) for more details.

View File

@ -1,43 +0,0 @@
# v1.0 to v1.1
Follow the [Regular Upgrading Process](./README.md).
## Upgrading Notable Changes
The validation process for `Cluster` objects now has been moved from `karmada-webhook` to `karmada-aggregated-apiserver`
by [PR 1152](https://github.com/karmada-io/karmada/pull/1152), you have to remove the `Cluster` webhook configuration
from `ValidatingWebhookConfiguration`, such as:
```diff
diff --git a/artifacts/deploy/webhook-configuration.yaml b/artifacts/deploy/webhook-configuration.yaml
index 0a89ad36..f7a9f512 100644
--- a/artifacts/deploy/webhook-configuration.yaml
+++ b/artifacts/deploy/webhook-configuration.yaml
@@ -69,20 +69,6 @@ metadata:
labels:
app: validating-config
webhooks:
- - name: cluster.karmada.io
- rules:
- - operations: ["CREATE", "UPDATE"]
- apiGroups: ["cluster.karmada.io"]
- apiVersions: ["*"]
- resources: ["clusters"]
- scope: "Cluster"
- clientConfig:
- url: https://karmada-webhook.karmada-system.svc:443/validate-cluster
- caBundle: {{caBundle}}
- failurePolicy: Fail
- sideEffects: None
- admissionReviewVersions: ["v1"]
- timeoutSeconds: 3
- name: propagationpolicy.karmada.io
rules:
- operations: ["CREATE", "UPDATE"]
```
Otherwise, when joining clusters (or updating Cluster objects), the request will be rejected with the following error:
```
Error: failed to create cluster(host) object. error: Internal error occurred: failed calling webhook "cluster.karmada.io": the server could not find the requested resource
```
Please refer to [v1.1.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.1.0) for more details.

View File

@ -1,53 +0,0 @@
# v1.1 to v1.2
Follow the [Regular Upgrading Process](./README.md).
## Upgrading Notable Changes
### karmada-controller-manager
The `hpa` controller is now disabled by default. If you are using this controller, please enable it as per [Configure Karmada controllers](https://github.com/karmada-io/karmada/blob/master/docs/userguide/configure-controllers.md#configure-karmada-controllers).
### karmada-aggregated-apiserver
The flags `--karmada-config` and `--master`, deprecated in v1.1, have been removed from the codebase.
Please remember to remove the flags `--karmada-config` and `--master` from the `karmada-aggregated-apiserver` deployment yaml.
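A hedged way to inspect the currently configured flags before editing, assuming the component runs as a Deployment named `karmada-aggregated-apiserver` in the `karmada-system` namespace of the host cluster:
```shell
# Print the container command to confirm whether --karmada-config or --master is still set.
kubectl --kubeconfig <host-cluster-kubeconfig> -n karmada-system get deployment karmada-aggregated-apiserver \
  -o jsonpath='{.spec.template.spec.containers[0].command}'
```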
### karmadactl
The `karmadactl promote` command now supports AA. For detailed info, please refer to [1795](https://github.com/karmada-io/karmada/pull/1795).
In order to use AA by default, you need to deploy the RBAC resources in the following manifests.
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-proxy-admin
rules:
- apiGroups:
- 'cluster.karmada.io'
resources:
- clusters/proxy
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-proxy-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-proxy-admin
subjects:
- kind: User
name: "system:admin"
```
</details>
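These are Karmada control plane resources, so apply them against `karmada-apiserver`; the kubeconfig path and the manifest file name below are only illustrative:
```shell
# Save the RBAC manifests above as cluster-proxy-admin-rbac.yaml, then apply them to karmada-apiserver.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f cluster-proxy-admin-rbac.yaml
```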
Please refer to [v1.2.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.2.0) for more details.

View File

@ -1,256 +0,0 @@
# Aggregated Kubernetes API Endpoint
The newly introduced [karmada-aggregated-apiserver](https://github.com/karmada-io/karmada/blob/master/cmd/aggregated-apiserver/main.go) component aggregates all registered clusters and allows users to access member clusters through Karmada by the proxy endpoint.
For detailed discussion topic, see [here](https://github.com/karmada-io/karmada/discussions/1077).
Here's a quick start.
## Quick start
To quickly experience this feature, let's experiment with the karmada-apiserver certificate.
### Step1: Obtain the karmada-apiserver Certificate
For Karmada deployed using `hack/local-up-karmada.sh`, you can directly copy it from the `$HOME/.kube/` directory.
```shell
cp $HOME/.kube/karmada.config karmada-apiserver.config
```
### Step2: Grant permission to user `system:admin`
`system:admin` is the user of the karmada-apiserver certificate. We need to explicitly grant it the `clusters/proxy` permission.
Apply the following yaml file:
cluster-proxy-rbac.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-proxy-clusterrole
rules:
- apiGroups:
- 'cluster.karmada.io'
resources:
- clusters/proxy
resourceNames:
- member1
- member2
- member3
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-proxy-clusterrolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-proxy-clusterrole
subjects:
- kind: User
name: "system:admin"
```
</details>
```shell
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml
```
### Step3: Access member clusters
Run the below command (replace `{clustername}` with your actual cluster name):
```shell
kubectl --kubeconfig karmada-apiserver.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy/api/v1/nodes
```
Or append `/apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy` to the server address of karmada-apiserver.config, and then you can directly use:
```shell
kubectl --kubeconfig karmada-apiserver.config get node
```
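One hedged way to do the append is `kubectl config set` on a copy of the kubeconfig; the cluster entry name, server address and member cluster name below are assumptions, adjust them to your file:
```shell
# Point the kubeconfig's cluster entry at the proxy endpoint of cluster member1.
kubectl config --kubeconfig=karmada-apiserver.config set \
  clusters.<cluster-entry-name>.server \
  https://<karmada-apiserver-address>/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy
```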
> Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-karmada access, we can [deploy apiserver-network-proxy (ANP)](../working-with-anp.md) to access it.
## Unified authentication
For one or a group of user subjects (users, groups, or service accounts) in a member cluster, we can import them into the Karmada control plane and grant them the `clusters/proxy` permission, so that we can access the member cluster through Karmada with the permission of the user subject.
In this section, we use a serviceaccount named `tom` for the test.
### Step1: Create ServiceAccount in member1 cluster (optional)
If the serviceaccount has been created in your environment, you can skip this step.
Create a serviceaccount that does not have any permission:
```shell
kubectl --kubeconfig $HOME/.kube/members.config --context member1 create serviceaccount tom
```
### Step2: Create ServiceAccount in Karmada control plane
```shell
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver create serviceaccount tom
```
In order to grant the serviceaccount the `clusters/proxy` permission, apply the following RBAC yaml file:
cluster-proxy-rbac.yaml:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-proxy-clusterrole
rules:
- apiGroups:
- 'cluster.karmada.io'
resources:
- clusters/proxy
resourceNames:
- member1
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-proxy-clusterrolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-proxy-clusterrole
subjects:
- kind: ServiceAccount
name: tom
namespace: default
# The token generated by the serviceaccount can parse the group information. Therefore, you need to specify the group information below.
- kind: Group
name: "system:serviceaccounts"
- kind: Group
name: "system:serviceaccounts:default"
```
</details>
```shell
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml
```
### Step3: Access member1 cluster
Obtain the token of serviceaccount `tom`:
```shell
kubectl get secret `kubectl get sa tom -oyaml | grep token | awk '{print $3}'` -oyaml | grep token: | awk '{print $2}' | base64 -d
```
Then construct a kubeconfig file `tom.config` for `tom` serviceaccount:
```yaml
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: {karmada-apiserver-address} # Replace {karmada-apiserver-address} with karmada-apiserver-address. You can find it in $HOME/.kube/karmada.config file.
name: tom
contexts:
- context:
cluster: tom
user: tom
name: tom
current-context: tom
kind: Config
users:
- name: tom
user:
token: {token} # Replace {token} with the token obtain above.
```
Run the command below to access member1 cluster:
```shell
kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/apis
```
We can see that the access succeeds. But if we run the command below:
```shell
kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
```
it will fail because serviceaccount `tom` does not have any permissions in the member1 cluster.
### Step4: Grant permission to Serviceaccount in member1 cluster
Apply the following YAML file:
member1-rbac.yaml
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: tom
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tom
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: tom
subjects:
- kind: ServiceAccount
name: tom
namespace: default
```
</details>
```shell
kubectl --kubeconfig $HOME/.kube/members.config --context member1 apply -f member1-rbac.yaml
```
Run the command that failed in the previous step again:
```shell
kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
```
The access will be successful.
Alternatively, we can append `/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy` to the server address of tom.config, and then directly use:
```shell
kubectl --kubeconfig tom.config get node
```
> Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-karmada access, we can [deploy apiserver-network-proxy (ANP)](../working-with-anp.md) to access it.

View File

@ -1,106 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Cluster Registration](#cluster-registration)
- [Overview of cluster mode](#overview-of-cluster-mode)
- [Push mode](#push-mode)
- [Pull mode](#pull-mode)
- [Register cluster with 'Push' mode](#register-cluster-with-push-mode)
- [Register cluster by CLI](#register-cluster-by-cli)
- [Check cluster status](#check-cluster-status)
- [Unregister cluster by CLI](#unregister-cluster-by-cli)
- [Register cluster with 'Pull' mode](#register-cluster-with-pull-mode)
- [Register cluster](#register-cluster)
- [Check cluster status](#check-cluster-status-1)
- [Unregister cluster](#unregister-cluster)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Cluster Registration
## Overview of cluster mode
Karmada supports both `Push` and `Pull` modes to manage the member clusters.
The main difference between `Push` and `Pull` modes is the way they access member clusters when deploying manifests.
### Push mode
Karmada control plane will access member cluster's `kube-apiserver` directly to get cluster status and deploy manifests.
### Pull mode
The Karmada control plane will not access the member cluster directly but delegates that to an extra component named `karmada-agent`.
Each `karmada-agent` serves one cluster and takes responsibility for:
- Registering the cluster to Karmada (creates the `Cluster` object)
- Maintaining the cluster status and reporting it to Karmada (updates the status of the `Cluster` object)
- Watching manifests from the Karmada execution space (namespace `karmada-es-<cluster name>`) and deploying them to the cluster it serves.
## Register cluster with 'Push' mode
You can use the [kubectl-karmada](./../installation/install-kubectl-karmada.md) CLI to `join` (register) and `unjoin` (unregister) clusters.
### Register cluster by CLI
Join cluster with name `member1` to Karmada by using the following command.
```
kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
```
Repeat this step to join any additional clusters.
The `--kubeconfig` flag specifies Karmada's `kubeconfig` file and the CLI infers the `karmada-apiserver` context
from the `current-context` field of the `kubeconfig`. If more than one context is configured in
the `kubeconfig` file, it is recommended to specify the context with the `--karmada-context` flag. For example:
```
kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada --cluster-kubeconfig=<member1 kubeconfig>
```
The `--cluster-kubeconfig` flag specifies the member cluster's `kubeconfig` and the CLI infers the member cluster's context
from the cluster name. If more than one context is configured in the `kubeconfig` file, or you don't want to use
the context name to register, it is recommended to specify the context with the `--cluster-context` flag. For example:
```
kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada \
--cluster-kubeconfig=<member1 kubeconfig> --cluster-context=member1
```
> Note: With `--cluster-context` specified, the name used to register the cluster can be different from the context name.
### Check cluster status
Check the status of the joined clusters by using the following command.
```
kubectl get clusters
NAME VERSION MODE READY AGE
member1 v1.20.7 Push True 66s
```
### Unregister cluster by CLI
You can unjoin clusters by using the following command.
```
kubectl karmada unjoin member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
```
During the unjoin process, the resources propagated to `member1` by Karmada will be cleaned up,
and the `--cluster-kubeconfig` is used to clean up the secret created at the `join` phase.
Repeat this step to unjoin any additional clusters.
## Register cluster with 'Pull' mode
### Register cluster
After `karmada-agent` is deployed, it will register the cluster automatically at the start-up phase.
### Check cluster status
Check the status of the registered clusters by using the same command above.
```
kubectl get clusters
NAME VERSION MODE READY AGE
member3 v1.20.7 Pull True 66s
```
### Unregister cluster
Undeploy the `karmada-agent` and then remove the `cluster` manually from Karmada.
```
kubectl delete cluster member3
```

View File

@ -1,136 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Configure Controllers](#configure-controllers)
- [Karmada Controllers](#karmada-controllers)
- [Configure Karmada Controllers](#configure-karmada-controllers)
- [Kubernetes Controllers](#kubernetes-controllers)
- [Required Controllers](#required-controllers)
- [namespace](#namespace)
- [garbagecollector](#garbagecollector)
- [serviceaccount-token](#serviceaccount-token)
- [Optional Controllers](#optional-controllers)
- [ttl-after-finished](#ttl-after-finished)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Configure Controllers
Karmada maintains a bunch of controllers which are control loops that watch the state of your system, then make or
request changes where needed. Each controller tries to move the current state closer to the desired state.
See [Kubernetes Controller Concepts][1] for more details.
## Karmada Controllers
The controllers are embedded into the `karmada-controller-manager` or `karmada-agent` components and will be launched
along with the components' startup. Some controllers may be shared by `karmada-controller-manager` and `karmada-agent`.
| Controller | In karmada-controller-manager | In karmada-agent |
|---------------|-------------------------------|------------------|
| cluster | Y | N |
| clusterStatus | Y | Y |
| binding | Y | N |
| execution | Y | Y |
| workStatus | Y | Y |
| namespace | Y | N |
| serviceExport | Y | Y |
| endpointSlice | Y | N |
| serviceImport | Y | N |
| unifiedAuth | Y | N |
| hpa | Y | N |
### Configure Karmada Controllers
You can use the `--controllers` flag to specify the enabled controller list for `karmada-controller-manager` and
`karmada-agent`, or disable some of them from the default list.
E.g. Specify a controller list:
```bash
--controllers=cluster,clusterStatus,binding,xxx
```
E.g. Disable some controllers (remember to keep `*` if you want to keep the rest of the controllers in the default list):
```bash
--controllers=-hpa,-unifiedAuth,*
```
Use `-foo` to disable the controller named `foo`.
> Note: The default controller list might change in future releases. The controllers enabled in the last release
> might be disabled or deprecated, and new controllers might be introduced too. Users who are using this flag should
> check the release notes before upgrading the system.
## Kubernetes Controllers
In addition to the controllers that are maintained by the Karmada community, Karmada also requires some controllers from
Kubernetes. These controllers run as part of `kube-controller-manager` and are maintained by the Kubernetes community.
It is recommended to deploy `kube-controller-manager` along with the Karmada components. The installation
methods listed in the [installation guide][2] will help you deploy it together with the Karmada components.
### Required Controllers
Not all controllers in `kube-controller-manager` are necessary for Karmada. If you are deploying
Karmada using other tools, you might have to configure the controllers with the `--controllers` flag, just like what we did in the
[example of kube-controller-manager deployment][3].
The following controllers are tested and recommended by Karmada.
#### namespace
The `namespace` controller runs as part of `kube-controller-manager`. It watches `Namespace` deletion and deletes
all resources in the given namespace.
For the Karmada control plane, we inherit this behavior to keep a consistent user experience. More than that, we also
rely on this feature in the implementation of Karmada controllers. For example, when un-registering a cluster,
Karmada would delete the `execution namespace` (named `karmada-es-<cluster name>`) that stores all the resources
propagated to that cluster, to ensure all the resources can be cleaned up from both the Karmada control plane and the
given cluster.
For more details about the `namespace` controller, please refer to the
[namespace controller sync logic](https://github.com/kubernetes/kubernetes/blob/v1.23.4/pkg/controller/namespace/deletion/namespaced_resources_deleter.go#L82-L94).
#### garbagecollector
The `garbagecollector` controller runs as part of `kube-controller-manager`. It is used to clean up garbage resources.
It manages [owner reference](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) and
deletes the resources once all owners are absent.
For the Karmada control plane, we also use `owner reference` to link objects to each other. For example, each
`ResourceBinding` has an owner reference that links to the `resource template`. Once the `resource template` is removed,
the `ResourceBinding` will be removed by the `garbagecollector` controller automatically.
For more details about garbage collection mechanisms, please refer to
[Garbage Collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/).
#### serviceaccount-token
The `serviceaccount-token` controller runs as part of `kube-controller-manager`.
It watches `ServiceAccount` creation and creates a corresponding ServiceAccount token Secret to allow API access.
For the Karmada control plane, after a `ServiceAccount` object is created by the administrator, we also need the
`serviceaccount-token` controller to generate the ServiceAccount token `Secret`, which relieves the
administrator from manually preparing the token.
For more details, please refer to:
- [service account token controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller)
- [service account tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens)
### Optional Controllers
#### ttl-after-finished
The `ttl-after-finished` controller runs as part of `kube-controller-manager`.
It watches `Job` updates and limits the lifetime of finished `Jobs`.
The TTL timer starts when the Job finishes, and the finished Job will be cleaned up after the TTL expires.
For the Karmada control plane, we also provide the capability to clean up finished `Jobs` automatically by
specifying the `.spec.ttlSecondsAfterFinished` field of a Job, which relieves the control plane from keeping finished Jobs around (a sketch follows the links below).
For more details, please refer to:
- [ttl after finished controller](https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#ttl-after-finished-controller)
- [clean up finished jobs automatically](https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
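As an illustration (a minimal sketch, not tied to any workload used elsewhere in this guide), a Job submitted to the Karmada control plane can opt into automatic cleanup like this:
```shell
# Create a Job that will be deleted 100 seconds after it finishes.
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
EOF
```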
[1]: https://kubernetes.io/docs/concepts/architecture/controller/
[2]: https://github.com/karmada-io/karmada/blob/master/docs/installation/installation.md
[3]: https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/kube-controller-manager.yaml

View File

@ -1,180 +0,0 @@
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Customizing Resource Interpreter](#customizing-resource-interpreter)
- [Resource Interpreter Framework](#resource-interpreter-framework)
- [Interpreter Operations](#interpreter-operations)
- [Built-in Interpreter](#built-in-interpreter)
- [InterpretReplica](#interpretreplica)
- [ReviseReplica](#revisereplica)
- [Retain](#retain)
- [AggregateStatus](#aggregatestatus)
- [InterpretStatus](#interpretstatus)
- [InterpretDependency](#interpretdependency)
- [Customized Interpreter](#customized-interpreter)
- [What are interpreter webhooks?](#what-are-interpreter-webhooks)
- [Write an interpreter webhook server](#write-an-interpreter-webhook-server)
- [Deploy the admission webhook service](#deploy-the-admission-webhook-service)
- [Configure webhook on the fly](#configure-webhook-on-the-fly)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Customizing Resource Interpreter
## Resource Interpreter Framework
In the process of propagating a resource from `karmada-apiserver` to member clusters, Karmada needs to know the
resource definition. Taking `Propagating Deployment` as an example, at the phase of building the `ResourceBinding`, the
`karmada-controller-manager` will parse the `replicas` from the deployment object.
For Kubernetes native resources, Karmada knows how to parse them, but for custom resources defined by `CRD` (or extended
by something like an `aggregated-apiserver`), due to the lack of knowledge of the resource structure, they can only be treated
as normal resources. Therefore, the advanced scheduling algorithms cannot be used for them.
The [Resource Interpreter Framework][1] is designed for interpreting resource structure. It consists of `built-in` and
`customized` interpreters:
- built-in interpreter: used for common Kubernetes native or well-known extended resources.
- customized interpreter: interprets custom resources or overrides the built-in interpreters.
> Note: The major difference between `built-in` and `customized` interpreters is that the `built-in` interpreter is
> implemented and maintained by Karmada community and will be built into Karmada components, such as
> `karmada-controller-manager`. On the contrary, the `customized` interpreter is implemented and maintained by users.
> It should be registered to Karmada as an `Interpreter Webhook` (see below for more details).
### Interpreter Operations
When interpreting resources, we often need to extract multiple pieces of information. The `Interpreter Operations`
defines the interpreter request type, and the `Resource Interpreter Framework` provides services for each operation
type.
For all operations designed by `Resource Interpreter Framework`, please refer to [Interpreter Operations][2].
> Note: Not all the designed operations are supported (see below for supported operations).
> Note: At most one interpreter will be consulted when interpreting a resource with a specific `interpreter operation`,
> and the `customized` interpreter has a higher priority than the `built-in` interpreter if they both interpret the same
> resource.
> For example, the `built-in` interpreter serves `InterpretReplica` for `Deployment` with version `apps/v1`. If there
> is a customized interpreter registered to Karmada for interpreting the same resource, the `customized` interpreter wins and the
> `built-in` interpreter will be ignored.
## Built-in Interpreter
For the common Kubernetes native or well-known extended resources, the interpreter operations are built-in, which means
the users usually don't need to implement customized interpreters. If you want more resources to be built-in,
please feel free to [file an issue][3] to let us know your use case.
The built-in interpreter now supports the following interpreter operations:
### InterpretReplica
Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)
### ReviseReplica
Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)
### Retain
Supported resources:
- Pod(v1)
- Service(v1)
- ServiceAccount(v1)
- PersistentVolumeClaim(v1)
- Job(batch/v1)
### AggregateStatus
Supported resources:
- Deployment(apps/v1)
- Service(v1)
- Ingress(extensions/v1beta1)
- Job(batch/v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)
- Pod(v1)
- PersistentVolumeClaim(v1)
### InterpretStatus
Supported resources:
- Deployment(apps/v1)
- Service(v1)
- Ingress(extensions/v1beta1)
- Job(batch/v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)
### InterpretDependency
Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)
- CronJob(batch/v1)
- Pod(v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)
## Customized Interpreter
The customized interpreter is implemented and maintained by users. It is developed as an extension and
runs as a webhook at runtime.
### What are interpreter webhooks?
Interpreter webhooks are HTTP callbacks that receive interpret requests and do something with them.
### Write an interpreter webhook server
Please refer to the implementation of the [Example of Customize Interpreter][4] that is validated
in the Karmada E2E test. The webhook handles the `ResourceInterpreterRequest` request sent by the
Karmada components (such as `karmada-controller-manager`), and sends back its decision as a
`ResourceInterpreterResponse`.
### Deploy the admission webhook service
The [Example of Customize Interpreter][4] is deployed in the host cluster for E2E testing and exposed by
a service as the front-end of the webhook server.
You may also deploy your webhooks outside the cluster. You will need to update your webhook
configurations accordingly.
### Configure webhook on the fly
You can configure what resources and supported operations are subject to what interpreter webhook
via [ResourceInterpreterWebhookConfiguration][5].
The following is an example `ResourceInterpreterWebhookConfiguration`:
```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
name: examples
webhooks:
- name: workloads.example.com
rules:
- operations: [ "InterpretReplica","ReviseReplica","Retain","AggregateStatus" ]
apiGroups: [ "workload.example.io" ]
apiVersions: [ "v1alpha1" ]
kinds: [ "Workload" ]
clientConfig:
url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
caBundle: {{caBundle}}
interpreterContextVersions: [ "v1alpha1" ]
timeoutSeconds: 3
```
You can configure more than one webhook in a `ResourceInterpreterWebhookConfiguration`; each webhook
serves at least one operation.
[1]: https://github.com/karmada-io/karmada/tree/master/docs/proposals/resource-interpreter-webhook
[2]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L71-L108
[3]: https://github.com/karmada-io/karmada/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md
[4]: https://github.com/karmada-io/karmada/tree/master/examples/customresourceinterpreter
[5]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L16
[6]: https://github.com/karmada-io/karmada/blob/master/examples/customresourceinterpreter/webhook-configuration.yaml

View File

@ -1,207 +0,0 @@
# Failover Overview
## Monitor the cluster health status
Karmada supports both `Push` and `Pull` modes to manage member clusters.
For more details about cluster registration, please refer to [Cluster Registration](./cluster-registration.md#cluster-registration).
### Determining failures
For clusters, there are two forms of heartbeats:
- updates to the `.status` of a Cluster.
- `Lease` objects within the `karmada-cluster` namespace in karmada control plane. Each cluster has an associated `Lease` object.
#### Cluster status collection
For `Push` mode clusters, the cluster status controller in the karmada control plane will continually collect the cluster's status at a configured interval.
For `Pull` mode clusters, the `karmada-agent` is responsible for creating and updating the `.status` of clusters at a configured interval.
The interval for `.status` updates to `Cluster` can be configured via the `--cluster-status-update-frequency` flag (default is 10 seconds).
A cluster might be set to the `NotReady` state under the following conditions:
- the cluster is unreachable (retried 4 times within 2 seconds).
- the cluster's health endpoint does not respond with ok.
- collecting the cluster status failed, including the kubernetes version, installed APIs, resource usages, etc.
#### Lease updates
Karmada will create a `Lease` object and a lease controller for each cluster when the cluster joins.
Each lease controller is responsible for updating the related Leases. The lease renewal time can be configured via the `--cluster-lease-duration` and `--cluster-lease-renew-interval-fraction` flags (default is 10 seconds).
The lease updating process is independent of the cluster status updating process, since the cluster's `.status` field is maintained by the cluster status controller.
The cluster controller in the Karmada control plane checks the state of each cluster every `--cluster-monitor-period` (default is 5 seconds).
The cluster's `Ready` condition is changed to `Unknown` when the cluster controller has not heard from the cluster in the last `--cluster-monitor-grace-period` (default is 40 seconds).
### Check cluster status
You can use `kubectl` to check a Cluster's status and other details:
```
kubectl describe cluster <cluster-name>
```
The `Ready` condition in `Status` field indicates the cluster is healthy and ready to accept workloads.
It will be set to `False` if the cluster is not healthy and is not accepting workloads, and `Unknown` if the cluster controller has not heard from the cluster in the last `cluster-monitor-grace-period`.
The following example describes an unhealthy cluster:
```
kubectl describe cluster member1
Name: member1
Namespace:
Labels: <none>
Annotations: <none>
API Version: cluster.karmada.io/v1alpha1
Kind: Cluster
Metadata:
Creation Timestamp: 2021-12-29T08:49:35Z
Finalizers:
karmada.io/cluster-controller
Resource Version: 152047
UID: 53c133ab-264e-4e8e-ab63-a21611f7fae8
Spec:
API Endpoint: https://172.23.0.7:6443
Impersonator Secret Ref:
Name: member1-impersonator
Namespace: karmada-cluster
Secret Ref:
Name: member1
Namespace: karmada-cluster
Sync Mode: Push
Status:
Conditions:
Last Transition Time: 2021-12-31T03:36:08Z
Message: cluster is not reachable
Reason: ClusterNotReachable
Status: False
Type: Ready
Events: <none>
```
## Failover feature of Karmada
The failover feature is controlled by the `Failover` feature gate. Users need to enable the `Failover` feature gate of the karmada scheduler:
```
--feature-gates=Failover=true
```
### Concept
When member clusters are determined to be unhealthy, the karmada scheduler will reschedule the referenced applications.
There are several constraints:
- Each rescheduled application still needs to meet the restrictions of the PropagationPolicy, such as ClusterAffinity or SpreadConstraints.
- Applications distributed to the ready clusters by the initial scheduling will remain there during failover rescheduling.
#### Duplicated schedule type
For the `Duplicated` schedule policy, when the number of candidate clusters that meet the PropagationPolicy restrictions is not less than the number of failed clusters,
the application will be rescheduled to the candidate clusters, one for each failed cluster. Otherwise, no rescheduling occurs.
Take a `Deployment` as an example:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
- member5
spreadConstraints:
- maxGroups: 2
minGroups: 2
replicaScheduling:
replicaSchedulingType: Duplicated
```
Suppose there are 5 member clusters, and the initial scheduling result is in member1 and member2. When member2 fails, it triggers rescheduling.
It should be noted that rescheduling will not delete the application on the ready cluster member1. In the remaining 3 clusters, only member3 and member5 match the `clusterAffinity` policy.
Due to the limitations of spreadConstraints, the final result can be [member1, member3] or [member1, member5].
#### Divided schedule type
For the `Divided` schedule policy, the karmada scheduler will try to migrate replicas to other healthy clusters.
Take a `Deployment` as an example:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
placement:
clusterAffinity:
clusterNames:
- member1
- member2
replicaScheduling:
replicaDivisionPreference: Weighted
replicaSchedulingType: Divided
weightPreference:
staticWeightList:
- targetCluster:
clusterNames:
- member1
weight: 1
- targetCluster:
clusterNames:
- member2
weight: 2
```
The Karmada scheduler will divide the replicas according to the `weightPreference`. The initial schedule result is member1 with 1 replica and member2 with 2 replicas.
When member1 fails, it triggers rescheduling. The Karmada scheduler will try to migrate the replicas to other healthy clusters. The final result will be member2 with 3 replicas.

View File

@ -1,400 +0,0 @@
# Override Policy
The [OverridePolicy][1] and [ClusterOverridePolicy][2] are used to declare override rules for resources when
they are propagating to different clusters.
## Difference between OverridePolicy and ClusterOverridePolicy
ClusterOverridePolicy represents the cluster-wide policy that overrides a group of resources to one or more clusters, while OverridePolicy applies to resources in the same namespace as the namespace-wide policy. For cluster-scoped resources, ClusterOverridePolicies are applied in ascending order of policy name. For namespace-scoped resources, ClusterOverridePolicy is applied first, then OverridePolicy.
## Resource Selector
ResourceSelectors restricts the resource types that this override policy applies to. If you omit this field, the policy matches all resources.
A resource selector requires the `apiVersion` field, which represents the API version of the target resources, and the `kind` field, which represents the Kind of the target resources.
The allowed selectors are as follows:
- `namespace`: namespace of the target resource.
- `name`: name of the target resource
- `labelSelector`: A label query over a set of resources.
#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
namespace: test
labelSelector:
matchLabels:
app: nginx
overrideRules:
...
```
It means the override rules above will only be applied to the `Deployment` named nginx in the test namespace that has the label `app: nginx`.
## Target Cluster
Target Cluster restricts the override policy so that it only applies to resources propagated to the matching clusters. If you omit this field, the policy matches all clusters.
The allowed selectors are as follows:
- `labelSelector`: a filter to select member clusters by labels.
- `fieldSelector`: a filter to select member clusters by fields. Currently only three fields are supported: provider (cluster.spec.provider), zone (cluster.spec.zone), and region (cluster.spec.region).
- `clusterNames`: the list of clusters to be selected.
- `exclude`: the list of clusters to be ignored.
### labelSelector
#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- targetCluster:
labelSelector:
matchLabels:
cluster: member1
overriders:
...
```
It means the override rules above will only be applied to resources propagated to clusters which have the `cluster: member1` label.
### fieldSelector
#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- targetCluster:
fieldSelector:
matchExpressions:
- key: region
operator: In
values:
- cn-north-1
overriders:
...
```
It means the override rules above will only be applied to resources propagated to clusters whose `spec.region` field value is in [cn-north-1].
### clusterNames
#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- targetCluster:
clusterNames:
- member1
overriders:
...
```
It means the override rules above will only be applied to resources propagated to the cluster named member1.
### exclude
#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- targetCluster:
exclude:
- member1
overriders:
...
```
It means the override rules above will only be applied to resources propagated to clusters other than member1.
## Overriders
Karmada offers various alternatives to declare the override rules:
- `ImageOverrider`: dedicated to override images for workloads.
- `CommandOverrider`: dedicated to override commands for workloads.
- `ArgsOverrider`: dedicated to override args for workloads.
- `PlaintextOverrider`: a general-purpose tool to override any kind of resources.
### ImageOverrider
The `ImageOverrider` is a refined tool to override images in the format `[registry/]repository[:tag|@digest]` (e.g. `/spec/template/spec/containers/0/image`) for workloads such as `Deployment`.
The allowed operations are as follows:
- `add`: appends the registry, repository or tag/digest to the image from containers.
- `remove`: removes the registry, repository or tag/digest from the image from containers.
- `replace`: replaces the registry, repository or tag/digest of the image from containers.
#### Examples
Suppose we create a deployment named `myapp`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
...
spec:
template:
spec:
containers:
- image: myapp:1.0.0
name: myapp
```
**Example 1: Add the registry when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
imageOverrider:
- component: Registry
operator: add
value: test-repo
```
It means adding the registry `test-repo` to the image of `myapp`.
After the policy is applied to `myapp`, the image will be:
```yaml
containers:
- image: test-repo/myapp:1.0.0
name: myapp
```
**Example 2: replace the repository when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
imageOverrider:
- component: Repository
operator: replace
value: myapp2
```
It means replacing the repository `myapp` with `myapp2`.
After the policy is applied to `myapp`, the image will be:
```yaml
containers:
- image: myapp2:1.0.0
name: myapp
```
**Example 3: remove the tag when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
imageOverrider:
- component: Tag
operator: remove
```
It means removing the tag of the image `myapp`.
After the policy is applied to `myapp`, the image will be:
```yaml
containers:
- image: myapp
name: myapp
```
### CommandOverrider
The `CommandOverrider` is a refined tool to override commands(e.g.`/spec/template/spec/containers/0/command`)
for workloads, such as `Deployment`.
The allowed operations are as follows:
- `add`: appends one or more flags to the command list.
- `remove`: removes one or more flags from the command list.
#### Examples
Suppose we create a deployment named `myapp`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
...
spec:
template:
spec:
containers:
- image: myapp
name: myapp
command:
- ./myapp
- --parameter1=foo
- --parameter2=bar
```
**Example 1: Add flags when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
commandOverrider:
- containerName: myapp
operator: add
value:
- --cluster=member1
```
It means appending a new flag `--cluster=member1` to the command list of `myapp`.
After the policy is applied to `myapp`, the command list will be:
```yaml
containers:
- image: myapp
name: myapp
command:
- ./myapp
- --parameter1=foo
- --parameter2=bar
- --cluster=member1
```
**Example 2: Remove flags when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
commandOverrider:
- containerName: myapp
operator: remove
value:
- --parameter1=foo
```
It means removing the flag `--parameter1=foo` from the command list.
After the policy is applied to `myapp`, the `command` will be:
```yaml
containers:
- image: myapp
name: myapp
command:
- ./myapp
- --parameter2=bar
```
### ArgsOverrider
The `ArgsOverrider` is a refined tool to override args(such as `/spec/template/spec/containers/0/args`) for workloads,
such as `Deployments`.
The allowed operations are as follows:
- `add`: appends one or more args to the args list.
- `remove`: removes one or more args from the args list.
Note: the usage of `ArgsOverrider` is similar to that of `CommandOverrider`; you can refer to the `CommandOverrider` examples.
### PlaintextOverrider
The `PlaintextOverrider` is a simple overrider that overrides target fields according to path, operator and value, just like `kubectl patch`.
The allowed operations are as follows:
- `add`: appends one or more elements to the resources.
- `remove`: removes one or more elements from the resources.
- `replace`: replaces one or more elements from the resources.
Suppose we create a configmap named `myconfigmap`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: myconfigmap
...
data:
example: 1
```
**Example 1: replace data of the configmap when resources are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example
spec:
...
overrideRules:
- overriders:
plaintext:
- path: /data/example
operator: replace
value: 2
```
It means replacing the data of the configmap from `example: 1` to `example: 2`.
After the policy is applied to `myconfigmap`, the configmap will be:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: myconfigmap
...
data:
example: 2
```
[1]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L13
[2]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L189

View File

@ -1,58 +0,0 @@
# Promote legacy workload
Assume that there is a member cluster where a workload (like a Deployment) is deployed but not managed by Karmada. We can use the `karmadactl promote` command to let Karmada take over this workload directly without causing its pods to restart.
## Example
### For member cluster in `Push` mode
There is an `nginx` Deployment that belongs to namespace `default` in member cluster `cluster1`.
```
[root@master1]# kubectl get cluster
NAME VERSION MODE READY AGE
cluster1 v1.22.3 Push True 24d
```
```
[root@cluster1]# kubectl get deploy nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 66s
[root@cluster1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-sqjj4 1/1 Running 0 2m12s
```
We can promote it to Karmada by executing the command below on the Karmada control plane.
```
[root@master1]# karmadactl promote deployment nginx -n default -c cluster1
Resource "apps/v1, Resource=deployments"(default/nginx) is promoted successfully
```
The nginx deployment has been adopted by Karmada.
```
[root@master1]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 7m25s
```
And the pod created by the nginx deployment in the member cluster wasn't restarted.
```
[root@cluster1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-sqjj4 1/1 Running 0 15m
```
### For member cluster in `Pull` mode
Most steps are the same as those for clusters in `Push` mode. Only the flags of the `karmadactl promote` command are different.
```
karmadactl promote deployment nginx -n default -c cluster1 --cluster-kubeconfig=<CLUSTER_KUBECONFIG_PATH>
```
For more flags and examples of the command, you can use `karmadactl promote --help`.
> Note: As resource versions in Kubernetes keep evolving, the apiserver of the Karmada control plane could be different from that of member clusters. To avoid compatibility issues, you can specify the GVK of a resource, such as replacing `deployment` with `deployment.v1.apps`.

View File

@ -1,117 +0,0 @@
# Propagate dependencies
Deployment, Job, Pod, DaemonSet and StatefulSet dependencies (ConfigMaps, Secrets and ServiceAccounts) can be propagated to member
clusters automatically. This document demonstrates how to use this feature. For more design details, please refer to
[dependencies-automatically-propagation](../proposals/dependencies-automatically-propagation/README.md)
## Prerequisites
### Karmada has been installed
We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run the
`hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
### Enable PropagateDeps feature
```bash
kubectl edit deployment karmada-controller-manager -n karmada-system
```
Add `--feature-gates=PropagateDeps=true` option.
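If you prefer a non-interactive alternative to `kubectl edit`, the following sketch appends the flag with a JSON patch; it assumes the flag is added to the first container's command and that no `--feature-gates` value is set yet:
```shell
# Append --feature-gates=PropagateDeps=true to the karmada-controller-manager command.
kubectl -n karmada-system patch deployment karmada-controller-manager --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--feature-gates=PropagateDeps=true"}]'
```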
## Example
Create a Deployment mounted with a ConfigMap
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
labels:
app: my-nginx
spec:
replicas: 2
selector:
matchLabels:
app: my-nginx
template:
metadata:
labels:
app: my-nginx
spec:
containers:
- image: nginx
name: my-nginx
ports:
- containerPort: 80
volumeMounts:
- name: configmap
mountPath: "/configmap"
volumes:
- name: configmap
configMap:
name: my-nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: my-nginx-config
data:
nginx.properties: |
proxy-connect-timeout: "10s"
proxy-read-timeout: "10s"
client-max-body-size: "2m"
```
Create a propagation policy with this Deployment and set `propagateDeps: true`.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: my-nginx-propagation
spec:
propagateDeps: true
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: my-nginx
placement:
clusterAffinity:
clusterNames:
- member1
- member2
replicaScheduling:
replicaDivisionPreference: Weighted
replicaSchedulingType: Divided
weightPreference:
staticWeightList:
- targetCluster:
clusterNames:
- member1
weight: 1
- targetCluster:
clusterNames:
- member2
weight: 1
```
Upon successful policy execution, the Deployment and ConfigMap are properly propagated to the member clusters.
```bash
$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get propagationpolicy
NAME AGE
my-nginx-propagation 16s
$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
my-nginx 2/2 2 2 22m
# member cluster1
$ kubectl config use-context member1
Switched to context "member1".
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
my-nginx 1/1 1 1 25m
$ kubectl get configmap
NAME DATA AGE
my-nginx-config 1 26m
# member cluster2
$ kubectl config use-context member2
Switched to context "member2".
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
my-nginx 1/1 1 1 27m
$ kubectl get configmap
NAME DATA AGE
my-nginx-config 1 27m
```

View File

@ -1,281 +0,0 @@
# Resource Propagating
The [PropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L13) and [ClusterPropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L292) APIs are provided to propagate resources. For the differences between the two APIs, please see [here](../frequently-asked-questions.md#what-is-the-difference-between-propagationpolicy-and-clusterpropagationpolicy).
Here, we use PropagationPolicy as an example to describe how to propagate resources.
## Before you start
[Install Karmada](../installation/installation.md) and prepare the [karmadactl command-line](../installation/install-kubectl-karmada.md) tool.
## Deploy a simplest multi-cluster Deployment
### Create a PropagationPolicy object
You can propagate a Deployment by creating a PropagationPolicy object defined in a YAML file. For example, this YAML
file describes a Deployment object named nginx under the default namespace that needs to be propagated to the member1 cluster:
```yaml
# propagationpolicy.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: example-policy # The default namespace is `default`.
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope.
placement:
clusterAffinity:
clusterNames:
- member1
```
1. Create a propagationPolicy based on the YAML file:
```shell
kubectl apply -f propagationpolicy.yaml
```
2. Create a Deployment nginx resource:
```shell
kubectl create deployment nginx --image nginx
```
> Note: The resource exists only as a template in Karmada. After being propagated to a member cluster, the behavior of the resource is the same as in a single kubernetes cluster.
> Note: The resource and the PropagationPolicy can be created in any order.
3. Display information of the deployment:
```shell
karmadactl get deployment
```
The output is similar to this:
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
nginx member1 1/1 1 1 52s Y
```
4. List the pods created by the deployment:
```shell
karmadactl get pod -l app=nginx
```
The output is similar to this:
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-6799fc88d8-s7vv9 member1 1/1 Running 0 52s
```
### Update PropagationPolicy
You can update the propagationPolicy by applying a new YAML file. This YAML file propagates the Deployment to the member2 cluster.
```yaml
# propagationpolicy-update.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: example-policy
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: nginx
placement:
clusterAffinity:
clusterNames: # Modify the selected cluster to propagate the Deployment.
- member2
```
1. Apply the new YAML file:
```shell
kubectl apply -f propagationpolicy-update.yaml
```
2. Display information of the deployment (the output is similar to this):
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
nginx member2 1/1 1 1 5s Y
```
3. List the pods of the deployment (the output is similar to this):
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-6799fc88d8-8t8cc member2 1/1 Running 0 17s
```
> Note: Updating the `.spec.resourceSelectors` field to change the matched resources is currently not supported.
### Update Deployment
You can update the deployment template. The changes will be automatically synchronized to the member clusters.
1. Update deployment replicas to 2
2. Display information of the deployment (the output is similar to this):
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
nginx member2 2/2 2 2 7m59s Y
```
3. List the pods of the deployment (the output is similar to this):
```shell
The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode.
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-6799fc88d8-8t8cc member2 1/1 Running 0 8m12s
nginx-6799fc88d8-zpl4j member2 1/1 Running 0 17s
```
### Delete a propagationPolicy
Delete the propagationPolicy by name:
```shell
kubectl delete propagationpolicy example-policy
```
Deleting a propagationPolicy does not delete the deployments that were propagated to member clusters. To remove them, delete the deployment in the Karmada control plane:
```shell
kubectl delete deployment nginx
```
## Deploy a deployment into a specified set of target clusters
The `.spec.placement.clusterAffinity` field of PropagationPolicy restricts scheduling to a certain set of clusters. If it is not set, any cluster is a scheduling candidate.
It has four fields to set:
- LabelSelector
- FieldSelector
- ClusterNames
- ExcludeClusters
### LabelSelector
LabelSelector is a filter to select member clusters by labels. It is of type `*metav1.LabelSelector`. If it is non-nil and non-empty, only the clusters that match this filter will be selected.
PropagationPolicy can be configured as follows:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: test-propagation
spec:
...
placement:
clusterAffinity:
labelSelector:
matchLabels:
location: us
...
```
PropagationPolicy can also be configured as follows:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: test-propagation
spec:
...
placement:
clusterAffinity:
labelSelector:
matchExpressions:
- key: location
operator: In
values:
- us
...
```
For a description of `matchLabels` and `matchExpressions`, you can refer to [Resources that support set-based requirements](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements).
### FieldSelector
FieldSelector is a filter to select member clusters by fields. If it is non-nil and non-empty, only the clusters that match this filter will be selected.
PropagationPolicy can be configured as follows:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
...
placement:
clusterAffinity:
fieldSelector:
matchExpressions:
- key: provider
operator: In
values:
- huaweicloud
- key: region
operator: NotIn
values:
- cn-south-1
...
```
If multiple `matchExpressions` are specified in the `fieldSelector`, the cluster must match all `matchExpressions`.
The `key` in `matchExpressions` now supports three values: `provider`, `region`, and `zone`, which correspond to the `.spec.provider`, `.spec.region`, and `.spec.zone` fields of the Cluster object, respectively.
The `operator` in `matchExpressions` now supports `In` and `NotIn`.
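For reference, these keys match the corresponding fields of the Cluster object registered in Karmada. A sketch of a Cluster that the `fieldSelector` above would select (the field values are examples):
```yaml
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  provider: huaweicloud  # matched by the `provider` key
  region: cn-north-4     # matched by the `region` key (any region not in `cn-south-1`)
  zone: cn-north-4a      # matched by the `zone` key
  # other fields omitted
```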
### ClusterNames
Users can set the `ClusterNames` field to specify the selected clusters.
PropagationPolicy can be configured as follows:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
...
placement:
clusterAffinity:
clusterNames:
- member1
- member2
...
```
### ExcludeClusters
Users can set the `ExcludeClusters` field to specify the clusters to be ignored.
PropagationPolicy can be configured as follows:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: nginx-propagation
spec:
...
placement:
clusterAffinity:
exclude:
- member1
- member3
...
```
## Configuring Multi-Cluster HA for Deployment
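As a minimal sketch, you can duplicate the full replica count of a Deployment to every selected cluster with the `replicaScheduling` strategy of `policy.karmada.io/v1alpha1`, so the workload stays available even if one cluster fails (the policy name and clusters are examples):
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-ha
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Duplicated  # each selected cluster runs the full set of replicas
```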
## Multi-Cluster Failover
Please refer to [Failover feature of Karmada](failover.md#failover-feature-of-karmada).
## Propagate specified resources to clusters
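As a sketch, `resourceSelectors` can list several resources at once and can also match resources by label instead of by name (the label and names below are examples):
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: app-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      labelSelector:        # every Deployment carrying this label is selected
        matchLabels:
          app: nginx
    - apiVersion: v1
      kind: Service
      name: nginx           # a single resource selected by name
  placement:
    clusterAffinity:
      clusterNames:
        - member1
```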
## Adjusting the instance propagation policy of Deployment in clusters
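As a sketch, the replicas declared in the Deployment template can be divided among clusters with a weighted `replicaScheduling` strategy (the weights and cluster names are examples):
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-weighted
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided       # split the template replicas across clusters
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 2                      # member2 gets twice as many replicas as member1
```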
# Deploy apiserver-network-proxy (ANP)
## Purpose
For a member cluster that joins Karmada in the pull mode, you need to provide a method to connect the network between the Karmada control plane and the member cluster, so that karmada-aggregated-apiserver can access this member cluster.
Deploying ANP is one way to achieve this. This article describes how to deploy ANP in Karmada.
## Environment
Karmada can be deployed using the kind tool.
You can directly use `hack/local-up-karmada.sh` to deploy Karmada.
## Actions
### Step 1: Download code
To facilitate demonstration, the code is modified based on ANP v0.0.24 to support access to the front server through HTTP. Here is the code repository address: https://github.com/mrlihanbo/apiserver-network-proxy/tree/v0.0.24/dev.
```shell
git clone -b v0.0.24/dev https://github.com/mrlihanbo/apiserver-network-proxy.git
cd apiserver-network-proxy/
```
### Step 2: Compile images
Compile the proxy-server and proxy-agent images.
```shell
docker build . --build-arg ARCH=amd64 -f artifacts/images/agent-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24
docker build . --build-arg ARCH=amd64 -f artifacts/images/server-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24
```
### Step 3: Generate a certificate
Run the following command to check the IP address of karmada-host-control-plane:
```shell
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane
```
Run the `make certs` command to generate a certificate, specifying `PROXY_SERVER_IP` as the IP address obtained in the previous step.
```shell
make certs PROXY_SERVER_IP=x.x.x.x
```
The certificate is generated in the `certs` folder.
### Step 4: Deploy proxy-server
Save the `proxy-server.yaml` file in the root directory of the ANP code repository.
<details>
<summary>unfold me to see the yaml</summary>
```yaml
# proxy-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxy-server
namespace: karmada-system
spec:
replicas: 1
selector:
matchLabels:
app: proxy-server
template:
metadata:
labels:
app: proxy-server
spec:
containers:
- command:
- /proxy-server
args:
- --health-port=8092
- --cluster-ca-cert=/var/certs/server/cluster-ca-cert.crt
- --cluster-cert=/var/certs/server/cluster-cert.crt
- --cluster-key=/var/certs/server/cluster-key.key
- --mode=http-connect
- --proxy-strategies=destHost
- --server-ca-cert=/var/certs/server/server-ca-cert.crt
- --server-cert=/var/certs/server/server-cert.crt
- --server-key=/var/certs/server/server-key.key
image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8092
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 60
name: proxy-server
volumeMounts:
- mountPath: /var/certs/server
name: cert
restartPolicy: Always
hostNetwork: true
volumes:
- name: cert
secret:
secretName: proxy-server-cert
---
apiVersion: v1
kind: Secret
metadata:
name: proxy-server-cert
namespace: karmada-system
type: Opaque
data:
server-ca-cert.crt: |
{{server_ca_cert}}
server-cert.crt: |
{{server_cert}}
server-key.key: |
{{server_key}}
cluster-ca-cert.crt: |
{{cluster_ca_cert}}
cluster-cert.crt: |
{{cluster_cert}}
cluster-key.key: |
{{cluster_key}}
```
</details>
Save the `replace-proxy-server.sh` file in the root directory of the ANP code repository.
<details>
<summary>unfold me to see the shell</summary>
```shell
#!/bin/bash
cert_yaml=proxy-server.yaml
SERVER_CA_CERT=$(cat certs/frontend/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{server_ca_cert}}/${SERVER_CA_CERT}/g" ${cert_yaml}
SERVER_CERT=$(cat certs/frontend/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{server_cert}}/${SERVER_CERT}/g" ${cert_yaml}
SERVER_KEY=$(cat certs/frontend/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{server_key}}/${SERVER_KEY}/g" ${cert_yaml}
CLUSTER_CA_CERT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{cluster_ca_cert}}/${CLUSTER_CA_CERT}/g" ${cert_yaml}
CLUSTER_CERT=$(cat certs/agent/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{cluster_cert}}/${CLUSTER_CERT}/g" ${cert_yaml}
CLUSTER_KEY=$(cat certs/agent/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{cluster_key}}/${CLUSTER_KEY}/g" ${cert_yaml}
```
</details>
Run the following commands to run the script:
```shell
chmod +x replace-proxy-server.sh
bash replace-proxy-server.sh
```
Deploy the proxy-server on the Karmada control plane:
```shell
kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24 --name karmada-host
export KUBECONFIG=/root/.kube/karmada.config
kubectl --context=karmada-host apply -f proxy-server.yaml
```
### Step 5: Deploy proxy-agent
Save the `proxy-agent.yaml` file in the root directory of the ANP code repository.
<details>
<summary>unfold me to see the yaml</summary>
```yaml
# proxy-agent.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: proxy-agent
name: proxy-agent
namespace: karmada-system
spec:
replicas: 1
selector:
matchLabels:
app: proxy-agent
template:
metadata:
labels:
app: proxy-agent
spec:
containers:
- command:
- /proxy-agent
args:
- '--ca-cert=/var/certs/agent/ca.crt'
- '--agent-cert=/var/certs/agent/proxy-agent.crt'
- '--agent-key=/var/certs/agent/proxy-agent.key'
- '--proxy-server-host={{proxy_server_addr}}'
- '--proxy-server-port=8091'
- '--agent-identifiers=host={{identifiers}}'
image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24
imagePullPolicy: IfNotPresent
name: proxy-agent
livenessProbe:
httpGet:
scheme: HTTP
port: 8093
path: /healthz
initialDelaySeconds: 15
timeoutSeconds: 60
volumeMounts:
- mountPath: /var/certs/agent
name: cert
volumes:
- name: cert
secret:
secretName: proxy-agent-cert
---
apiVersion: v1
kind: Secret
metadata:
name: proxy-agent-cert
namespace: karmada-system
type: Opaque
data:
ca.crt: |
{{proxy_agent_ca_crt}}
proxy-agent.crt: |
{{proxy_agent_crt}}
proxy-agent.key: |
{{proxy_agent_key}}
```
</details>
Save the `replace-proxy-agent.sh` file in the root directory of the ANP code repository.
<details>
<summary>unfold me to see the shell</summary>
```shell
#!/bin/bash
cert_yaml=proxy-agent.yaml
karmada_controlplan_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane)
member3_cluster_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' member3-control-plane)
sed -i'' -e "s/{{proxy_server_addr}}/${karmada_controlplan_addr}/g" ${cert_yaml}
sed -i'' -e "s/{{identifiers}}/${member3_cluster_addr}/g" ${cert_yaml}
PROXY_AGENT_CA_CRT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{proxy_agent_ca_crt}}/${PROXY_AGENT_CA_CRT}/g" ${cert_yaml}
PROXY_AGENT_CRT=$(cat certs/agent/issued/proxy-agent.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{proxy_agent_crt}}/${PROXY_AGENT_CRT}/g" ${cert_yaml}
PROXY_AGENT_KEY=$(cat certs/agent/private/proxy-agent.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{proxy_agent_key}}/${PROXY_AGENT_KEY}/g" ${cert_yaml}
```
</details>
Run the following commands to run the script:
```shell
chmod +x replace-proxy-agent.sh
bash replace-proxy-agent.sh
```
Deploy the proxy-agent in a member cluster running in pull mode (in this example, the `member3` cluster is in pull mode):
```shell
kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24 --name member3
kubectl --kubeconfig=/root/.kube/members.config --context=member3 apply -f proxy-agent.yaml
```
**The ANP deployment is complete now.**
### Step 6: Add command flags for the karmada-agent deployment
After ANP is deployed, you need to add the extra command flags `--cluster-api-endpoint` and `--proxy-server-address` to the `karmada-agent` deployment in the `member3` cluster.
`--cluster-api-endpoint` is the APIEndpoint of the cluster. You can obtain it from the kubeconfig file of the `member3` cluster.
`--proxy-server-address` is the address of the proxy server that is used to proxy the cluster. In this case, you can set `--proxy-server-address` to `http://<karmada_controlplan_addr>:8088`. Get the `karmada_controlplan_addr` value through the following command:
```shell
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane
```
Port `8088` is set in the ANP code: https://github.com/mrlihanbo/apiserver-network-proxy/blob/v0.0.24/dev/cmd/server/app/server.go#L267. You can modify it to a different value if needed.
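A hedged sketch of what the added flags might look like in the `karmada-agent` container spec (the endpoint values are placeholders; use the real values obtained above and keep the existing flags unchanged):
```yaml
# deployment/karmada-agent in the karmada-system namespace of the member3 cluster
args:
  - --cluster-api-endpoint=https://<member3-apiserver>:6443        # from the member3 kubeconfig
  - --proxy-server-address=http://<karmada_controlplan_addr>:8088  # IP obtained by the command above
  # ...the original karmada-agent flags stay as they are
```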
# Working with Argo CD
This topic walks you through how to use the [Argo CD](https://github.com/argoproj/argo-cd/) to manage your workload
`across clusters` with `Karmada`.
## Prerequisites
### Argo CD Installation
You have installed Argo CD following the instructions in [Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/#getting-started).
### Karmada Installation
In this example, we are using a Karmada environment with at least `3` member clusters joined.
You can set up the environment by `hack/local-up-karmada.sh`, which is also used to run our E2E cases.
```bash
# kubectl get clusters
NAME VERSION MODE READY AGE
member1 v1.19.1 Push True 18h
member2 v1.19.1 Push True 18h
member3 v1.19.1 Pull True 17h
```
## Registering Karmada to Argo CD
This step registers Karmada control plane to Argo CD.
First list the contexts of all clusters in your current kubeconfig:
```bash
kubectl config get-contexts -o name
```
Choose the context of the Karmada control plane from the list and pass it to `argocd cluster add CONTEXTNAME`.
For example, for `karmada-apiserver` context, run:
```bash
argocd cluster add karmada-apiserver
```
If everything goes well, you can see the registered Karmada control plane from the Argo CD UI, e.g.:
![](./images/argocd-register-karmada.png)
## Creating Apps Via UI
### Preparing Apps
Take the [guestbook](https://github.com/argoproj/argocd-example-apps/tree/53e28ff20cc530b9ada2173fbbd64d48338583ba/guestbook)
as an example.
First, fork the [argocd-example-apps](https://github.com/argoproj/argocd-example-apps) repo and create a branch
`karmada-demo`.
Then, create a [PropagationPolicy manifest](https://github.com/RainbowMango/argocd-example-apps/blob/e499ea5c6f31b665366bfbe5161737dc8723fb3b/guestbook/propagationpolicy.yaml) under the `guestbook` directory.
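The manifest in the forked repo is the source of truth; as a rough sketch, such a PropagationPolicy could look like the following (the cluster names are examples):
```yaml
# guestbook/propagationpolicy.yaml (sketch)
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: guestbook
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: guestbook-ui
    - apiVersion: v1
      kind: Service
      name: guestbook-ui
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```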
### Creating Apps
Click the `+ New App` button as shown below:
![](./images/argocd-new-app.png)
Give your app the name `guestbook-multi-cluster`, use the project `default`, and leave the sync policy as `Manual`:
![](./images/argocd-new-app-name.png)
Connect the `forked repo` to Argo CD by setting the repository URL to the GitHub repo URL, setting the revision to `karmada-demo`,
and setting the path to `guestbook`:
![](./images/argocd-new-app-repo.png)
For Destination, set cluster to `karmada` and namespace to `default`:
![](./images/argocd-new-app-cluster.png)
### Syncing Apps
You can sync your applications via UI by simply clicking the SYNC button and following the pop-up instructions, e.g.:
![](./images/argocd-sync-apps.png)
For more details, please refer to [argocd guide: sync the application](https://argo-cd.readthedocs.io/en/stable/getting_started/#7-sync-deploy-the-application).
## Checking Apps Status
For a deployment running in more than one cluster, you don't need to create applications for each
cluster. You can get the overall and detailed status from one `Application`.
![](./images/argocd-status-overview.png)
The `svc/guestbook-ui`, `deploy/guestbook-ui` and `propagationpolicy/guestbook` in the middle of the picture are the
resources created by the manifests in the forked repo. The `resourcebinding/guestbook-ui-service` and
`resourcebinding/guestbook-ui-deployment` on the right of the picture are the resources created by Karmada.
### Checking Detailed Status
You can obtain the Deployment's detailed status by `resourcebinding/guestbook-ui-deployment`.
![](./images/argocd-status-resourcebinding.png)
### Checking Aggregated Status
You can obtain the aggregated status of the Deployment from UI by `deploy/guestbook-ui`.
![](./images/argocd-status-aggregated.png)
# Use Filebeat to collect logs of Karmada member clusters
[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [Kafka](https://github.com/apache/kafka) for indexing.
This document demonstrates how to use the `Filebeat` to collect logs of Karmada member clusters.
## Start up Karmada clusters
You just need to clone the Karmada repo and run the following script in the Karmada directory.
```bash
hack/local-up-karmada.sh
```
## Start Filebeat
1. Create the resource objects of Filebeat; the content is as follows. You can specify a list of inputs in the `filebeat.inputs` section of `filebeat.yml`. Inputs specify how Filebeat locates and processes input data. You can also configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example collects the log information of each container and writes the collected logs to a file. For more detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
  namespace: logging # must match the namespace of the filebeat ServiceAccount
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: logging
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# node: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
#output.elasticsearch:
# hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
# username: ${ELASTICSEARCH_USERNAME}
# password: ${ELASTICSEARCH_PASSWORD}
output.file:
path: "/tmp/filebeat"
filename: filebeat
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:8.0.0-beta1-amd64
imagePullPolicy: IfNotPresent
args: [ "-c", "/usr/share/filebeat/filebeat.yml", "-e",]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /usr/share/filebeat/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: inputs
mountPath: /usr/share/filebeat/inputs.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: inputs
configMap:
defaultMode: 0600
name: filebeat-config
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
```
2. Run the following command to apply the Karmada PropagationPolicy and ClusterPropagationPolicy.
```
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: filebeat-propagation
namespace: logging
spec:
resourceSelectors:
- apiVersion: v1
kind: Namespace
name: logging
- apiVersion: v1
kind: ServiceAccount
name: filebeat
namespace: logging
- apiVersion: v1
kind: ConfigMap
name: filebeat-config
namespace: logging
- apiVersion: apps/v1
kind: DaemonSet
name: filebeat
namespace: logging
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
EOF
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
name: filebeatsrbac-propagation
spec:
resourceSelectors:
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
name: filebeat
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
name: filebeat
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
EOF
```
3. Obtain the collected logs according to the `output` configuration of the `filebeat.yml`.
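For example, with the `output.file` settings above, a hedged way to check the result is to exec into a Filebeat pod on a member cluster (the pod name is a placeholder; the kubeconfig path assumes the environment created by `hack/local-up-karmada.sh`):
```bash
# list the Filebeat pods on member1
kubectl --kubeconfig=$HOME/.kube/members.config --context member1 -n logging get pod -l k8s-app=filebeat
# read the collected log files inside one of the pods
kubectl --kubeconfig=$HOME/.kube/members.config --context member1 -n logging \
  exec -it <filebeat-pod-name> -- ls -l /tmp/filebeat
```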
## Reference
- https://github.com/elastic/beats/tree/master/filebeat
- https://github.com/elastic/beats/tree/master/filebeat/docs
# Use Flux to support Helm chart propagation
[Flux](https://fluxcd.io/) is most useful when used as a deployment tool at the end of a Continuous Delivery Pipeline. Flux will make sure that your new container images and config changes are propagated to the cluster. With Flux, Karmada can easily realize the ability to distribute applications packaged by Helm across clusters. Not only that, with Karmada's OverridePolicy, users can customize applications for specific clusters and manage cross-cluster applications on the unified Karmada Control Plane.
## Start up Karmada clusters
To start up Karmada, you can refer to [here](https://github.com/karmada-io/karmada/blob/master/docs/installation/installation.md).
If you just want to try Karmada, we recommend building a development environment by ```hack/local-up-karmada.sh```.
```sh
git clone https://github.com/karmada-io/karmada
cd karmada
hack/local-up-karmada.sh
```
After that, you will start a Kubernetes cluster by kind to run the Karmada Control Plane and create member clusters managed by Karmada.
```sh
kubectl get clusters --kubeconfig ~/.kube/karmada.config
```
You can use the command above to check registered clusters, and you will get similar output as follows:
```
NAME VERSION MODE READY AGE
member1 v1.23.4 Push True 7m38s
member2 v1.23.4 Push True 7m35s
member3 v1.23.4 Pull True 7m27s
```
## Start up Flux
In the Karmada Control Plane, you need to install the Flux CRDs but do not need the controllers to reconcile them. They are treated as resource templates, not specific resource instances.
Based on the work API [here](https://github.com/kubernetes-sigs/work-api), they will be encapsulated as a Work object, delivered to member clusters, and finally reconciled by the Flux controllers in the member clusters.
```sh
kubectl apply -k github.com/fluxcd/flux2/manifests/crds?ref=main --kubeconfig ~/.kube/karmada.config
```
For testing purposes, we'll install Flux on member clusters without storing its manifests in a Git repository:
```sh
flux install --kubeconfig ~/.kube/members.config --context member1
flux install --kubeconfig ~/.kube/members.config --context member2
```
Tips:
1. If you want to manage Helm releases across your fleet of clusters, Flux must be installed on each cluster.
2. If the Flux toolkit controllers are successfully installed, you should see the following Pods:
```
$ kubectl get pod -n flux-system
NAME READY STATUS RESTARTS AGE
helm-controller-55896d6ccf-dlf8b 1/1 Running 0 15d
kustomize-controller-76795877c9-mbrsk 1/1 Running 0 15d
notification-controller-7ccfbfbb98-lpgjl 1/1 Running 0 15d
source-controller-6b8d9cb5cc-7dbcb 1/1 Running 0 15d
```
## Helm release propagation
If you want to propagate Helm releases for your apps to member clusters, you can refer to the guide below.
1. Define a Flux `HelmRepository` and a `HelmRelease` manifest in the Karmada Control Plane. They will serve as resource templates.
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: podinfo
spec:
interval: 1m
url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: podinfo
spec:
interval: 5m
chart:
spec:
chart: podinfo
version: 5.0.3
sourceRef:
kind: HelmRepository
name: podinfo
```
2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: helm-repo
spec:
resourceSelectors:
- apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
name: podinfo
placement:
clusterAffinity:
clusterNames:
- member1
- member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: helm-release
spec:
resourceSelectors:
- apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
name: podinfo
placement:
clusterAffinity:
clusterNames:
- member1
- member2
```
The above configuration is for propagating the Flux objects to member1 and member2 clusters.
3. Apply those manifests to the Karmada-apiserver:
```sh
kubectl apply -f ../helm/ --kubeconfig ~/.kube/karmada.config
```
The output is similar to:
```
helmrelease.helm.toolkit.fluxcd.io/podinfo created
helmrepository.source.toolkit.fluxcd.io/podinfo created
propagationpolicy.policy.karmada.io/helm-release created
propagationpolicy.policy.karmada.io/helm-repo created
```
4. Switch to a member cluster and verify:
```sh
helm --kubeconfig ~/.kube/members.config --kube-context member1 list
```
The output is similar to:
```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
podinfo default 1 2022-05-27 01:44:35.24229175 +0000 UTC deployed podinfo-5.0.3 5.0.3
```
Based on Karmada's propagation policy, you can schedule Helm releases to your desired cluster flexibly, just like Kubernetes schedules Pods to the desired node.
## Customize the Helm release for specific clusters
The example above shows how to propagate the same Helm release to multiple clusters in Karmada. Besides, you can use Karmada's OverridePolicy to customize applications for specific clusters.
For example, if you just want to change replicas in member1, you can refer to the overridePolicy below.
1. Define a Karmada `OverridePolicy`:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
name: example-override
namespace: default
spec:
resourceSelectors:
- apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
name: podinfo
overrideRules:
- targetCluster:
clusterNames:
- member1
overriders:
plaintext:
- path: "/spec/values"
operator: add
value:
replicaCount: 2
```
2. Apply the manifest to the Karmada-apiserver:
```sh
kubectl apply -f example-override.yaml --kubeconfig ~/.kube/karmada.config
```
The output is similar to:
```
overridepolicy.policy.karmada.io/example-override configured
```
3. After applying the above policy in the Karmada Control Plane, you will find that replicas in member1 have changed to 2, but those in member2 keep the same.
```sh
kubectl --kubeconfig ~/.kube/members.config --context member1 get po
```
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
podinfo-68979685bc-6wz6s 1/1 Running 0 6m28s
podinfo-68979685bc-dz9f6 1/1 Running 0 7m42s
```
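To confirm that member2 keeps the default replica count, run the same check against member2:
```sh
kubectl --kubeconfig ~/.kube/members.config --context member2 get po
```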
## Kustomize propagation
Kustomize propagation is basically the same as the Helm chart propagation above. You can refer to the guide below.
1. Define a Flux `GitRepository` and a `Kustomization` manifest in the Karmada Control Plane:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
name: podinfo
spec:
interval: 1m
url: https://github.com/stefanprodan/podinfo
ref:
branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
name: podinfo-dev
spec:
interval: 5m
path: "./deploy/overlays/dev/"
prune: true
sourceRef:
kind: GitRepository
name: podinfo
validation: client
timeout: 80s
```
2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: kust-release
spec:
resourceSelectors:
- apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
name: podinfo-dev
placement:
clusterAffinity:
clusterNames:
- member1
- member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: kust-git
spec:
resourceSelectors:
- apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
name: podinfo
placement:
clusterAffinity:
clusterNames:
- member1
- member2
```
3. Apply those YAMLs to the karmada-apiserver:
```sh
kubectl apply -f kust/ --kubeconfig ~/.kube/karmada.config
```
The output is similar to:
```
gitrepository.source.toolkit.fluxcd.io/podinfo created
kustomization.kustomize.toolkit.fluxcd.io/podinfo-dev created
propagationpolicy.policy.karmada.io/kust-git created
propagationpolicy.policy.karmada.io/kust-release created
```
4. Switch to a member cluster and verify:
```sh
kubectl --kubeconfig ~/.kube/members.config --context member1 get pod -n dev
```
The output is similar to:
```
NAME READY STATUS RESTARTS AGE
backend-69c7655cb-rbtrq 1/1 Running 0 15s
cache-bdff5c8dc-mmnbm 1/1 Running 0 15s
frontend-7f98bf6f85-dw4vq 1/1 Running 0 15s
```
## Reference
- https://fluxcd.io
- https://github.com/fluxcd
# Working with Gatekeeper(OPA)
[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is a customizable admission webhook for Kubernetes that enforces policies executed by the Open Policy Agent (OPA), a policy engine for Cloud Native environments hosted by the [Cloud Native Computing Foundation](https://cncf.io/).
This document demonstrates how to use the `Gatekeeper` to manage OPA policies.
## Prerequisites
### Start up Karmada clusters
You just need to clone the Karmada repo and run the following script in the Karmada directory.
```
hack/local-up-karmada.sh
```
## Gatekeeper Installations
In this case, you will use Gatekeeper v3.7.2. Related deployment files are from [here](https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml).
### Install Gatekeeper APIs on Karmada
1. Create the resource objects for Gatekeeper in the Karmada control plane; the content is as follows.
```console
kubectl config use-context karmada-apiserver
```
Deploy namespace: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L1-L9
Deploy Gatekeeper CRDs: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L27-L1999
Deploy Gatekeeper secrets: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L2261-L2267
Deploy webhook config:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
labels:
gatekeeper.sh/system: "yes"
name: gatekeeper-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
    # Change the clientConfig from service type to url type because the webhook configuration and the service are not in the same cluster.
url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/mutate
failurePolicy: Ignore
matchPolicy: Exact
name: mutation.gatekeeper.sh
namespaceSelector:
matchExpressions:
- key: admission.gatekeeper.sh/ignore
operator: DoesNotExist
rules:
- apiGroups:
- '*'
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- '*'
sideEffects: None
timeoutSeconds: 1
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
gatekeeper.sh/system: "yes"
name: gatekeeper-validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
    # Change the clientConfig from service type to url type because the webhook configuration and the service are not in the same cluster.
url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admit
failurePolicy: Ignore
matchPolicy: Exact
name: validation.gatekeeper.sh
namespaceSelector:
matchExpressions:
- key: admission.gatekeeper.sh/ignore
operator: DoesNotExist
rules:
- apiGroups:
- '*'
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- '*'
sideEffects: None
timeoutSeconds: 3
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
    # Change the clientConfig from service type to url type because the webhook configuration and the service are not in the same cluster.
url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admitlabel
failurePolicy: Fail
matchPolicy: Exact
name: check-ignore-label.gatekeeper.sh
rules:
- apiGroups:
- ""
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- namespaces
sideEffects: None
timeoutSeconds: 3
```
You need to change the clientConfig from service type to url type for the multi-cluster deployment.
Also, you need to deploy a dummy pod in the gatekeeper-system namespace in the karmada-apiserver context. When Gatekeeper generates a policy template CRD, it also generates a status object to monitor the status of the policy template, and this status object is bound to the controller Pod through an OwnerReference. Therefore, when the CRD and the controller are not in the same cluster, a dummy Pod needs to be used instead of the controller so that the status object can be generated successfully.
For example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dummy-pod
namespace: gatekeeper-system
spec:
containers:
- name: dummy-pod
image: nginx:latest
imagePullPolicy: Always
```
### Install Gatekeeper components on the host cluster
```console
kubectl config use-context karmada-host
```
Deploy namespace: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L1-L9
Deploy RBAC resources for deployment: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L1999-L2375
Deploy Gatekeeper controllers and secret as kubeconfig:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
control-plane: audit-controller
gatekeeper.sh/operation: audit
gatekeeper.sh/system: "yes"
name: gatekeeper-audit
namespace: gatekeeper-system
spec:
replicas: 1
selector:
matchLabels:
control-plane: audit-controller
gatekeeper.sh/operation: audit
gatekeeper.sh/system: "yes"
template:
metadata:
annotations:
container.seccomp.security.alpha.kubernetes.io/manager: runtime/default
labels:
control-plane: audit-controller
gatekeeper.sh/operation: audit
gatekeeper.sh/system: "yes"
spec:
automountServiceAccountToken: true
containers:
- args:
- --operation=audit
- --operation=status
- --operation=mutation-status
- --logtostderr
- --disable-opa-builtin={http.send}
- --kubeconfig=/etc/kubeconfig
command:
- /manager
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
value: {{POD_NAME}}
image: openpolicyagent/gatekeeper:v3.7.2
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 9090
name: manager
ports:
- containerPort: 8888
name: metrics
protocol: TCP
- containerPort: 9090
name: healthz
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9090
resources:
limits:
cpu: 1000m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
runAsGroup: 999
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp/audit
name: tmp-volume
- mountPath: /etc/kubeconfig
name: kubeconfig
subPath: kubeconfig
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: gatekeeper-admin
terminationGracePeriodSeconds: 60
volumes:
- emptyDir: {}
name: tmp-volume
- name: kubeconfig
secret:
defaultMode: 420
secretName: kubeconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
control-plane: controller-manager
gatekeeper.sh/operation: webhook
gatekeeper.sh/system: "yes"
name: gatekeeper-controller-manager
namespace: gatekeeper-system
spec:
replicas: 3
selector:
matchLabels:
control-plane: controller-manager
gatekeeper.sh/operation: webhook
gatekeeper.sh/system: "yes"
template:
metadata:
annotations:
container.seccomp.security.alpha.kubernetes.io/manager: runtime/default
labels:
control-plane: controller-manager
gatekeeper.sh/operation: webhook
gatekeeper.sh/system: "yes"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: gatekeeper.sh/operation
operator: In
values:
- webhook
topologyKey: kubernetes.io/hostname
weight: 100
automountServiceAccountToken: true
containers:
- args:
- --port=8443
- --logtostderr
- --exempt-namespace=gatekeeper-system
- --operation=webhook
- --operation=mutation-webhook
- --disable-opa-builtin={http.send}
- --kubeconfig=/etc/kubeconfig
command:
- /manager
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
value: {{POD_NAME}}
image: openpolicyagent/gatekeeper:v3.7.2
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 9090
name: manager
ports:
- containerPort: 8443
name: webhook-server
protocol: TCP
- containerPort: 8888
name: metrics
protocol: TCP
- containerPort: 9090
name: healthz
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9090
resources:
limits:
cpu: 1000m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
runAsGroup: 999
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /certs
name: cert
readOnly: true
- mountPath: /etc/kubeconfig
name: kubeconfig
subPath: kubeconfig
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: gatekeeper-admin
terminationGracePeriodSeconds: 60
volumes:
- name: cert
secret:
defaultMode: 420
secretName: gatekeeper-webhook-server-cert
- name: kubeconfig
secret:
defaultMode: 420
secretName: kubeconfig
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
gatekeeper.sh/system: "yes"
name: gatekeeper-controller-manager
namespace: gatekeeper-system
spec:
minAvailable: 1
selector:
matchLabels:
control-plane: controller-manager
gatekeeper.sh/operation: webhook
gatekeeper.sh/system: "yes"
---
apiVersion: v1
stringData:
kubeconfig: |-
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {{ca_crt}}
server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
name: kind-karmada
contexts:
- context:
cluster: kind-karmada
user: kind-karmada
name: karmada
current-context: karmada
kind: Config
preferences: {}
users:
- name: kind-karmada
user:
client-certificate-data: {{client_cer}}
client-key-data: {{client_key}}
kind: Secret
metadata:
name: kubeconfig
namespace: gatekeeper-system
```
You need to fill the name of the dummy pod created in step 1 into `{{POD_NAME}}`, and fill in the secret that represents the kubeconfig pointing to karmada-apiserver.
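A hedged way to obtain the certificate data for the placeholders is to read them from the kubeconfig generated by `hack/local-up-karmada.sh`; the `karmada-apiserver` cluster/user entry names below are assumptions based on the context name used in this guide:
```console
# base64-encoded values to paste into {{ca_crt}}, {{client_cer}} and {{client_key}} respectively
kubectl config view --kubeconfig=$HOME/.kube/karmada.config --raw \
  -o jsonpath='{.clusters[?(@.name=="karmada-apiserver")].cluster.certificate-authority-data}'
kubectl config view --kubeconfig=$HOME/.kube/karmada.config --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-certificate-data}'
kubectl config view --kubeconfig=$HOME/.kube/karmada.config --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-key-data}'
```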
Deploy ResourceQuota: https://github.com/open-policy-agent/gatekeeper/blob/0d239574f8e71908325391d49cb8dd8e4ed6f6fa/deploy/gatekeeper.yaml#L10-L26
### Extra steps
Finally, we need to copy the secret `gatekeeper-webhook-server-cert` from the karmada-apiserver context to the karmada-host context, so that the secret stored in `etcd` and the certificate volume mounted in the controller stay the same.
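A hedged one-liner for the copy (assuming both contexts live in the same kubeconfig and `jq` is available; the metadata cleanup avoids create-time conflicts):
```console
kubectl --context=karmada-apiserver -n gatekeeper-system get secret gatekeeper-webhook-server-cert -o json \
  | jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.managedFields)' \
  | kubectl --context=karmada-host -n gatekeeper-system apply -f -
```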
## Run demo
### Create k8srequiredlabels template
```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
name: k8srequiredlabels
spec:
crd:
spec:
names:
kind: K8sRequiredLabels
validation:
openAPIV3Schema:
type: object
description: Describe K8sRequiredLabels crd parameters
properties:
labels:
type: array
items:
type: string
description: A label string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8srequiredlabels
violation[{"msg": msg, "details": {"missing_labels": missing}}] {
provided := {label | input.review.object.metadata.labels[label]}
required := {label | label := input.parameters.labels[_]}
missing := required - provided
count(missing) > 0
msg := sprintf("you must provide labels: %v", [missing])
}
```
### Create k8srequiredlabels constraint
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
name: ns-must-have-gk
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Namespace"]
parameters:
labels: ["gatekeepers"]
```
### Create a bad namespace
```console
kubectl create ns test
Error from server ([ns-must-have-gk] you must provide labels: {"gatekeepers"}): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeepers"}
```
## Reference
- https://github.com/open-policy-agent/gatekeeper
# Use Istio on Karmada
This document uses an example to demonstrate how to use [Istio](https://istio.io/) on Karmada.
Follow this guide to install the Istio control plane on `karmada-host` (the primary cluster) and configure `member1` and `member2` (the remote clusters) to use the control plane in `karmada-host`. All clusters reside on the network1 network, meaning there is direct connectivity between the pods in both clusters.
![Istio on Karmada](images/istio-on-karmada.png)
## Install Karmada
### Install karmada control plane
Following the steps in [Install karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane) in Quick Start, you can get a Karmada control plane up and running.
## Deploy Istio
***
If you are testing multicluster setup on `kind` you can use [MetalLB](https://metallb.universe.tf/installation/) to make use of `EXTERNAL-IP` for `LoadBalancer` services.
***
### Install istioctl
Please refer to the [istioctl](https://istio.io/latest/docs/setup/getting-started/#download) Installation.
### Prepare CA certificates
Follow the steps in [plug-in-certificates-and-key-into-the-cluster](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster) to configure the Istio CA.
Replace the cluster name `cluster1` with `primary`, and the output will look like the following:
```bash
root@karmada-demo istio-on-karmada# tree certs
certs
├── primary
│   ├── ca-cert.pem
│   ├── ca-key.pem
│   ├── cert-chain.pem
│   └── root-cert.pem
├── root-ca.conf
├── root-cert.csr
├── root-cert.pem
├── root-cert.srl
└── root-key.pem
```
### Install Istio on karmada-apiserver
Export `KUBECONFIG` and switch to `karmada apiserver`:
```
# export KUBECONFIG=$HOME/.kube/karmada.config
# kubectl config use-context karmada-apiserver
```
Create a secret `cacerts` in `istio-system` namespace:
```bash
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=certs/primary/ca-cert.pem \
--from-file=certs/primary/ca-key.pem \
--from-file=certs/primary/root-cert.pem \
--from-file=certs/primary/cert-chain.pem
```
Create a propagation policy for the `cacerts` secret:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: cacerts-propagation
namespace: istio-system
spec:
resourceSelectors:
- apiVersion: v1
kind: Secret
name: cacerts
placement:
clusterAffinity:
clusterNames:
- member1
- member2
EOF
```
Run the following command to install istio CRDs on karmada apiserver:
```bash
cat <<EOF | istioctl install -y --set profile=minimal -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
values:
global:
meshID: mesh1
multiCluster:
clusterName: primary
network: network1
EOF
```
Karmada apiserver will not deploy a real istiod pod, so you should press `ctrl+c` to exit the installation when it stops at `Processing resources for Istiod`.
```bash
✔ Istio core installed
- Processing resources for Istiod.
```
### Install Istio on karmada host
1. Create secret on karmada-host
Karmada host is not a member cluster, so we need to create the `cacerts` secret for `istiod` ourselves.
Export `KUBECONFIG` and switch to `karmada host`:
```
# export KUBECONFIG=$HOME/.kube/karmada.config
# kubectl config use-context karmada-host
```
Create a secret `cacerts` in `istio-system` namespace:
```bash
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=certs/primary/ca-cert.pem \
--from-file=certs/primary/ca-key.pem \
--from-file=certs/primary/root-cert.pem \
--from-file=certs/primary/cert-chain.pem
```
2. Create istio-kubeconfig on karmada-host
```bash
kubectl get secret -nkarmada-system kubeconfig --template={{.data.kubeconfig}} | base64 -d > kind-karmada.yaml
```
```bash
kubectl create secret generic istio-kubeconfig --from-file=config=kind-karmada.yaml -nistio-system
```
3. Install istio control plane
```bash
cat <<EOF | istioctl install -y --set profile=minimal -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
values:
global:
meshID: mesh1
multiCluster:
clusterName: primary
network: network1
EOF
```
4. Expose istiod service
Run the following command to create a service for the `istiod` service:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: istiod-elb
namespace: istio-system
spec:
ports:
- name: https-dns
port: 15012
protocol: TCP
targetPort: 15012
selector:
app: istiod
istio: pilot
sessionAffinity: None
type: LoadBalancer
EOF
```
Export DISCOVERY_ADDRESS:
```bash
export DISCOVERY_ADDRESS=$(kubectl get svc istiod-elb -nistio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# verify
echo $DISCOVERY_ADDRESS
```
### Prepare member1 cluster secret
1. Export `KUBECONFIG` and switch to `karmada member1`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1
```
2. Create istio remote secret for member1:
```bash
istioctl x create-remote-secret --name=member1 > istio-remote-secret-member1.yaml
```
### Prepare member2 cluster secret
1. Export `KUBECONFIG` and switch to `karmada member2`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member2
```
2. Create istio remote secret for member2:
```bash
istioctl x create-remote-secret --name=member2 > istio-remote-secret-member2.yaml
```
### Apply istio remote secret
Export `KUBECONFIG` and switch to `karmada apiserver`:
```
# export KUBECONFIG=$HOME/.kube/karmada.config
# kubectl config use-context karmada-apiserver
```
Apply istio remote secret:
```bash
kubectl apply -f istio-remote-secret-member1.yaml
kubectl apply -f istio-remote-secret-member2.yaml
```
### Install istio remote
1. Install istio remote member1
Export `KUBECONFIG` and switch to `karmada member1`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1
```
```bash
cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
global:
meshID: mesh1
multiCluster:
clusterName: member1
network: network1
remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
```
2. Install istio remote member2
Export `KUBECONFIG` and switch to `karmada member2`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member2
```
```bash
cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
global:
meshID: mesh1
multiCluster:
clusterName: member2
network: network1
remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
```
## Deploy bookinfo application
Export `KUBECONFIG` and switch to `karmada apiserver`:
```
# export KUBECONFIG=$HOME/.kube/karmada.config
# kubectl config use-context karmada-apiserver
```
Create an `istio-demo` namespace:
```bash
kubectl create namespace istio-demo
```
Label the namespace that will host the application with `istio-injection=enabled`:
```bash
kubectl label namespace istio-demo istio-injection=enabled
```
Deploy your application using the `kubectl` command:
```bash
kubectl apply -nistio-demo -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/platform/kube/bookinfo.yaml
```
Run the following command to create default destination rules for the Bookinfo services:
```bash
kubectl apply -nistio-demo -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/networking/destination-rule-all.yaml
```
Run the following command to create virtual service for the Bookinfo services:
```bash
kubectl apply -nistio-demo -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/networking/virtual-service-all-v1.yaml
```
Run the following command to create propagation policy for the Bookinfo services:
```bash
cat <<EOF | kubectl apply -nistio-demo -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: service-propagation
spec:
resourceSelectors:
- apiVersion: v1
kind: Service
name: productpage
- apiVersion: v1
kind: Service
name: details
- apiVersion: v1
kind: Service
name: reviews
- apiVersion: v1
kind: Service
name: ratings
placement:
clusterAffinity:
clusterNames:
- member1
- member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: produtpage-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: productpage-v1
- apiVersion: v1
kind: ServiceAccount
name: bookinfo-productpage
placement:
clusterAffinity:
clusterNames:
- member1
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: details-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: details-v1
- apiVersion: v1
kind: ServiceAccount
name: bookinfo-details
placement:
clusterAffinity:
clusterNames:
- member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: reviews-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: reviews-v1
- apiVersion: apps/v1
kind: Deployment
name: reviews-v2
- apiVersion: apps/v1
kind: Deployment
name: reviews-v3
- apiVersion: v1
kind: ServiceAccount
name: bookinfo-reviews
placement:
clusterAffinity:
clusterNames:
- member1
- member2
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: ratings-propagation
spec:
resourceSelectors:
- apiVersion: apps/v1
kind: Deployment
name: ratings-v1
- apiVersion: v1
kind: ServiceAccount
name: bookinfo-ratings
placement:
clusterAffinity:
clusterNames:
- member2
EOF
```
Deploy `fortio` application using the `kubectl` command:
```bash
kubectl apply -nistio-demo -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/httpbin/sample-client/fortio-deploy.yaml
```
Run the following command to create propagation policy for the `fortio` services:
```bash
cat <<EOF | kubectl apply -nistio-demo -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: fortio-propagation
spec:
resourceSelectors:
- apiVersion: v1
kind: Service
name: fortio
- apiVersion: apps/v1
kind: Deployment
name: fortio-deploy
placement:
clusterAffinity:
clusterNames:
- member1
- member2
EOF
```
Export `KUBECONFIG` and switch to `karmada member1`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1
```
Run the following command to verify `productpage` application installation:
```bash
export FORTIO_POD=`kubectl get po -nistio-demo | grep fortio | awk '{print $1}'`
kubectl exec -it ${FORTIO_POD} -nistio-demo -- fortio load -t 3s productpage:9080/productpage
```
## What's next
Follow the [guide](https://istio.io/latest/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster) to confirm the app is accessible from outside the cluster.
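As a sketch, that usually means applying the Bookinfo gateway sample from the same Istio release on the Karmada control plane, and propagating the resulting `Gateway`/`VirtualService` objects with a PropagationPolicy similar to the ones above:
```bash
kubectl apply -nistio-demo -f https://raw.githubusercontent.com/istio/istio/release-1.12/samples/bookinfo/networking/bookinfo-gateway.yaml
```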
# Working with Istio on non-flat network
This document uses an example to demonstrate how to use [Istio](https://istio.io/) on Karmada when the clusters reside
on different networks.
Follow this guide to install the Istio control plane on `member1` (the primary cluster) and configure `member2` (the
remote cluster) to use the control plane in `member1`. All clusters reside on different networks, meaning there is
no direct connectivity between the pods in the clusters.
![Istio on Karmada - different network](images/istio-on-karmada-different-network.png)
***
The reason for deploying `istiod` on `member1` is that `kiali` needs to be deployed on the same cluster as `istiod`.
If `istiod` and `kiali` are deployed on `karmada-host`, `kiali` will not find the namespaces created by Karmada and
therefore cannot provide the service topology for applications deployed by Karmada. A new solution that deploys
`istiod` on `karmada-host` will be provided later.
***
## Install Karmada
### Install karmada control plane
Following the steps in [Install karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane)
in Quick Start, you can get a Karmada control plane up and running.
## Deploy Istio
***
If you are testing multicluster setup on `kind` you can use [MetalLB](https://metallb.universe.tf/installation/) to make use of `EXTERNAL-IP` for `LoadBalancer` services.
***
### Install istioctl
Please refer to the [istioctl](https://istio.io/latest/docs/setup/getting-started/#download) Installation.
### Prepare CA certificates
Follow the steps in
[plug-in-certificates-and-key-into-the-cluster](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster)
to configure the Istio CA.
Replace the cluster name `cluster1` with `primary`, and the output will look like the following:
```bash
[root@vm1-su-001 istio-1.12.6]# tree certs/
certs/
├── primary
│   ├── ca-cert.pem
│   ├── ca-key.pem
│   ├── cert-chain.pem
│   └── root-cert.pem
├── root-ca.conf
├── root-cert.csr
├── root-cert.pem
├── root-cert.srl
└── root-key.pem
```
### Install Istio on karmada-apiserver
Export `KUBECONFIG` and switch to `karmada apiserver`:
```bash
export KUBECONFIG=$HOME/.kube/karmada.config
kubectl config use-context karmada-apiserver
```
Create a secret `cacerts` in `istio-system` namespace:
```bash
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=certs/primary/ca-cert.pem \
--from-file=certs/primary/ca-key.pem \
--from-file=certs/primary/root-cert.pem \
--from-file=certs/primary/cert-chain.pem
```
Create a propagation policy for `cacerts` secret:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: cacerts-propagation
namespace: istio-system
spec:
resourceSelectors:
- apiVersion: v1
kind: Secret
name: cacerts
placement:
clusterAffinity:
clusterNames:
- member1
- member2
EOF
```
Override namespace `istio-system` label on `member1`:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterOverridePolicy
metadata:
name: istio-system-member1
spec:
resourceSelectors:
- apiVersion: v1
kind: Namespace
name: istio-system
overrideRules:
- targetCluster:
clusterNames:
- member1
overriders:
plaintext:
- path: "/metadata/labels"
operator: add
value:
topology.istio.io/network: network1
EOF
```
Override namespace `istio-system` label on `member2`:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterOverridePolicy
metadata:
name: istio-system-member2
spec:
resourceSelectors:
- apiVersion: v1
kind: Namespace
name: istio-system
overrideRules:
- targetCluster:
clusterNames:
- member2
overriders:
plaintext:
- path: "/metadata/labels"
operator: add
value:
topology.istio.io/network: network2
EOF
```
Run the following command to install istio CRDs on karmada apiserver:
```bash
istioctl manifest generate --set profile=external \
--set values.global.configCluster=true \
--set values.global.externalIstiod=false \
--set values.global.defaultPodDisruptionBudget.enabled=false \
--set values.telemetry.enabled=false | kubectl apply -f -
```
### Install Istiod on member1
1. Install istio control plane
Export `KUBECONFIG` and switch to `member1`:
```bash
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1
```
```bash
cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
values:
global:
meshID: mesh1
multiCluster:
clusterName: member1
network: network1
EOF
```
2. Install the east-west gateway in `member1`
```bash
samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster member1 --network network1 | istioctl install -y -f -
```
3. Expose the control plane and service in `member1`
```bash
kubectl apply -f samples/multicluster/expose-istiod.yaml -n istio-system
kubectl apply -f samples/multicluster/expose-services.yaml -n istio-system
```
### Configure `member2` as a remote cluster
1. Enable API Server access to `member2`
Switch to `member2`:
```bash
kubectl config use-context member2
```
Prepare the `member2` cluster secret:
```bash
istioctl x create-remote-secret --name=member2 > istio-remote-secret-member2.yaml
```
Switch to `member1`:
```bash
kubectl config use-context member1
```
Apply the Istio remote secret:
```bash
kubectl apply -f istio-remote-secret-member2.yaml
```
2. Configure `member2` as a remote cluster
Save the address of `member1`'s east-west gateway:
```bash
export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```
Create a remote configuration on `member2`.
Switch to `member2`:
```bash
kubectl config use-context member2
```
```bash
cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
global:
meshID: mesh1
multiCluster:
clusterName: member2
network: network2
remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
```
3. Install the east-west gateway in `member2`
```bash
samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster member2 --network network2 | istioctl install -y -f -
```
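Optionally, before deploying the sample application, you can check that proxies from both member clusters are connected to the control plane running in `member1`. A quick sanity check (the exact output format varies by Istio version):
```bash
kubectl config use-context member1
# Workloads and gateways from both member1 and member2 should be listed and reported as SYNCED
istioctl proxy-status
```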
### Deploy bookinfo application
See the section [Deploy bookinfo application](./working-with-istio-on-flat-network.md#deploy-bookinfo-application).

View File

@@ -1,316 +0,0 @@
# Working with Kyverno
[Kyverno](https://github.com/kyverno/kyverno), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a policy engine designed for Kubernetes. It can validate, mutate, and generate configurations using admission controls and background scans. Kyverno policies are Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use like kubectl, kustomize, and Git.
This document gives an example to demonstrate how to use the `Kyverno` to manage policy.
## Prerequisites
### Setup Karmada
You just need to clone the Karmada repo and run the following script in the Karmada directory.
```console
hack/local-up-karmada.sh
```
## Kyverno Installations
In this case, we will use Kyverno v1.6.2. Related deployment files are from [here](https://github.com/kyverno/kyverno/blob/main/config/install.yaml).
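For convenience, you may want to download that pinned manifest locally first, since the steps below reference specific line ranges of it. A sketch (the raw URL simply mirrors the blob links below):
```shell
curl -Lo kyverno-install.yaml https://raw.githubusercontent.com/kyverno/kyverno/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml
# e.g. extract the namespace manifest (lines 1-12, per the first link below) for later use
sed -n '1,12p' kyverno-install.yaml > kyverno-namespace.yaml
```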
### Install Kyverno APIs on Karmada
1. Create the resource objects of Kyverno in the Karmada control plane; the content is as follows.
```shell
kubectl config use-context karmada-apiserver
```
Deploy namespace: https://github.com/kyverno/kyverno/blob/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml#L1-L12
Deploy configmap: https://github.com/kyverno/kyverno/blob/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml#L12144-L12176
Deploy Kyverno CRDs: https://github.com/kyverno/kyverno/blob/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml#L12-L11656
### Install Kyverno components on host cluster
1. Create the resource objects of Kyverno in the karmada-host context; the content is as follows.
```shell
kubectl config use-context karmada-host
```
Deploy namespace: https://github.com/kyverno/kyverno/blob/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml#L1-L12
Deploy RBAC resources: https://github.com/kyverno/kyverno/blob/61a1d40e5ea5ff4875a084b6dc3ef1fdcca1ee27/config/install.yaml#L11657-L12143
Deploy Kyverno controllers and service:
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: kyverno
app.kubernetes.io/component: kyverno
app.kubernetes.io/instance: kyverno
app.kubernetes.io/name: kyverno
app.kubernetes.io/part-of: kyverno
app.kubernetes.io/version: latest
name: kyverno-svc
namespace: kyverno
spec:
type: NodePort
ports:
- name: https
port: 443
targetPort: https
nodePort: {{nodePort}}
selector:
app: kyverno
app.kubernetes.io/name: kyverno
---
apiVersion: v1
kind: Service
metadata:
labels:
app: kyverno
app.kubernetes.io/component: kyverno
app.kubernetes.io/instance: kyverno
app.kubernetes.io/name: kyverno
app.kubernetes.io/part-of: kyverno
app.kubernetes.io/version: latest
name: kyverno-svc-metrics
namespace: kyverno
spec:
ports:
- name: metrics-port
port: 8000
targetPort: metrics-port
selector:
app: kyverno
app.kubernetes.io/name: kyverno
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kyverno
app.kubernetes.io/component: kyverno
app.kubernetes.io/instance: kyverno
app.kubernetes.io/name: kyverno
app.kubernetes.io/part-of: kyverno
app.kubernetes.io/version: latest
name: kyverno
namespace: kyverno
spec:
replicas: 1
selector:
matchLabels:
app: kyverno
app.kubernetes.io/name: kyverno
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 40%
type: RollingUpdate
template:
metadata:
labels:
app: kyverno
app.kubernetes.io/component: kyverno
app.kubernetes.io/instance: kyverno
app.kubernetes.io/name: kyverno
app.kubernetes.io/part-of: kyverno
app.kubernetes.io/version: latest
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- kyverno
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- args:
- --filterK8sResources=[Event,*,*][*,kube-system,*][*,kube-public,*][*,kube-node-lease,*][Node,*,*][APIService,*,*][TokenReview,*,*][SubjectAccessReview,*,*][*,kyverno,kyverno*][Binding,*,*][ReplicaSet,*,*][ReportChangeRequest,*,*][ClusterReportChangeRequest,*,*][PolicyReport,*,*][ClusterPolicyReport,*,*]
- -v=2
- --autogenInternals=false
- --kubeconfig=/etc/kubeconfig
- --serverIP={{nodeIP}}:{{nodePort}}
env:
- name: INIT_CONFIG
value: kyverno
- name: METRICS_CONFIG
value: kyverno-metrics
- name: KYVERNO_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KYVERNO_SVC
value: kyverno-svc
- name: TUF_ROOT
value: /.sigstore
image: ghcr.io/kyverno/kyverno:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 2
httpGet:
path: /health/liveness
port: 9443
scheme: HTTPS
initialDelaySeconds: 15
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
name: kyverno
ports:
- containerPort: 9443
name: https
protocol: TCP
- containerPort: 8000
name: metrics-port
protocol: TCP
readinessProbe:
failureThreshold: 4
httpGet:
path: /health/readiness
port: 9443
scheme: HTTPS
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: 384Mi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /.sigstore
name: sigstore
- mountPath: /etc/kubeconfig
name: kubeconfig
subPath: kubeconfig
initContainers:
- env:
- name: METRICS_CONFIG
value: kyverno-metrics
- name: KYVERNO_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: ghcr.io/kyverno/kyvernopre:latest
imagePullPolicy: Always
name: kyverno-pre
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 10m
memory: 64Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
securityContext:
runAsNonRoot: true
serviceAccountName: kyverno-service-account
volumes:
- emptyDir: {}
name: sigstore
- name: kubeconfig
secret:
defaultMode: 420
secretName: kubeconfig
---
apiVersion: v1
stringData:
kubeconfig: |-
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {{ca_crt}}
server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
name: kind-karmada
contexts:
- context:
cluster: kind-karmada
user: kind-karmada
name: karmada
current-context: karmada
kind: Config
preferences: {}
users:
- name: kind-karmada
user:
client-certificate-data: {{client_cer}}
client-key-data: {{client_key}}
kind: Secret
metadata:
name: kubeconfig
namespace: kyverno
```
For a multi-cluster deployment, we need to set the `--serverIP` flag, which is the address of the webhook server. You therefore need to ensure that the nodes in the Karmada control plane can reach the nodes in the karmada-host cluster, and expose the Kyverno controller pods to the control plane, for example with the `NodePort` service above. Then fill in the secret that holds the kubeconfig pointing to the karmada-apiserver, i.e. the **ca_crt, client_cer and client_key** placeholders above.
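As a sketch of filling in those placeholders, assuming the local-up-karmada setup where the control-plane kubeconfig is `$HOME/.kube/karmada.config` and the cluster/user entries are named `karmada-apiserver` (check `kubectl config view` if yours differ), the base64-encoded values can be extracted like this:
```shell
KARMADA_KUBECONFIG=$HOME/.kube/karmada.config
# Base64-encoded values to substitute for {{ca_crt}}, {{client_cer}} and {{client_key}}
ca_crt=$(kubectl config view --kubeconfig "$KARMADA_KUBECONFIG" --raw \
  -o jsonpath='{.clusters[?(@.name=="karmada-apiserver")].cluster.certificate-authority-data}')
client_cer=$(kubectl config view --kubeconfig "$KARMADA_KUBECONFIG" --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-certificate-data}')
client_key=$(kubectl config view --kubeconfig "$KARMADA_KUBECONFIG" --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-key-data}')
```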
## Run demo
### Create require-labels ClusterPolicy
ClusterPolicy is a CRD that `Kyverno` offers to support different kinds of rules. Here is an example ClusterPolicy which requires that every Pod is created with the `app.kubernetes.io/name` label.
```shell
kubectl config use-context karmada-apiserver
```
```console
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-labels
spec:
validationFailureAction: enforce
rules:
- name: check-for-labels
match:
any:
- resources:
kinds:
- Pod
validate:
message: "label 'app.kubernetes.io/name' is required"
pattern:
metadata:
labels:
app.kubernetes.io/name: "?*"
EOF
```
### Create a bad deployment without labels
```console
kubectl create deployment nginx --image=nginx
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request
```
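Conversely, a Pod that carries the required label should pass admission. A quick sanity check (the Pod name and label value here are arbitrary):
```console
kubectl run nginx-labeled --image=nginx --labels=app.kubernetes.io/name=nginx
```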
## Reference
- https://github.com/kyverno/kyverno

View File

@@ -1,354 +0,0 @@
# Use Prometheus to monitor Karmada member clusters
[Prometheus](https://github.com/prometheus/prometheus), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a system and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.
This document gives an example to demonstrate how to use `Prometheus` to monitor Karmada member clusters.
## Start up Karmada clusters
You just need to clone the Karmada repo and run the following script in the Karmada directory.
```shell
hack/local-up-karmada.sh
```
## Start Prometheus
1. Create the resource objects of Prometheus; the content is as follows.
```
apiVersion: v1
kind: Namespace
metadata:
name: monitor
labels:
name: monitor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: monitor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: monitor
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitor
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__,__meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module: [http_2xx]
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: kube-state-metrics
static_configs:
- targets: ['kube-state-metrics.monitor.svc.cluster.local:8080']
---
kind: Service
apiVersion: v1
metadata:
labels:
app: prometheus
name: prometheus
namespace: monitor
spec:
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 30003
selector:
app: prometheus
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
name: prometheus-deployment
name: prometheus
namespace: monitor
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus
imagePullPolicy: IfNotPresent
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/home/prometheus"
- "--storage.tsdb.retention=168h"
- "--web.enable-lifecycle"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/home/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 3180Mi
serviceAccountName: prometheus
securityContext:
runAsUser: 0
volumes:
- name: data
hostPath:
path: "/data/prometheus/data"
- name: config-volume
configMap:
name: prometheus-config
```
2. Run the commands below to apply the Karmada PropagationPolicy and ClusterPropagationPolicy.
```
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: prometheus-propagation
namespace: monitor
spec:
resourceSelectors:
- apiVersion: v1
kind: Namespace
name: monitor
- apiVersion: v1
kind: ServiceAccount
name: prometheus
namespace: monitor
- apiVersion: v1
kind: ConfigMap
name: prometheus-config
namespace: monitor
- apiVersion: v1
kind: Service
name: prometheus
namespace: monitor
- apiVersion: apps/v1
kind: Deployment
name: prometheus
namespace: monitor
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
EOF
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
name: prometheusrbac-propagation
spec:
resourceSelectors:
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
name: prometheus
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
name: prometheus
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
EOF
```
3. Use any node IP of a member cluster and the port number (default 30003) to access the Prometheus monitoring page of that member cluster.
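For example, with the kind-based member clusters created by `local-up-karmada.sh`, a node IP can be read from the cluster itself. A sketch, assuming your `KUBECONFIG` points at the member clusters' config and the context is named `member1`:
```shell
MEMBER1_NODE_IP=$(kubectl --context member1 get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
echo "Prometheus UI: http://${MEMBER1_NODE_IP}:30003"
```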
## Reference
- https://github.com/prometheus/prometheus
- https://prometheus.io

View File

@@ -1,85 +0,0 @@
# Use Submariner to connect the network between Karmada member clusters
This document demonstrates how to use the `Submariner` to connect the network between member clusters.
[Submariner](https://github.com/submariner-io/submariner) flattens the networks between the connected clusters, and enables IP reachability between Pods and Services.
## Install Karmada
### Install Karmada control plane
Following the steps [Install Karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane) in the Quick Start guide, you can set up the Karmada control plane.
### Join member cluster
In the following steps, we are going to create a member cluster and then join the cluster to Karmada control plane.
1. Create member cluster
We are going to create a cluster named `cluster1` and we want the KUBECONFIG file in `$HOME/.kube/cluster1.config`. Run the following command:
```shell
hack/create-cluster.sh cluster1 $HOME/.kube/cluster1.config
```
This will create a cluster using kind.
2. Join member cluster to Karmada control plane
Export `KUBECONFIG` and switch to `karmada apiserver`:
```shell
export KUBECONFIG=$HOME/.kube/karmada.config
kubectl config use-context karmada-apiserver
```
Then, install `karmadactl` command and join the member cluster:
```shell
go install github.com/karmada-io/karmada/cmd/karmadactl
karmadactl join cluster1 --cluster-kubeconfig=$HOME/.kube/cluster1.config
```
In addition to the original member clusters, ensure that at least two member clusters are joined to Karmada.
In this example, we have joined two member clusters to Karmada:
```console
# kubectl get clusters
NAME VERSION MODE READY AGE
cluster1 v1.21.1 Push True 16s
cluster2 v1.21.1 Push True 5s
...
```
## Deploy Submariner
We are going to deploy the `Submariner` components on the `host cluster` and the `member clusters` using the `subctl` CLI, as it is the recommended deployment method according to the [Submariner official documentation](https://github.com/submariner-io/submariner/tree/b4625514061c1d85c10432a78ca0ad46e679367a#installation).
`Submariner` uses a central Broker component to facilitate the exchange of metadata information between the Gateway Engines deployed in participating clusters. The Broker must be deployed on a single Kubernetes cluster, and this cluster's API server must be reachable by all Kubernetes clusters connected by Submariner; therefore, we deploy it on the karmada-host cluster.
### Install subctl
Please refer to the [subctl installation guide](https://submariner.io/operations/deployment/subctl/).
### Use karmada-host as Broker
```shell
subctl deploy-broker --kubeconfig /root/.kube/karmada.config --kubecontext karmada-host
```
### Join cluster1 and cluster2 to the Broker
```shell
subctl join --kubeconfig /root/.kube/cluster1.config broker-info.subm --natt=false
```
```shell
subctl join --kubeconfig /root/.kube/cluster2.config broker-info.subm --natt=false
```
## Connectivity test
Please refer to the [Multi-cluster Service Discovery](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md).
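You can also confirm that the gateway tunnels are established with `subctl`. A quick check (flags may differ slightly between subctl versions):
```shell
# Show gateway-to-gateway connections from cluster1's point of view
subctl show connections --kubeconfig /root/.kube/cluster1.config
```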

View File

@@ -1,257 +0,0 @@
# Integrate Velero to back up and restore Karmada resources
[Velero](https://github.com/vmware-tanzu/velero) gives you tools to back up and restore
your Kubernetes cluster resources and persistent volumes. You can run Velero with a public
cloud platform or on-premises.
Velero lets you:
- Take backups of your cluster and restore in case of loss.
- Migrate cluster resources to other clusters.
- Replicate your production cluster to development and testing clusters.
This document gives an example to demonstrate how to use `Velero` to back up and restore Kubernetes cluster resources
and persistent volumes. The following example backs up resources in cluster `member1` and then restores them to cluster `member2`.
## Start up Karmada clusters
You just need to clone the Karmada repo and run the following script in the Karmada directory.
```shell
hack/local-up-karmada.sh
```
Then run the command below to switch to the member cluster `member1`.
```shell
export KUBECONFIG=/root/.kube/members.config
kubectl config use-context member1
```
## Install MinIO
Velero uses Object Storage Services from different cloud providers to support backup and snapshot operations.
For simplicity, this example uses an object storage service that runs locally on the Kubernetes cluster.
Download the binary from the official site:
```shell
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
```
Run the below command to set `MinIO` username and password:
```shell
export MINIO_ROOT_USER=minio
export MINIO_ROOT_PASSWORD=minio123
```
Run this command to start `MinIO`:
```shell
./minio server /data --console-address="0.0.0.0:20001" --address="0.0.0.0:9000"
```
Replace `/data` with the path to the drive or directory where you want `MinIO` to store data. Now we can open
`http://{SERVER_EXTERNAL_IP}:20001` in a browser to access the `MinIO` console UI, and `Velero` can use
`http://{SERVER_EXTERNAL_IP}:9000` to connect to `MinIO`. These two settings will make the follow-up work easier and more convenient.
Please visit the `MinIO` console to create the region `minio` and the bucket `velero`; these will be used by `Velero`.
For more details about how to install `MinIO`, please run `minio server --help`, or visit the
[MinIO GitHub repo](https://github.com/minio/minio).
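Alternatively, the `velero` bucket can be created from the command line with the MinIO client `mc`. This is only a sketch; it assumes `mc` is installed and uses the server address and credentials configured above:
```shell
# Register the local MinIO server and create the bucket used by Velero
mc alias set myminio http://{SERVER_EXTERNAL_IP}:9000 minio minio123
mc mb myminio/velero
```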
## Install Velero
Velero consists of two components:
- ### A command-line client that runs locally.
1. Download the [release](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform
```shell
wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz
```
2. Extract the tarball:
```shell
tar -zxvf velero-v1.7.0-linux-amd64.tar.gz
```
3. Move the extracted velero binary to a directory in your `$PATH` (`/usr/local/bin` for most users).
```shell
cp velero-v1.7.0-linux-amd64/velero /usr/local/bin/
```
- ### A server that runs on your cluster
We will use `velero install` to set up server components.
For more details about how to use `MinIO` and `Velero` to back up resources, please refer to: https://velero.io/docs/v1.7/contributions/minio/
1. Create a Velero-specific credentials file (credentials-velero) in your local directory:
```shell
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
These two values should be the same as the `MinIO` username and password that we set when installing `MinIO`.
2. Start the server.
We need to install `Velero` on both `member1` and `member2`, so run the command below against both clusters;
this will start the Velero server. Use `kubectl config use-context member1` and `kubectl config use-context member2`
to switch between the member clusters `member1` and `member2`.
```shell
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://{SERVER_EXTERNAL_IP}:9000
```
Replace `{SERVER_EXTERNAL_IP}` with your own server external IP.
3. Deploy the nginx application to cluster `member1`:
Run the below command in the Karmada directory.
```shell
kubectl apply -f samples/nginx/deployment.yaml
```
Then you will find that nginx has been deployed successfully.
```shell
# kubectl get deployment.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 17s
```
### Back up and restore Kubernetes resources independently
Create a backup in `member1`:
```shell
velero backup create nginx-backup --selector app=nginx
```
Restore the backup in `member2`.
Run this command to switch to `member2`:
```shell
export KUBECONFIG=/root/.kube/members.config
kubectl config use-context member2
```
In `member2`, we can also get the backup that we created in `member1`:
```shell
# velero backup get
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
nginx-backup Completed 0 0 2021-12-10 15:16:46 +0800 CST 29d default app=nginx
```
Restore `member1` resources to `member2`:
```shell
# velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20211210151807" submitted successfully.
```
Watch the restore result; you'll find that the status is `Completed`.
```shell
# velero restore get
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
nginx-backup-20211210151807 nginx-backup Completed 2021-12-10 15:18:07 +0800 CST 2021-12-10 15:18:07 +0800 CST 0 0 2021-12-10 15:18:07 +0800 CST <none>
```
Then you will find that the nginx deployment has been restored successfully.
```shell
# kubectl get deployment.apps/nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 21s
```
### Back up and restore Kubernetes resources through Velero combined with Karmada
In the Karmada control plane, we need to install the Velero CRDs but do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances. Based on the Work API, they will be encapsulated as Work objects, delivered to member clusters, and finally reconciled by the Velero controllers in the member clusters.
Create the Velero CRDs in the Karmada control plane.
The remote Velero CRD directory is: `https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds/`
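One way to apply them, sketched here, is to clone that repo and apply the CRD directory against the karmada-apiserver context (the local path mirrors the URL above; the kubeconfig path assumes the local-up-karmada setup):
```shell
export KUBECONFIG=/root/.kube/karmada.config
kubectl config use-context karmada-apiserver
git clone https://github.com/vmware-tanzu/helm-charts.git
kubectl apply -f helm-charts/charts/velero/crds/
```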
Create a backup on the `karmada-apiserver` and distribute it to the `member1` cluster through a PropagationPolicy:
```shell
# create backup policy
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
name: member1-default-backup
namespace: velero
spec:
defaultVolumesToRestic: false
includedNamespaces:
- default
storageLocation: default
EOF
# create PropagationPolicy
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: member1-backup
namespace: velero
spec:
resourceSelectors:
- apiVersion: velero.io/v1
kind: Backup
placement:
clusterAffinity:
clusterNames:
- member1
EOF
```
Create a restore on the `karmada-apiserver` and distribute it to the `member2` cluster through a PropagationPolicy:
```shell
#create restore policy
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
name: member1-default-restore
namespace: velero
spec:
backupName: member1-default-backup
excludedResources:
- nodes
- events
- events.events.k8s.io
- backups.velero.io
- restores.velero.io
- resticrepositories.velero.io
hooks: {}
includedNamespaces:
- 'default'
EOF
#create PropagationPolicy
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
name: member2-default-restore-policy
namespace: velero
spec:
resourceSelectors:
- apiVersion: velero.io/v1
kind: Restore
placement:
clusterAffinity:
clusterNames:
- member2
EOF
```
Then you will find that the nginx deployment has been restored on `member2` successfully.
```shell
# kubectl get deployment.apps/nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 10s
```
## Reference
The above introductions to `Velero` and `MinIO` are only a summary of the official websites and repos. For more details, please refer to:
- Velero: https://velero.io/
- MinIO: https://min.io/