diff --git a/docs/CHANGELOG/CHANGELOG-1.0.md b/docs/CHANGELOG/CHANGELOG-1.0.md
index c21696998..9fa343a37 100644
--- a/docs/CHANGELOG/CHANGELOG-1.0.md
+++ b/docs/CHANGELOG/CHANGELOG-1.0.md
@@ -22,13 +22,13 @@ kubectl karmada promote deployment nginx -n default -c cluster1

Benefiting from the Kubernetes native API support, Karmada can easily integrate with the single-cluster ecosystem for multi-cluster, multi-cloud purposes. The following components have been verified by the Karmada community:
-- argo-cd: refer to [working with argo-cd](../working-with-argocd.md)
+- argo-cd: refer to [working with argo-cd](https://github.com/karmada-io/website/blob/main/docs/userguide/cicd/working-with-argocd.md)
- Flux: refer to [propagating helm charts with flux](https://github.com/karmada-io/karmada/issues/861#issuecomment-998540302)
-- Istio: refer to [working with Istio](../working-with-istio-on-flat-network.md)
-- Filebeat: refer to [working with Filebeat](../working-with-filebeat.md)
-- Submariner: refer to [working with Submariner](../working-with-submariner.md)
-- Velero: refer to [working with Velero](../working-with-velero.md)
-- Prometheus: refer to [working with Prometheus](../working-with-prometheus.md)
+- Istio: refer to [working with Istio](https://github.com/karmada-io/website/blob/main/docs/userguide/service/working-with-istio-on-flat-network.md)
+- Filebeat: refer to [working with Filebeat](https://github.com/karmada-io/website/blob/main/docs/administrator/monitoring/working-with-filebeat.md)
+- Submariner: refer to [working with Submariner](https://github.com/karmada-io/website/blob/main/docs/userguide/network/working-with-submariner.md)
+- Velero: refer to [working with Velero](https://github.com/karmada-io/website/blob/main/docs/administrator/backup/working-with-velero.md)
+- Prometheus: refer to [working with Prometheus](https://github.com/karmada-io/website/blob/main/docs/administrator/monitoring/working-with-prometheus.md)

## OverridePolicy Improvements

@@ -38,7 +38,7 @@ able to define override policies with a single policy for specified workloads.

## Karmada Installation Improvements

Introduced `init` command to `Karmada CLI`. Users are now able to install Karmada with a single command.
-Please refer to [Installing Karmada](../installation/installation.md) for more details.
+Please refer to [Installing Karmada](https://github.com/karmada-io/website/blob/main/docs/installation/installation.md) for more details.

## Configuring Karmada Controllers

diff --git a/docs/CHANGELOG/CHANGELOG-1.1.md b/docs/CHANGELOG/CHANGELOG-1.1.md
index 64d4840cd..f72be5a93 100644
--- a/docs/CHANGELOG/CHANGELOG-1.1.md
+++ b/docs/CHANGELOG/CHANGELOG-1.1.md
@@ -42,7 +42,7 @@ Introduced `AggregateStatus` support for the `Resource Interpreter Webhook` fram

Introduced `InterpreterOperationInterpretDependency` support for the `Resource Interpreter Webhook` framework, which enables propagating a workload's dependencies automatically.

-Refer to [Customizing Resource Interpreter](../userguide/customizing-resource-interpreter.md) for more details.
+Refer to [Customizing Resource Interpreter](https://github.com/karmada-io/website/blob/main/docs/userguide/globalview/customizing-resource-interpreter.md) for more details.
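For orientation, the following is a minimal sketch of how a webhook is registered with this framework. The field names follow the `config.karmada.io/v1alpha1` API as described in the guide linked above; the `Workload` kind, the `workload.example.io` API group, and the service URL are illustrative placeholders, not part of this release note.

```yaml
# A sketch of a ResourceInterpreterWebhookConfiguration that asks Karmada to
# call an external webhook for the InterpretDependency operation of a custom
# Workload resource, so that its dependencies (e.g. referenced ConfigMaps)
# can be propagated automatically.
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
  name: examples
webhooks:
  - name: workloads.example.com
    rules:
      - operations: ["InterpretDependency"]
        apiGroups: ["workload.example.io"]
        apiVersions: ["v1alpha1"]
        kinds: ["Workload"]
    clientConfig:
      # Placeholder endpoint and CA bundle; must match your own deployment.
      url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
      caBundle: <base64-encoded-CA-bundle>
    interpreterContextVersions: ["v1alpha1"]
    timeoutSeconds: 3
```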
# Other Notable Changes

diff --git a/docs/CHANGELOG/CHANGELOG-1.2.md b/docs/CHANGELOG/CHANGELOG-1.2.md
index ef42f3349..261610f7e 100644
--- a/docs/CHANGELOG/CHANGELOG-1.2.md
+++ b/docs/CHANGELOG/CHANGELOG-1.2.md
@@ -20,7 +20,7 @@ A new component `karmada-descheduler` was introduced, for rebalancing the schedu

One example use case is: it helps evict pending replicas (Pods) from resource-starved clusters so that `karmada-scheduler` can "reschedule" these replicas (Pods) to a cluster with sufficient resources.

-For more details please refer to [Descheduler user guide.](../../docs/descheduler.md)
+For more details please refer to the [Descheduler user guide](https://github.com/karmada-io/website/blob/main/docs/userguide/scheduling/descheduler.md).

##### 2. Multi region HA support

@@ -101,15 +101,15 @@ Introduced `InterpretStatus` for the `Resource Interpreter Webhook` framework, w

Karmada can thereby learn how to collect status for your resources, especially custom resources. For example, a custom resource may have many status fields, and Karmada can collect only those you want.

-Refer to [Customizing Resource Interpreter](../../docs/userguide/customizing-resource-interpreter.md) for more details.
+Refer to [Customizing Resource Interpreter](https://github.com/karmada-io/website/blob/main/docs/userguide/globalview/customizing-resource-interpreter.md) for more details.

#### Integrating verification with the ecosystem

Benefiting from the Kubernetes native APIs, Karmada can easily integrate with the Kubernetes ecosystem. The following components are verified by the Karmada community:

-- `Kyverno`: policy engine. Refer to [working with kyverno](../../docs/working-with-kyverno.md) for more details.
-- `Gatekeeper`: another policy engine. Refer to [working with gatekeeper](../../docs/working-with-gatekeeper.md) for more details.
-- `fluxcd`: GitOps tooling for helm chart. Refer to [working with fluxcd](../../docs/working-with-flux.md) for more details.
+- `Kyverno`: policy engine. Refer to [working with kyverno](https://github.com/karmada-io/website/blob/main/docs/userguide/security-governance/working-with-kyverno.md) for more details.
+- `Gatekeeper`: another policy engine. Refer to [working with gatekeeper](https://github.com/karmada-io/website/blob/main/docs/userguide/security-governance/working-with-gatekeeper.md) for more details.
+- `fluxcd`: GitOps tooling for Helm charts. Refer to [working with fluxcd](https://github.com/karmada-io/website/blob/main/docs/userguide/cicd/working-with-flux.md) for more details.

### Other Notable Changes

diff --git a/docs/README.md b/docs/README.md
index 38d859e22..edb96602f 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,44 +1 @@
-# Karmada
-
-## Overview
-
-## Quick Start
-
-## Installation
-Refer to [Installing Karmada](./installation/installation.md).
-
-## Concepts
-
-## User Guide
-
-- [Cluster Registration](./userguide/cluster-registration.md)
-- [Resource Propagating](./userguide/resource-propagating.md)
-- [Cluster Failover](./userguide/failover.md)
-- [Aggregated Kubernetes API Endpoint](./userguide/aggregated-api-endpoint.md)
-- [Customizing Resource Interpreter](./userguide/customizing-resource-interpreter.md)
-- [Configuring Controllers](./userguide/configure-controllers.md)
-
-## Best Practices
-
-## Adoptions
-Use cases in production.
- -- Karmada at [VIPKID](https://www.vipkid.com/) - * [English](./adoptions/vipkid-en.md) - * [中文](./adoptions/vipkid-zh.md) - -## Developer Guide - -## Contributors - -- [GitHub workflow](./contributors/guide/github-workflow.md) -- [Cherry Pick Overview](./contributors/devel/cherry-picks.md) - -## Reference - -## Troubleshooting -Refer to [Troubleshooting](./troubleshooting.md) - -## Frequently Asked Questions - -Refer to [FAQ](./frequently-asked-questions.md). \ No newline at end of file +Karmada documentation is all hosted on [karmada-io/website](https://github.com/karmada-io/website). \ No newline at end of file diff --git a/docs/adoptions/vipkid-en.md b/docs/adoptions/vipkid-en.md deleted file mode 100644 index 08c09c74b..000000000 --- a/docs/adoptions/vipkid-en.md +++ /dev/null @@ -1,146 +0,0 @@ - - -**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - -- [VIPKID: Building a PaaS Platform with Karmada to Run Containers](#vipkid-building-a-paas-platform-with-karmada-to-run-containers) - - [Background](#background) - - [Born Multi-Cloud and Cross-Region](#born-multi-cloud-and-cross-region) - - [Multi-Cluster Policy](#multi-cluster-policy) - - [Cluster Disaster Recovery](#cluster-disaster-recovery) - - [Challenges and Pain Points](#challenges-and-pain-points) - - [Running the Same Application in Different Clusters](#running-the-same-application-in-different-clusters) - - [Quickly Migrating Applications upon Faults](#quickly-migrating-applications-upon-faults) - - [Why Karmada](#why-karmada) - - [Any Solutions Available?](#any-solutions-available) - - [Karmada, the Solution of Choice](#karmada-the-solution-of-choice) - - [Karmada at VIPKID](#karmada-at-vipkid) - - [Containerization Based on Karmada](#containerization-based-on-karmada) - - [Benefits](#benefits) - - [Gains](#gains) - - - -# VIPKID: Building a PaaS Platform with Karmada to Run Containers - -Author: Ci Yiheng, Backend R&D Expert, VIPKID - -## Background - -VIPKID is an online English education platform with more than 80,000 teachers and 1 million trainees. -It has delivered 150 million training sessions across countries and regions. To provide better services, -VIPKID deploys applications by region and close to teachers and trainees. Therefore, -VIPKID purchased dozens of clusters from multiple cloud providers around the world to build its internal infrastructure. - -## Born Multi-Cloud and Cross-Region - -VIPKID provides services internationally. Native speakers can be both teaching students in China and studying with Chinese teachers. -To provide optimal online class experience, VIPKID sets up a low-latency network and deploys computing services close to teachers and trainees separately. -Such deployment depends on resources from multiple public cloud vendors. Managing multi-cloud resources has long become a part of VIPKID's IaaS operations. - -### Multi-Cluster Policy - -We first tried the single-cluster mode to containerize our platform, simple and low-cost. We dropped it after evaluating the network quality and infrastructure (network and storage) solutions across clouds and regions, and our project period. There are two major reasons: -1) Network latency and stability between clouds cannot be guaranteed. -2) Different vendors have different solutions for container networking and storage. - Costs would be high if we wanted to resolve these problems. Finally, we decided to configure Kubernetes clusters by cloud vendor and region. That's why we have so many clusters. 
- -### Cluster Disaster Recovery - -DR(Disaster Recovery) becomes easier for containers than VMs. Kubernetes provides DR solutions for pods and nodes, but not single clusters. Thanks to the microservice reconstruction, we can quickly create a cluster or scale an existing one to transfer computing services. - -## Challenges and Pain Points - -### Running the Same Application in Different Clusters - -During deployment, we found that the workloads of the same application vary greatly in different clusters in terms of images, startup parameters (configurations), and release versions. In the early stage, we wanted that our developers can directly manage applications on our own PaaS platform. However, the increasing customization made it more and more difficult to abstract the differences. - -We had to turn to our O&M team, but they also failed in some complex scenarios. This is not DevOps. It does not reduce costs or increase efficiency. - -### Quickly Migrating Applications upon Faults - -Fault migration can be focused on applications or clusters. The application-centric approach focuses on the -self-healing of key applications and the overall load in multi-cluster mode. -The cluster-centric approach focuses on the disasters (such as network faults) that may impact all clusters or on the -delivery requirements when creating new clusters. You need to set different policies for these approaches. - -**Application-centric: Dynamic Migration** - -Flexibly deploying an application in multiple clusters can ensure its stability. For example, if an instance in a cluster is faulty and cannot be quickly recovered, a new instance needs to be created automatically in another cluster of the same vendor or region based on the preset policy. - -**Cluster-centric: Quick Cluster Startup** - -Commonly, we start a new cluster to replace the unavailable one or to deliver services which depend on a specific cloud vendor or region. It would be best if clusters can be started as fast as pods. - -## Why Karmada - -### Any Solutions Available? - -Your service systems may evolve fast and draw clear lines for modules. To address the pain points, you need to, to some extent, abstract, decouple and reconstruct your systems. - -For us, service requirements were deeply coupled with cluster resources. We wanted to decouple them via multi-cluster management. Specifically, use the self-developed platform to manage the application lifecycle, and use a system to manage operation instructions on cluster resources. - -We probed into the open source communities to find products that support multi-cluster management. However, most products either serve as a platform like ours or manage resources by cluster. - -We wanted to manage multiple Kubernetes clusters like one single, large cluster. In this way, a workload can be regarded as an independent application (or a version of an application) instead of a replica of an application in multiple clusters. - -We also wanted to lower the access costs as much as possible. We surveyed and evaluated many solutions in the communities and decided on Karmada. - -### Karmada, the Solution of Choice - -Karmada has the following advantages: -1) Karmada allows us to manage multiple clusters like one single cluster and manage resources in an application-centric approach. In addition, almost all configuration differences can be independently declared through the Override policies in Karmada, simple, intuitive, and easy to manage. -2) Karmada uses native Kubernetes APIs. 
We need no adaption and the access cost is low. Karmada also manifests configurations through CRDs. It dynamically turns distribution and differentiated configurations into Propagation and Override policies and delivers them to the Karmada control plane. -3) Karmada sits under the open governance of a neutral community. The community welcomes open discussions on requirements and ideas and we got technically improved while contributing to the community. - -## Karmada at VIPKID - -Our platform caters to all container-based deployments, covering stateful or stateless applications, hybrid deployment of online and offline jobs, AI, and big data services. This platform does not rely on any public cloud. Therefore, we cannot use any encapsulated products of cloud vendors. - -We use the internal IaaS platform to create and scale out clusters, configure VPCs, subnets, and security groups of different vendors. In this way, vendor differences become the least of worries for our PaaS platform. - -In addition, we provide GitOps for developers to manage system applications and components. This is more user-friendly and efficient for skilled developers. - -### Containerization Based on Karmada - -At the beginning, we designed a component (cluster aggregation API) in the platform to interact with Kubernetes clusters. We retained the native Kubernetes APIs and added some cluster-related information. -However, there were complex problems during the implementation. For example, as the PaaS system needed to render declarations of different resources to multiple clusters, the applications we maintained in different clusters were irrelevant. We made much effort to solve these problems, even after CRDs were introduced. The system still needed to keep track of the details of each cluster, which goes against what cluster aggregation API is supposed to do. -When there are a large number of clusters that go online and offline frequently, we need to change the configurations in batches for applications in the GitOps model to ensure normal cluster running. However, GitOps did not cope with the increasing complexity as expected. - -The following figure shows the differences before and after we used Karmada. - -![Karmada在VIPKID](../images/adoptions-vipkid-architecture-en.png) - - -**After Karmada is introduced, the multi-cluster aggregation layer is truly unified.** We can manage resources by application on the Karmada control plane. We only need to interact with Karmada, not the clusters, which simplifies containerized application management and enables our PaaS platform to fully focus on service requirements. -With Karmada integrated into GitOps, system components can be easily released and upgraded in each cluster, exponentially more efficient than before. - -## Benefits - -Managing Kubernetes resources by application simplifies the platform and greatly improves utilization. Here are the improvements brought by Karmada. - -**1) Higher deployment efficiency** - -Before then, we needed to send deployment instructions to each cluster and monitor the deployment status, which required us to continuously check resources and handle exceptions. Now, application statuses are automatically collected and detected by Karmada. - -**2) Differentiated control on applications** - -Adopting DevOps means developers can easily manage the lifecycle of applications. 
-We leverage Karmada Override policies to directly interconnect with application profiles such as environment variables, startup parameters, and image repositories so that developers can better control the differences of applications in different clusters. - -**3) Quick cluster startup and adaptation to GitOps** - -Basic services (system and common services) are configured for all clusters in Karmada Propagation policies and managed by Karmada when a new cluster is created. These basic services can be delivered along with the cluster, requiring no manual initialization and greatly shortening the delivery process. -Most basic services are managed by the GitOps system, which is convenient and intuitive. - -**4) Short reconstruction period and no impact on services** - -Thanks to the support of native Kubernetes APIs, we can quickly integrate Karmada into our platform. -We use Karmada the way we use Kubernetes. The only thing we need to configure is Propagation policies, -which can be customized by resource name, resource type, or LabelSelector. - -## Gains - -Since February 2021, three of us have become contributors to the Karmada community. -We witness the releases of Karmada from version 0.5.0 to 1.0.0. To write codes that satisfy all is challenging. -We have learned a lot from the community during the practice, and we always welcome more of you to join us. - diff --git a/docs/adoptions/vipkid-zh.md b/docs/adoptions/vipkid-zh.md deleted file mode 100644 index 0716e1eb5..000000000 --- a/docs/adoptions/vipkid-zh.md +++ /dev/null @@ -1,111 +0,0 @@ -# VIPKID基于Karmada的容器PaaS平台落地实践 - -本篇文章来自在线教育平台VIPKID在容器体系设计过程中的落地实践,从VIPKID的业务背景、业务挑战、选型过程、引入Karmada前后的对比以及收益等方面深入剖析了VIPKID容器化改造过程。 - -## 业务背景 - -VIPKID的业务覆盖数十个国家和地区,签约教师数量超过8万名,为全球100万学员提供教育服务,累计授课1.5亿节。为了更好的为教师和学员提供服务,相关应用会按地域就近部署,比如面向教师的服务会贴近教师所在的地域部署,面向学员的服务直接部署在学员侧。为此,VIPKID从全球多个云供应商采购了数十个集群用于构建VIPKID内部基础设施。 - -### 无法躲开的多云,多Region - -VIPKID的业务形态是将国内外的教育资源进行互换互补,包括过去的北美外教资源引入到国内和将现在国内的教育服务提供到海外。 - -为了追求优质的上课体验,除了组建了一个稳定低延迟的互通链路外,不同形态的计算服务都要就近部署,比如教师客户端的依赖服务,要优先部署在海外,而家长客户端的依赖服务则首选国内。 - -因此VIPKID使用了国内外多个公有云厂商的网络资源和计算资源。对多云的资源管理,很早就成为了VIPKID的IaaS管理系统的一部分。 - -### K8s多集群策略 - -在VIPKID容器体系设计之初,我们首选的方案是单集群模式,该方案的优势是结构简单且管理成本低。但综合评估多云之间和多Region的网络质量与基础设施(网络和存储)方案,并综合我们的项目周期,只能放弃这个想法。主要原因有两点: -1)不同云之间的网络延迟和稳定性无法保证 -2)各家云厂商的容器网络和存储方案均有差异 - -若要解决以上问题,需要耗费较高的成本。最后,我们按照云厂商和Region的维度去配置K8s集群,如此就拥有很多K8s集群。 - -### 集群的容灾 - -容灾对于容器而言,相比传统的VM资源已经友好得多。K8s解决了大部分Pod和Node级别的Case,但单个集群的灾难处理,还需要我们自行解决,由于VIPKID早期已经完成了微服务化的改造,因此可以利用快速创建新集群或者扩容现有特定集群的方式来快速进行计算服务的转移。 - -## 业务挑战及痛点 - -### 如何对待不同集群中的同一个应用? - -在多云部署过程中,我们遇到了一个很复杂的问题:同一个应用在不同集群中的workload几乎都是不同的,比如使用的镜像、启动参数(配置)甚至有时候连发布版本都不一样。前期我们是依赖自己容器PaaS平台来管理这些差异,但随着差异需求增多,场景也越来越多,差异抽象越发困难。 - -我们的初衷是让开发者可以直接在我们的PaaS平台上管理其应用,但因为复杂的差异越来越难以管理,最终只能依赖运维同学协助操作。另外,在某些复杂场景下,运维同学也难以快速、准确的进行管理,如此偏离了DevOps理念,不仅增加了管理成本,而且降低了使用效率。 - -### 如何快速完成故障迁移? 
- -对于故障迁移,这里我从应用和集群两个不同视角来描述,应用视角看重关键应用的自愈能力以及能否在多集群状态下保障整体的负载能力。而集群维度则更看重集群整体级别的灾难或对新集群的交付需求,比如网络故障,此时应对策略会有不同。 - -**应用视角:应用的动态迁移** -从应用出发,保障关键应用的稳定性,可以灵活的调整应用在多集群中的部署情况。例如某关键应用在A集群的实例出现故障且无法快速恢复,那就需要根据事先制定的策略,在同厂商或同Region下的集群中创建实例,并且这一切应该是自动的。 - -**集群视角:新集群如何快速ready** -新集群的创建在我们的业务场景很常见。比如当某个K8s集群不可用时,我们的期望是通过拉起新集群的方式进行快速修复,再如,业务对新的云厂商或者Region有需求时候,我们也需要能够快速交付集群资源。我们希望他能够像启动Pod一样迅速。 - -## Why Karmada - -### 不自己造轮子,着眼开源社区 - -上述列举的痛点,若只试图满足暂时的需求是远远不够的,系统在快速发展过程中必须要适当的进行抽象和解耦,并且随着系统组成模块的角色分工逐渐清晰,也需要适当的重构。 - -对于我们的容器PaaS平台而言,业务需求与集群资源耦合越发严重,我们将解耦的切面画在了多集群的管理上,由我们自研的平台管理应用的生命周期,另外一个系统管理集群资源的操作指令。 - -明确需求后,我们就开始在开源社区寻找与调研这类产品,但找到的开源产品都是平台层的,也就是与我们自研平台解决思路类似,并且大多是以集群视角来进行操作的,所有资源首先在集群的维度上就被割裂开了,并不符合我们对应用视角的诉求。 - -以应用为视角,可以理解为将多个K8s集群作为一个大型集群来管理,这样一个workload就可以看做是一个应用(或一个应用的某个版本)而不是散落在多个集群中同一个应用的多个workload。 - -另外一个原则是尽量低的接入成本。我们调研了开源社区的多种方案,综合评估后,发现Karmada比较符合我们的需求。 - -### 就它了, Karmada! - -试用Karmada后,发现有以下几方面优势: - -**1)Karmada真正意义的实现了以一个K8s集群视角来管理多集群的能力**,让我们能够以应用视角管理多集群中的资源。另外,Karmada的OverridePolicy设计几乎所有差异都可以单独声明出来,简单直观且便于管理,这与我们内部对应用画像在不同集群之间的应用差异不谋而合。 - -**2)Karmada完全使用了K8s原生API**,使得我们可以像原来一样使用,同时也表明我们在后续的接入成本会很低。并且Karmada的CRD相对来讲也更容易理解,我们平台的服务画像模块可以很容易的将分发和差异配置动态渲染成Propagation和Override策略,下发给Karmada控制面。 - -**3)开源开放的社区治理模式**,也是我们团队最看重的一点。在试用Karmada过程中,不论是我们自己还是社区方面对需求的理解和设想的解决方案,都可以在社区中开放讨论。同时,在参与代码贡献过程中,我们团队整体技术能力也显著提升。 - -## Karmada at VIPKID - -我们的容器平台,承载了整个公司所有的容器化部署诉求,包括有无状态、在离线业务和AI大数据等。并且要求PaaS平台的设计和实施不会对某一家公有云产生任何依赖,因此我们无法使用云厂商封装过的一些产品。 - -我们会依赖内部的IaaS平台去管理多家云厂商的各类基础设施,包括K8s集群的创建、扩容、VPC,子网以及安全组的配置。这个很重要,因为这让我们可以标准化多个云厂商的K8s集群,让上层PaaS平台几乎无需关心厂商级别的差异。 - -另外,对于系统级别的应用和组件,我们为开发者创建了另外一条管理渠道,那就是使用GitOps。这对高阶开发者来说要更加友好,对系统应用的组件安装部署更为高效。 - -### 基于Karmada的容器化改造方案 - -在平台落地之初,我们单独剥离了一个组件(上图左侧的“集群汇聚API”),专门和K8s集群进行交互,并且向上保留K8s原生API,也会附加一些和集群相关信息。 - -但在落地过程中,“容器应用管理”系统需要进行许多操作适配多集群下的复杂情况。比如虽然PaaS系统看起来是一个应用,但系统需要渲染不同的完整资源声明到不同的集群,使得我们在真正维护多集群应用时仍然是不相关的、割裂的,因为我们没办法在底层解决这个问题。诸如此类问题的解决还是占用了团队不少的资源,尤其是引入CRD资源后,还需要重复的解决这方面的问题。并且这个系统无法不去关心每个集群里面的细节状况,如此背离了我们设计“集群汇聚API”组件的初衷。 - -另外,由于GitOps也需要与集群强相关,在集群数量较大,并且经常伴随集群上下线的情况下,此时,若要正常运转就需要对GitOps的应用配置进行批量变更,随之增加的复杂度,让整体效果并未达到预期。 - -下图是VIPKID引入Karmada之前和之后的架构对比: - -![Karmada在VIPKID](../images/adoptions-vipkid-architecture-zh.png "Karmada在VIPKID") - -**引入Karmada后,多集群聚合层得以真正的统一**,我们可以在Karmada控制平面以应用维度去管理资源,多数情况下都不需要深入到受控集群中,只需要与Karmada交互即可。如此极大的简化了我们的“容器应用管理”系统。现在,我们的PaaS平台可以完全倾注于业务需求,Karmada强大的能力已经满足了当前我们的各类需求。 - -而GitOps体系使用Karmada后,系统级组件也可以简单的在各个集群中进行发布和升级,不仅让我们体验到了GitOps本身的便利,更是让我们收获到了GitOps*Karmada的成倍级的效率提升。 - -## 收益 - -以应用为维度来管理K8s资源,降低了平台复杂度,同时大幅提升使用效率。下面以我们PaaS平台特性入手,来描述引入Karmada后的改变。 - -**1)多集群应用的部署速度显著提升:** 先前在部署时需要向每个集群发送部署指令,随之监测部署状态是否异常。如此就需要不断的检查多个集群中的资源状态,然后再根据异常情况做不同的处理,这个过程逻辑繁琐并且缓慢。引入Karmada后,Karmada会自动收集和汇聚应用在各个集群的状态,这样我们可以通过Karmada来感知应用状态。 - -**2)应用的差异控制可开放给开发者:** DevOps文化最重要的一点就是开发者要能够完全参与进来,能够便捷地对应用全生命周期进行管理。我们充分利用了Karmada的Override策略,直接与应用画像对接,让开发者可以清晰的了解和控制应用在不同集群的差异,现已支持环境变量,启动参数,镜像仓库。 - -**3)集群的快速拉起&对GitOps适配:** 我们将基础服务(系统级和通用类型服务)在Karmada的Propagation设定为全量集群,在新集群创建好以后,直接加入到Karmada中进行纳管,这些基础服务可以伴随集群交付一并交付,节省了我们对集群做基础服务初始化的操作,大大缩短了交付环节和时间。并且大部分基础服务都是由我们的GitOps体系管理的,相比过去一个个集群的配置来讲,既方便又直观。 - -**4)平台改造周期短,业务无感知:** 得益于Karmada的原生K8s API,我们花了很少的时间在接入Karmada上。Karmada真正做到了原来怎么用K8s现在继续怎么用就可以了。唯一需要考虑的是Propagation策略的定制,可以按照资源名字的维度,也可以按照资源类型或LabelSelector的维度来声明,极其方便。 - -### 参与开源项目的收获 - -从2021年的2月份接触到Karmada项目以来,我们团队先后有3人成为了Karmada社区的Contributor,从0.5.0到1.0.0版本,参与和见证了多个feature的发布。同时Karmada也见证了我们团队的成长。 - -把自己的需求写成代码很简单,把自己的需求和其他人讨论,对所有人的需求进行圈定和取舍,选择一个符合大家的解决方案,再转换成代码,难度则会升级。我们团队在此期间收获了很多,也成长了许多,并且为能够参与Karmada项目建设而感到自豪,希望更多的开发者能加入Karmada社区,一起让社区生态更加繁荣! 
\ No newline at end of file diff --git a/docs/bash-auto-completion-on-linux.md b/docs/bash-auto-completion-on-linux.md deleted file mode 100644 index 3179b51a3..000000000 --- a/docs/bash-auto-completion-on-linux.md +++ /dev/null @@ -1,47 +0,0 @@ -# bash auto-completion on Linux -## Introduction -The karmadactl completion script for Bash can be generated with the command karmadactl completion bash. Sourcing the completion script in your shell enables karmadactl autocompletion. - -However, the completion script depends on [bash-completion](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`). - -## Install bash-completion -bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc. - -The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file. - -```bash -source /usr/share/bash-completion/bash_completion -``` - -Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`. - -## Enable karmadactl autocompletion -You now need to ensure that the karmadactl completion script gets sourced in all your shell sessions. There are two ways in which you can do this: - -- Source the completion script in your ~/.bashrc file: - -```bash -echo 'source <(karmadactl completion bash)' >>~/.bashrc -``` - -- Add the completion script to the /etc/bash_completion.d directory: - -```bash -karmadactl completion bash >/etc/bash_completion.d/karmadactl -``` - -If you have an alias for karmadactl, you can extend shell completion to work with that alias: - -```bash -echo 'alias km=karmadactl' >>~/.bashrc -echo 'complete -F __start_karmadactl km' >>~/.bashrc -``` - -> **Note:** bash-completion sources all completion scripts in /etc/bash_completion.d. - -Both approaches are equivalent. After reloading your shell, karmadactl autocompletion should be working. - -## Enable kubectl-karmada autocompletion -Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178). - -We will update the documentation as soon as it does. \ No newline at end of file diff --git a/docs/contributors/devel/cherry-picks.md b/docs/contributors/devel/cherry-picks.md deleted file mode 100644 index 7c6c1c527..000000000 --- a/docs/contributors/devel/cherry-picks.md +++ /dev/null @@ -1,119 +0,0 @@ -# Overview - -This document explains how cherry picks are managed on release branches within -the `karmada-io/karmada` repository. -A common use case for this task is backporting PRs from master to release -branches. - -> This doc is lifted from [Kubernetes cherry-pick](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md). 
-
-- [Prerequisites](#prerequisites)
-- [What Kind of PRs are Good for Cherry Picks](#what-kind-of-prs-are-good-for-cherry-picks)
-- [Initiate a Cherry Pick](#initiate-a-cherry-pick)
-- [Cherry Pick Review](#cherry-pick-review)
-- [Troubleshooting Cherry Picks](#troubleshooting-cherry-picks)
-- [Cherry Picks for Unsupported Releases](#cherry-picks-for-unsupported-releases)
-
-## Prerequisites
-
-- A pull request merged against the `master` branch.
-- The release branch exists (example: [`release-1.0`](https://github.com/karmada-io/karmada/tree/release-1.0))
-- The normal git and GitHub configured shell environment for pushing to your
-  karmada `origin` fork on GitHub and making a pull request against a
-  configured remote `upstream` that tracks
-  `https://github.com/karmada-io/karmada.git`, including `GITHUB_USER`.
-- Have GitHub CLI (`gh`) installed following [installation instructions](https://github.com/cli/cli#installation).
-- A GitHub personal access token which has permissions "repo" and "read:org".
-  These permissions are required for [gh auth login](https://cli.github.com/manual/gh_auth_login)
-  and are not used for anything unrelated to the cherry-pick creation process
-  (creating a branch and initiating a PR).
-
-## What Kind of PRs are Good for Cherry Picks
-
-Compared to the normal master branch's merge volume across time,
-the release branches see one or two orders of magnitude fewer PRs.
-This is because changes to release branches receive one or two orders of magnitude more scrutiny.
-Again, the emphasis is on critical bug fixes, e.g.,
-
-- Loss of data
-- Memory corruption
-- Panic, crash, hang
-- Security
-
-A bugfix for a functional issue (not a data loss or security issue) that only
-affects an alpha feature does not qualify as a critical bug fix.
-
-If you are proposing a cherry pick and it is not a clear and obvious critical
-bug fix, please reconsider. If upon reflection you wish to continue, bolster
-your case by supplementing your PR with e.g.,
-
-- A GitHub issue detailing the problem
-
-- Scope of the change
-
-- Risks of adding a change
-
-- Risks of associated regression
-
-- Testing performed, test cases added
-
-- Key stakeholder reviewers/approvers attesting to their confidence in the
-  change being a required backport
-
-It is critical that our full community is actively engaged on enhancements in
-the project. If a released feature was not enabled on a particular provider's
-platform, this is a community miss that needs to be resolved in the `master`
-branch for subsequent releases. Such enabling will not be backported to the
-patch release branches.
-
-## Initiate a Cherry Pick
-
-- Run the [cherry pick script][cherry-pick-script]
-
-  This example applies a master branch PR #1206 to the remote branch
-  `upstream/release-1.0`:
-
-  ```shell
-  hack/cherry_pick_pull.sh upstream/release-1.0 1206
-  ```
-
-  - Be aware the cherry pick script assumes you have a git remote called
-    `upstream` that points at the Karmada GitHub org.
-
-  - You will need to run the cherry pick script separately for each patch
-    release you want to cherry pick to. Cherry picks should be applied to all
-    active release branches where the fix is applicable.
-
-  - If `GITHUB_TOKEN` is not set you will be asked for your GitHub password:
-    provide the GitHub [personal access token](https://github.com/settings/tokens) rather than your actual GitHub
-    password. If you can securely set the environment variable `GITHUB_TOKEN`
-    to your personal access token then you can avoid an interactive prompt.
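For example, a minimal sketch of the non-interactive flow (the token value is a placeholder; the branch and PR number reuse the example above):

```shell
# Export the token once so the script can authenticate without prompting,
# then run the cherry-pick script for the target release branch and PR number.
export GITHUB_TOKEN=<your-personal-access-token>
hack/cherry_pick_pull.sh upstream/release-1.0 1206
```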
-    Refer [https://github.com/github/hub/issues/2655#issuecomment-735836048](https://github.com/github/hub/issues/2655#issuecomment-735836048)
-
-## Cherry Pick Review
-
-As with any other PR, code OWNERS review (`/lgtm`) and approve (`/approve`) on
-cherry pick PRs as they deem appropriate.
-
-The same release note requirements apply as normal pull requests, except the
-release note stanza will auto-populate from the master branch pull request from
-which the cherry pick originated.
-
-## Troubleshooting Cherry Picks
-
-Contributors may encounter some of the following difficulties when initiating a
-cherry pick.
-
-- A cherry pick PR does not apply cleanly against an old release branch. In
-  that case, you will need to manually fix conflicts.
-
-- The cherry pick PR includes code that does not pass CI tests. In such a case
-  you will have to fetch the auto-generated branch from your fork, amend the
-  problematic commit and force push to the auto-generated branch.
-  Alternatively, you can create a new PR, which is noisier.
-
-## Cherry Picks for Unsupported Releases
-
-Which releases the community supports and patches still needs to be discussed.
-
-[cherry-pick-script]: https://github.com/karmada-io/karmada/blob/master/hack/cherry_pick_pull.sh

diff --git a/docs/contributors/devel/lifted.md b/docs/contributors/devel/lifted.md
deleted file mode 100644
index a6a9c078f..000000000
--- a/docs/contributors/devel/lifted.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Overview
-This document explains how lifted code is managed.
-A common use case for this task is a developer lifting code from other repositories into the `pkg/util/lifted` directory.
-
-- [Steps of lifting code](#steps-of-lifting-code)
-- [How to write lifted comments](#how-to-write-lifted-comments)
-- [Examples](#examples)
-
-## Steps of lifting code
-- Copy code from another repository and save it to a Go file under `pkg/util/lifted`.
-- Optionally change the lifted code.
-- Add lifted comments for the code [as guided](#how-to-write-lifted-comments).
-- Run `hack/update-lifted.sh` to update the lifted doc `pkg/util/lifted/doc.go`.
-
-## How to write lifted comments
-Lifted comments shall be placed just before the lifted code (could be a func, type, var or const). Only empty lines and comments are allowed between lifted comments and lifted code.
-
-Lifted comments are composed of one or more comment lines, each in the format of `+lifted:KEY[=VALUE]`. The value is optional for some keys.
-
-Valid keys are as follows:
-
-- source:
-
-  Key `source` is required. Its value indicates where the code is lifted from.
-
-- changed:
-
-  Key `changed` is optional. It indicates whether the code is changed. Its value is optional (`true` or `false`, defaults to `true`). Not adding this key or setting it to `false` means no code change.
-
-## Examples
-### Lifting function

-Lift function `IsQuotaHugePageResourceName` to `corehelpers.go`:
-
-```go
-// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61
-
-// IsQuotaHugePageResourceName returns true if the resource name has the quota
-// related huge page resource prefix.
-func IsQuotaHugePageResourceName(name corev1.ResourceName) bool { - return strings.HasPrefix(string(name), corev1.ResourceHugePagesPrefix) || strings.HasPrefix(string(name), corev1.ResourceRequestsHugePagesPrefix) -} -``` - -Added in `doc.go`: - -```markdown -| lifted file | source file | const/var/type/func | changed | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| -| corehelpers.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61 | func IsQuotaHugePageResourceName | N | -``` - -### Changed lifting function - -Lift and change function `GetNewReplicaSet` to `deployment.go` - -```go -// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544 -// +lifted:changed - -// GetNewReplicaSet returns a replica set that matches the intent of the given deployment; get ReplicaSetList from client interface. -// Returns nil if the new replica set doesn't exist yet. -func GetNewReplicaSet(deployment *appsv1.Deployment, f ReplicaSetListFunc) (*appsv1.ReplicaSet, error) { - rsList, err := ListReplicaSetsByDeployment(deployment, f) - if err != nil { - return nil, err - } - return FindNewReplicaSet(deployment, rsList), nil -} -``` - -Added in `doc.go`: - -```markdown -| lifted file | source file | const/var/type/func | changed | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| -| deployment.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544 | func GetNewReplicaSet | Y | -``` - -### Lifting const - -Lift const `isNegativeErrorMsg` to `corevalidation.go `: - -```go -// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59 -const isNegativeErrorMsg string = apimachineryvalidation.IsNegativeErrorMsg -``` - -Added in `doc.go`: - -```markdown -| lifted file | source file | const/var/type/func | changed | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| -| corevalidation.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59 | const isNegativeErrorMsg | N | -``` - -### Lifting type - -Lift type `Visitor` to `visitpod.go`: - -```go -// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83 - -// Visitor is called with each object name, and returns true if visiting should continue -type Visitor func(name string) (shouldContinue bool) -``` - -Added in `doc.go`: - -```markdown -| lifted file | source file | const/var/type/func | changed | -|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| -| visitpod.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83 | type Visitor | N | -``` diff --git a/docs/contributors/devel/profiling-karmada.md b/docs/contributors/devel/profiling-karmada.md 
deleted file mode 100644 index 9935dcff0..000000000 --- a/docs/contributors/devel/profiling-karmada.md +++ /dev/null @@ -1,61 +0,0 @@ -# Profiling Karmada - -## Enable profiling - -To profile Karmada components running inside a Kubernetes pod, set --enable-pprof flag to true in the yaml of Karmada components. -The default profiling address is 127.0.0.1:6060, and it can be configured via `--profiling-bind-address`. -The components which are compiled by the Karmada source code support the flag above, including `Karmada-agent`, `Karmada-aggregated-apiserver`, `Karmada-controller-manager`, `Karmada-descheduler`, `Karmada-search`, `Karmada-scheduler`, `Karmada-scheduler-estimator`, `Karmada-webhook`. - -``` ---enable-pprof - Enable profiling via web interface host:port/debug/pprof/. ---profiling-bind-address string - The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060") - -``` - -## Expose the endpoint at the local port - -You can get at the application in the pod by port forwarding with kubectl, for example: - -```shell -$ kubectl -n karmada-system get pod -NAME READY STATUS RESTARTS AGE -karmada-controller-manager-7567b44b67-8kt59 1/1 Running 0 19s -... -``` - -```shell -$ kubectl -n karmada-system port-forward karmada-controller-manager-7567b44b67-8kt59 6060 -Forwarding from 127.0.0.1:6060 -> 6060 -Forwarding from [::1]:6060 -> 6060 -``` - -The HTTP endpoint will now be available as a local port. - -## Generate the data - -You can then generate the file for the memory profile with curl and pipe the data to a file: - -```shell -$ curl http://localhost:6060/debug/pprof/heap > heap.pprof -``` - -Generate the file for the CPU profile with curl and pipe the data to a file (7200 seconds is two hours): - -```shell -curl "http://localhost:6060/debug/pprof/profile?seconds=7200" > cpu.pprof -``` - -## Analyze the data - -To analyze the data: - -```shell -go tool pprof heap.pprof -``` - -## Read more about profiling - -1. [Profiling Golang Programs on Kubernetes](https://danlimerick.wordpress.com/2017/01/24/profiling-golang-programs-on-kubernetes/) -2. [Official Go blog](https://blog.golang.org/pprof) \ No newline at end of file diff --git a/docs/contributors/guide/git_workflow.png b/docs/contributors/guide/git_workflow.png deleted file mode 100644 index bb1f2330e..000000000 Binary files a/docs/contributors/guide/git_workflow.png and /dev/null differ diff --git a/docs/contributors/guide/github-workflow.md b/docs/contributors/guide/github-workflow.md deleted file mode 100644 index 2a6fa72f4..000000000 --- a/docs/contributors/guide/github-workflow.md +++ /dev/null @@ -1,279 +0,0 @@ ---- -title: "GitHub Workflow" -weight: 6 -description: | - An overview of the GitHub workflow used by the Karmada project. It includes - some tips and suggestions on things such as keeping your local environment in - sync with upstream and commit hygiene. ---- - -> This doc is lifted from [Kubernetes github-workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). - -![Git workflow](git_workflow.png) - -### 1 Fork in the cloud - -1. Visit https://github.com/karmada-io/karmada -2. Click `Fork` button (top right) to establish a cloud-based fork. - -### 2 Clone fork to local storage - -Per Go's [workspace instructions][go-workspace], place Karmada' code on your -`GOPATH` using the following cloning procedure. 
- -[go-workspace]: https://golang.org/doc/code.html#Workspaces - -Define a local working directory: - -```sh -# If your GOPATH has multiple paths, pick -# just one and use it instead of $GOPATH here. -# You must follow exactly this pattern, -# neither `$GOPATH/src/github.com/${your github profile name/` -# nor any other pattern will work. -export working_dir="$(go env GOPATH)/src/github.com/karmada-io" -``` - -Set `user` to match your github profile name: - -```sh -export user={your github profile name} -``` - -Both `$working_dir` and `$user` are mentioned in the figure above. - -Create your clone: - -```sh -mkdir -p $working_dir -cd $working_dir -git clone https://github.com/$user/karmada.git -# or: git clone git@github.com:$user/karmada.git - -cd $working_dir/karmada -git remote add upstream https://github.com/karmada-io/karmada.git -# or: git remote add upstream git@github.com:karmada-io/karmada.git - -# Never push to upstream master -git remote set-url --push upstream no_push - -# Confirm that your remotes make sense: -git remote -v -``` - -### 3 Branch - -Get your local master up to date: - -```sh -# Depending on which repository you are working from, -# the default branch may be called 'main' instead of 'master'. - -cd $working_dir/karmada -git fetch upstream -git checkout master -git rebase upstream/master -``` - -Branch from it: -```sh -git checkout -b myfeature -``` - -Then edit code on the `myfeature` branch. - -### 4 Keep your branch in sync - -```sh -# Depending on which repository you are working from, -# the default branch may be called 'main' instead of 'master'. - -# While on your myfeature branch -git fetch upstream -git rebase upstream/master -``` - -Please don't use `git pull` instead of the above `fetch` / `rebase`. `git pull` -does a merge, which leaves merge commits. These make the commit history messy -and violate the principle that commits ought to be individually understandable -and useful (see below). You can also consider changing your `.git/config` file via -`git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`. - -### 5 Commit - -Commit your changes. - -```sh -git commit --signoff -``` -Likely you go back and edit/build/test some more then `commit --amend` -in a few cycles. - -### 6 Push - -When ready to review (or just to establish an offsite backup of your work), -push your branch to your fork on `github.com`: - -```sh -git push -f ${your_remote_name} myfeature -``` - -### 7 Create a pull request - -1. Visit your fork at `https://github.com/$user/karmada` -2. Click the `Compare & Pull Request` button next to your `myfeature` branch. - -_If you have upstream write access_, please refrain from using the GitHub UI for -creating PRs, because GitHub will create the PR branch inside the main -repository rather than inside your fork. - -#### Get a code review - -Once your pull request has been opened it will be assigned to one or more -reviewers. Those reviewers will do a thorough code review, looking for -correctness, bugs, opportunities for improvement, documentation and comments, -and style. - -Commit changes made in response to review comments to the same branch on your -fork. - -Very small PRs are easy to review. Very large PRs are very difficult to review. - -#### Squash commits - -After a review, prepare your PR for merging by squashing your commits. - -All commits left on your branch after a review should represent meaningful milestones or units of work. 
Use commits to add clarity to the development and review process.
-
-Before merging a PR, squash the following kinds of commits:
-
-- Fixes/review feedback
-- Typos
-- Merges and rebases
-- Work in progress
-
-Aim to have every commit in a PR compile and pass tests independently if you can, but it's not a requirement. In particular, `merge` commits must be removed, as they will not pass tests.
-
-To squash your commits, perform an [interactive
-rebase](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History):
-
-1. Check your git branch:
-
-   ```
-   git status
-   ```
-
-   Output is similar to:
-
-   ```
-   On branch your-contribution
-   Your branch is up to date with 'origin/your-contribution'.
-   ```
-
-2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using `HEAD~<n>`, where `<n>` represents the number of commits to include in the rebase.
-
-   ```
-   git rebase -i HEAD~3
-   ```
-
-   Output is similar to:
-
-   ```
-   pick 2ebe926 Original commit
-   pick 31f33e9 Address feedback
-   pick b0315fe Second unit of work
-
-   # Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
-   #
-   # Commands:
-   # p, pick = use commit
-   # r, reword = use commit, but edit the commit message
-   # e, edit = use commit, but stop for amending
-   # s, squash = use commit, but meld into previous commit
-   # f, fixup = like "squash", but discard this commit's log message
-
-   ...
-
-   ```
-
-3. Use a command line text editor to change the word `pick` to `squash` for the commits you want to squash, then save your changes and continue the rebase:
-
-   ```
-   pick 2ebe926 Original commit
-   squash 31f33e9 Address feedback
-   pick b0315fe Second unit of work
-
-   ...
-
-   ```
-
-   Output (after saving changes) is similar to:
-
-   ```
-   [detached HEAD 61fdded] Second unit of work
-   Date: Thu Mar 5 19:01:32 2020 +0100
-   2 files changed, 15 insertions(+), 1 deletion(-)
-
-   ...
-
-   Successfully rebased and updated refs/heads/master.
-   ```
-4. Force push your changes to your remote branch:
-
-   ```
-   git push --force
-   ```
-
-For mass automated fixups (e.g. automated doc formatting), use one or more
-commits for the changes to tooling and a final commit to apply the fixup en
-masse. This makes reviews easier.
-
-### Merging a commit
-
-Once you've received review and approval and your commits are squashed, your PR is ready for merging.
-
-Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven't squashed your commits, they may ask you to do so before approving a PR.
-
-### Reverting a commit
-
-In case you wish to revert a commit, use the following instructions.
-
-_If you have upstream write access_, please refrain from using the
-`Revert` button in the GitHub UI for creating the PR, because GitHub
-will create the PR branch inside the main repository rather than inside your fork.
-
-- Create a branch and sync it with upstream.
-
-  ```sh
-  # Depending on which repository you are working from,
-  # the default branch may be called 'main' instead of 'master'.
-
-  # create a branch
-  git checkout -b myrevert
-
-  # sync the branch with upstream
-  git fetch upstream
-  git rebase upstream/master
-  ```
-- If the commit you wish to revert is a:
-
-  - **merge commit:**
-
-    ```sh
-    # SHA is the hash of the merge commit you wish to revert
-    git revert -m 1 SHA
-    ```
-
-  - **single commit:**
-
-    ```sh
-    # SHA is the hash of the single commit you wish to revert
-    git revert SHA
-    ```
-
-- This will create a new commit reverting the changes. Push this new commit to your remote.
-
-```sh
-git push ${your_remote_name} myrevert
-```
-
-- [Create a Pull Request](#7-create-a-pull-request) using this branch.

diff --git a/docs/descheduler.md b/docs/descheduler.md
deleted file mode 100644
index ab4a5a85e..000000000
--- a/docs/descheduler.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# Descheduler
-
-Users can divide the replicas of a workload among different clusters according to the available resources of member clusters.
-However, the scheduler's decisions are influenced by its view of Karmada at the point in time when a new `ResourceBinding`
-appears for scheduling. Because Karmada multi-clusters are very dynamic and their state changes over time, it may be desirable
-to move already-running replicas to other clusters when a cluster runs short of resources. This may happen when
-some nodes of a cluster fail and the cluster no longer has enough resources to accommodate their pods, or when the estimators
-have some estimation deviation, which is inevitable.
-
-The karmada-descheduler inspects every deployment periodically, every 2 minutes by default. In each period, it finds out
-how many unschedulable replicas a deployment has in its target scheduled clusters by calling karmada-scheduler-estimator. Then
-it evicts them by decreasing `spec.clusters` and triggers karmada-scheduler to do a 'Scale Schedule' based on the current
-situation. Note that it takes effect only when the replica scheduling strategy is dynamic division.
-
-## Prerequisites
-
-### Karmada has been installed
-
-We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
-
-### Member cluster component is ready
-
-Ensure that all member clusters have joined Karmada and that the corresponding karmada-scheduler-estimator of each is installed into karmada-host.
-
-Check member clusters using the following command:
-
-```bash
-# check whether member clusters have joined
-$ kubectl get cluster
-NAME      VERSION   MODE   READY   AGE
-member1   v1.19.1   Push   True    11m
-member2   v1.19.1   Push   True    11m
-member3   v1.19.1   Pull   True    5m12s
-
-# check whether the karmada-scheduler-estimator of a member cluster has been working well
-$ kubectl --context karmada-host -n karmada-system get pod | grep estimator
-karmada-scheduler-estimator-member1-696b54fd56-xt789   1/1     Running   0          77s
-karmada-scheduler-estimator-member2-774fb84c5d-md4wt   1/1     Running   0          75s
-karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9   1/1     Running   0          72s
-```
-
-- If a cluster has not joined, use `hack/deploy-agent-and-estimator.sh` to deploy both karmada-agent and karmada-scheduler-estimator.
-- If the clusters have joined, use `hack/deploy-scheduler-estimator.sh` to deploy only karmada-scheduler-estimator.
-
-### Scheduler option '--enable-scheduler-estimator'
-
-After all member clusters have joined and all estimators are ready, specify the option `--enable-scheduler-estimator=true` to enable the scheduler estimator.
- -```bash -# edit the deployment of karmada-scheduler -$ kubectl --context karmada-host -n karmada-system edit deployments.apps karmada-scheduler -``` - -Add the option `--enable-scheduler-estimator=true` into the command of container `karmada-scheduler`. - -### Descheduler has been installed - -Ensure that the karmada-descheduler has been installed onto karmada-host. - -```bash -$ kubectl --context karmada-host -n karmada-system get pod | grep karmada-descheduler -karmada-descheduler-658648d5b-c22qf 1/1 Running 0 80s -``` - -## Example - -Let's simulate a replica scheduling failure in a member cluster due to lack of resources. - -First we create a deployment with 3 replicas and divide them into 3 member clusters. - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - - member3 - replicaScheduling: - replicaDivisionPreference: Weighted - replicaSchedulingType: Divided - weightPreference: - dynamicWeight: AvailableReplicas ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx - labels: - app: nginx -spec: - replicas: 3 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - image: nginx - name: nginx - resources: - requests: - cpu: "2" -``` - -It is possible for these 3 replicas to be evenly divided into 3 member clusters, that is, one replica in each cluster. -Now we taint all nodes in member1 and evict the replica. - -```bash -# mark node "member1-control-plane" as unschedulable in cluster member1 -$ kubectl --context member1 cordon member1-control-plane -# delete the pod in cluster member1 -$ kubectl --context member1 delete pod -l app=nginx -``` - -A new pod will be created and cannot be scheduled by `kube-scheduler` due to lack of resources. - -```bash -# the state of pod in cluster member1 is pending -$ kubectl --context member1 get pod -NAME READY STATUS RESTARTS AGE -nginx-68b895fcbd-fccg4 1/1 Pending 0 80s -``` - -After about 5 to 7 minutes, the pod in member1 will be evicted and scheduled to other available clusters. - -```bash -# get the pod in cluster member1 -$ kubectl --context member1 get pod -No resources found in default namespace. -# get a list of pods in cluster member2 -$ kubectl --context member2 get pod -NAME READY STATUS RESTARTS AGE -nginx-68b895fcbd-dgd4x 1/1 Running 0 6m3s -nginx-68b895fcbd-nwgjn 1/1 Running 0 4s -``` - diff --git a/docs/frequently-asked-questions.md b/docs/frequently-asked-questions.md deleted file mode 100644 index 0c45a44ea..000000000 --- a/docs/frequently-asked-questions.md +++ /dev/null @@ -1,40 +0,0 @@ -# FAQ(Frequently Asked Questions) - -## What is the difference between PropagationPolicy and ClusterPropagationPolicy? - -The `PropagationPolicy` is a namespace-scoped resource type which means the objects with this type must reside in a namespace. -And the `ClusterPropagationPolicy` is the cluster-scoped resource type which means the objects with this type don't have a namespace. - -Both of them are used to hold the propagation declaration, but they have different capacities: -- PropagationPolicy: can only represent the propagation policy for the resources in the same namespace. -- ClusterPropagationPolicy: can represent the propagation policy for all resources including namespace-scoped and cluster-scoped resources. 
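As a minimal sketch of this scope difference (the resource names and the cluster name `member1` are illustrative; the `PropagationPolicy` fields mirror the descheduler example above, and the `ClusterPropagationPolicy` is assumed to share the same spec):

```yaml
# Namespaced: lives in "default" and may only match resources in "default".
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
---
# Cluster-scoped: has no namespace and may also match cluster-scoped
# resources, such as a ClusterRole.
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: clusterrole-propagation
spec:
  resourceSelectors:
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      name: example-clusterrole
  placement:
    clusterAffinity:
      clusterNames:
        - member1
```

In short, use a `PropagationPolicy` for resources that live in one namespace, and a `ClusterPropagationPolicy` when the policy must cover cluster-scoped resources as well.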
- -## What is the difference between 'Push' and 'Pull' mode of a cluster? - -Please refer to [Overview of Push and Pull](./userguide/cluster-registration.md#overview-of-cluster-mode). - -## Why Karmada requires `kube-controller-manager`? - -`kube-controller-manager` is composed of a bunch of controllers, Karmada inherits some controllers from it -to keep a consistent user experience and behavior. - -It's worth noting that not all controllers are needed by Karmada, for the recommended controllers please -refer to [Recommended Controllers](./userguide/configure-controllers.md#recommended-controllers). - - -## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver? - -The quick answer is `yes`. In that case, you can save the effort to deploy -[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just -share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters -can be inherited seamlessly. We do have some users using Karmada in this way. - -There are some things you should consider before doing so: - -- This approach hasn't been fully tested by the Karmada community and no plan for it yet. -- This approach will increase computation costs for the Karmada system. E.g. - After you apply a `resource template`, take `Deployment` as an example, the `kube-controller` will create `Pods` for the - Deployment and update the status persistently, Karmada system will reconcile these changes too, so there might be - conflicts. - -TODO: Link to adoption use case once it gets on board. diff --git a/docs/images/Karmada-logo-horizontal-color.png b/docs/images/Karmada-logo-horizontal-color.png deleted file mode 100644 index 8e57009ce..000000000 Binary files a/docs/images/Karmada-logo-horizontal-color.png and /dev/null differ diff --git a/docs/images/adoptions-vipkid-architecture-en.png b/docs/images/adoptions-vipkid-architecture-en.png deleted file mode 100644 index b9190cfe0..000000000 Binary files a/docs/images/adoptions-vipkid-architecture-en.png and /dev/null differ diff --git a/docs/images/adoptions-vipkid-architecture-zh.png b/docs/images/adoptions-vipkid-architecture-zh.png deleted file mode 100644 index 5cef771b3..000000000 Binary files a/docs/images/adoptions-vipkid-architecture-zh.png and /dev/null differ diff --git a/docs/images/architecture.drawio b/docs/images/architecture.drawio deleted file mode 100644 index 4cf4c1588..000000000 --- a/docs/images/architecture.drawio +++ /dev/null @@ -1 +0,0 @@ 
diff --git a/docs/images/Karmada-logo-horizontal-color.png b/docs/images/Karmada-logo-horizontal-color.png deleted file mode 100644 index 8e57009ce..000000000 Binary files a/docs/images/Karmada-logo-horizontal-color.png and /dev/null differ
diff --git a/docs/images/adoptions-vipkid-architecture-en.png b/docs/images/adoptions-vipkid-architecture-en.png deleted file mode 100644 index b9190cfe0..000000000 Binary files a/docs/images/adoptions-vipkid-architecture-en.png and /dev/null differ
diff --git a/docs/images/adoptions-vipkid-architecture-zh.png b/docs/images/adoptions-vipkid-architecture-zh.png deleted file mode 100644 index 5cef771b3..000000000 Binary files a/docs/images/adoptions-vipkid-architecture-zh.png and /dev/null differ
diff --git a/docs/images/architecture.drawio b/docs/images/architecture.drawio deleted file mode 100644 index 4cf4c1588..000000000 --- a/docs/images/architecture.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/architecture.png b/docs/images/architecture.png deleted file mode 100644 index 988cade43..000000000 Binary files a/docs/images/architecture.png and /dev/null differ
diff --git a/docs/images/argocd-new-app-cluster.png b/docs/images/argocd-new-app-cluster.png deleted file mode 100644 index 9ac48ea54..000000000 Binary files a/docs/images/argocd-new-app-cluster.png and /dev/null differ
diff --git a/docs/images/argocd-new-app-name.png b/docs/images/argocd-new-app-name.png deleted file mode 100644 index da7b14bb1..000000000 Binary files a/docs/images/argocd-new-app-name.png and /dev/null differ
diff --git a/docs/images/argocd-new-app-repo.png b/docs/images/argocd-new-app-repo.png deleted file mode 100644 index ec50fb509..000000000 Binary files a/docs/images/argocd-new-app-repo.png and /dev/null differ
diff --git a/docs/images/argocd-new-app.png b/docs/images/argocd-new-app.png deleted file mode 100644 index 80d8d577b..000000000 Binary files a/docs/images/argocd-new-app.png and /dev/null differ
diff --git a/docs/images/argocd-register-karmada.png b/docs/images/argocd-register-karmada.png deleted file mode 100644 index 396faaaba..000000000 Binary files a/docs/images/argocd-register-karmada.png and /dev/null differ
diff --git a/docs/images/argocd-status-aggregated.png b/docs/images/argocd-status-aggregated.png deleted file mode 100644 index 2f1034e46..000000000 Binary files a/docs/images/argocd-status-aggregated.png and /dev/null differ
diff --git a/docs/images/argocd-status-overview.png b/docs/images/argocd-status-overview.png deleted file mode 100644 index 5f19bdf0a..000000000 Binary files a/docs/images/argocd-status-overview.png and /dev/null differ
diff --git a/docs/images/argocd-status-resourcebinding.png b/docs/images/argocd-status-resourcebinding.png deleted file mode 100644 index 1a9ba39a7..000000000 Binary files a/docs/images/argocd-status-resourcebinding.png and /dev/null differ
diff --git a/docs/images/argocd-sync-apps.png b/docs/images/argocd-sync-apps.png deleted file mode 100644 index 5353be595..000000000 Binary files a/docs/images/argocd-sync-apps.png and /dev/null differ
diff --git a/docs/images/binding-controller-process.drawio b/docs/images/binding-controller-process.drawio deleted file mode 100644 index 26caf962b..000000000 --- a/docs/images/binding-controller-process.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/binding-controller-process.png b/docs/images/binding-controller-process.png deleted file mode 100644 index 4aef69c2b..000000000 Binary files a/docs/images/binding-controller-process.png and /dev/null differ
diff --git a/docs/images/cluster-controller-process.drawio b/docs/images/cluster-controller-process.drawio deleted file mode 100644 index 0c2f03096..000000000 --- a/docs/images/cluster-controller-process.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/cluster-controller-process.png b/docs/images/cluster-controller-process.png deleted file mode 100644 index 3d5ff31aa..000000000 Binary files a/docs/images/cluster-controller-process.png and /dev/null differ
diff --git a/docs/images/cncf-logo.png b/docs/images/cncf-logo.png deleted file mode 100644 index fc25a4660..000000000 Binary files a/docs/images/cncf-logo.png and /dev/null differ
diff --git a/docs/images/demo-3in1.svg b/docs/images/demo-3in1.svg deleted file mode 100644 index f069db2ce..000000000 --- a/docs/images/demo-3in1.svg +++ /dev/null @@ -1 +0,0 @@
-[SVG terminal recording omitted: a recorded demo of propagating the 'mytest' Deployment to member1 and member2 with a PropagationPolicy, editing the policy, and applying an OverridePolicy that rewrites the container command for member1]
diff --git a/docs/images/execution-controller-process.drawio b/docs/images/execution-controller-process.drawio deleted file mode 100644 index 3b902107e..000000000 --- a/docs/images/execution-controller-process.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/execution-controller-process.png b/docs/images/execution-controller-process.png deleted file mode 100644 index ef33cc581..000000000 Binary files a/docs/images/execution-controller-process.png and /dev/null differ
diff --git a/docs/images/istio-on-karmada-different-network.png b/docs/images/istio-on-karmada-different-network.png deleted file mode 100644 index 59a64b400..000000000 Binary files a/docs/images/istio-on-karmada-different-network.png and /dev/null differ
diff --git a/docs/images/istio-on-karmada.png b/docs/images/istio-on-karmada.png deleted file mode 100644 index f4912ec45..000000000 Binary files a/docs/images/istio-on-karmada.png and /dev/null differ
diff --git a/docs/images/karmada-resource-relation.drawio b/docs/images/karmada-resource-relation.drawio deleted file mode 100644 index 19deb7c77..000000000 --- a/docs/images/karmada-resource-relation.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/karmada-resource-relation.png b/docs/images/karmada-resource-relation.png deleted file mode 100644 index 53b99c9ea..000000000 Binary files a/docs/images/karmada-resource-relation.png and /dev/null differ
diff --git a/docs/images/object-association-map.png b/docs/images/object-association-map.png deleted file mode 100644 index 2534e8200..000000000 Binary files a/docs/images/object-association-map.png and /dev/null differ
diff --git a/docs/images/policy-controller-process.drawio b/docs/images/policy-controller-process.drawio deleted file mode 100644 index 30748e60c..000000000 --- a/docs/images/policy-controller-process.drawio +++ /dev/null @@ -1 +0,0 @@
-[base64-encoded draw.io diagram payload omitted]
diff --git a/docs/images/policy-controller-process.png b/docs/images/policy-controller-process.png deleted file mode 100644 index 3d3721d5b..000000000 Binary files a/docs/images/policy-controller-process.png and /dev/null differ
diff --git a/docs/images/sample-nginx.svg b/docs/images/sample-nginx.svg deleted file mode 100644 index 7021b5a63..000000000 --- a/docs/images/sample-nginx.svg +++ /dev/null @@ -1 +0,0 @@
-[SVG terminal recording omitted: a recorded demo of the samples/nginx walkthrough: listing the three member clusters, creating the Deployment and weighted PropagationPolicy, and verifying that one replica runs in member1 and one in member2]
diff --git a/docs/installation/fromsource.md b/docs/installation/fromsource.md deleted file mode 100644 index b15c71663..000000000 --- a/docs/installation/fromsource.md +++ /dev/null @@ -1,69 +0,0 @@

**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Installing Karmada on Cluster from Source](#installing-karmada-on-cluster-from-source)
  - [Select a way to expose karmada-apiserver](#select-a-way-to-expose-karmada-apiserver)
    - [1. expose by `HostNetwork` type](#1-expose-by-hostnetwork-type)
    - [2. expose by service with `LoadBalancer` type](#2-expose-by-service-with-loadbalancer-type)
  - [Install](#install)
# Installing Karmada on Cluster from Source

This document describes how you can use the `hack/remote-up-karmada.sh` script to install Karmada on
your clusters based on the codebase.

## Select a way to expose karmada-apiserver

The `hack/remote-up-karmada.sh` script installs `karmada-apiserver` and provides two ways to expose the server:

### 1. expose by `HostNetwork` type

By default, `hack/remote-up-karmada.sh` exposes `karmada-apiserver` via `HostNetwork`.

No extra operations are needed with this type.

### 2. expose by service with `LoadBalancer` type

If you don't want to use `HostNetwork`, you can ask `hack/remote-up-karmada.sh` to expose `karmada-apiserver`
by a service of `LoadBalancer` type, which *requires your cluster to have a load balancer deployed*.
All you need to do is set an environment variable:
```bash
export LOAD_BALANCER=true
```

## Install
From the root directory of the `karmada` repo, install Karmada by running:
```bash
hack/remote-up-karmada.sh <kubeconfig> <context_name>
```
- `kubeconfig` is the kubeconfig of the cluster you want to install Karmada to
- `context_name` is the name of the context in `kubeconfig`

For example:
```bash
hack/remote-up-karmada.sh $HOME/.kube/config mycluster
```

If everything goes well, at the end of the script output, you will see messages similar to:
```
------------------------------------------------------------------------------------------------------
(Karmada ASCII art banner)
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.

Kubeconfig for karmada in file: /root/.kube/karmada.config, so you can run:
  export KUBECONFIG="/root/.kube/karmada.config"
Or use kubectl with --kubeconfig=/root/.kube/karmada.config
Please use 'kubectl config use-context karmada-apiserver' to switch the cluster of karmada control plane
And use 'kubectl config use-context your-host' for debugging karmada installation
```
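The closing message above already tells you how to point `kubectl` at the new control plane. As a quick smoke test, not part of the original document and assuming the components land in the `karmada-system` namespace as they do with the other installation methods in this guide:

```bash
# Point kubectl at the freshly installed control plane and list its components.
export KUBECONFIG="/root/.kube/karmada.config"
kubectl config use-context karmada-apiserver
kubectl get pods -n karmada-system   # all pods should eventually be Running
```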
diff --git a/docs/installation/install-binary.md b/docs/installation/install-binary.md deleted file mode 100644 index 610721c41..000000000 --- a/docs/installation/install-binary.md +++ /dev/null @@ -1,1069 +0,0 @@
Step-by-step installation of a binary high-availability `karmada` cluster.

# Installing Karmada cluster

## Prerequisites

#### Servers

3 servers are required, e.g.:

```shell
+---------------+-----------------+-----------------+
| HostName      | Host IP         | Public IP       |
+---------------+-----------------+-----------------+
| karmada-01    | 172.31.209.245  | 47.242.88.82    |
+---------------+-----------------+-----------------+
| karmada-02    | 172.31.209.246  |                 |
+---------------+-----------------+-----------------+
| karmada-03    | 172.31.209.247  |                 |
+---------------+-----------------+-----------------+
```

> The public IP is not required. It is used to download some of `karmada`'s dependencies from the public network, and to connect to the `karmada` API server through the public network.

#### Host name resolution

Perform this operation on `karmada-01`, `karmada-02` and `karmada-03`.

```bash
vi /etc/hosts
172.31.209.245 karmada-01
172.31.209.246 karmada-02
172.31.209.247 karmada-03
```

#### Environment

`karmada-01` requires the following environment:

- **Golang**: to compile the karmada binaries
- **GCC**: to compile nginx (skip if using cloud load balancing)

## Compile and download binaries

Perform these operations on `karmada-01`.

#### kubernetes binaries

Download the `kubernetes` binary package.

```bash
wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd /root/kubernetes/server/bin
mv kube-apiserver kube-controller-manager kubectl /usr/local/sbin/
```

#### etcd binaries

Download the `etcd` binary package.

```bash
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz
cd etcd-v3.5.1-linux-amd64/
cp etcdctl etcd /usr/local/sbin/
```

#### karmada binaries

Compile the `karmada` binaries from source.

```bash
git clone https://github.com/karmada-io/karmada
cd karmada
make karmada-aggregated-apiserver
make karmada-controller-manager
make karmada-scheduler
make karmada-webhook
mv karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-webhook /usr/local/sbin/
```

#### nginx binaries

Compile the `nginx` binary from source.

```bash
wget http://nginx.org/download/nginx-1.21.6.tar.gz
tar -zxvf nginx-1.21.6.tar.gz
cd nginx-1.21.6
./configure --with-stream --without-http --prefix=/usr/local/karmada-nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
mv /usr/local/karmada-nginx/sbin/nginx /usr/local/karmada-nginx/sbin/karmada-nginx
```

#### Distribute binaries

Upload the binaries to the `karmada-02` and `karmada-03` servers:

```bash
scp /usr/local/sbin/* karmada-02:/usr/local/sbin/
scp /usr/local/sbin/* karmada-03:/usr/local/sbin/
```
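A quick sanity check, not in the original document, to confirm that every node ended up with the same set of binaries; it uses nothing but `ssh` and `ls`:

```bash
# Verify that each node has the same binaries in /usr/local/sbin.
for host in karmada-01 karmada-02 karmada-03; do
  echo "== ${host} =="
  ssh "${host}" 'ls /usr/local/sbin'
done
```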
## Generate certificates

The certificates are generated with the `openssl` command. Pay attention to the `DNS` names and `IP` addresses when generating them.

Perform these operations on `karmada-01`.

#### Create a temporary directory for certificates

```bash
mkdir certs
cd certs
```

#### Create the root certificate

Valid for 10 years:

```bash
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada" -days 3650 -out ca.crt
```

#### Create etcd certificates

Create the `etcd server` certificate:

```bash
openssl genrsa -out etcd-server.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd" -key etcd-server.key -out etcd-server.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:127.0.0.1,DNS:localhost") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in etcd-server.csr -out etcd-server.crt
```

Create the `etcd peer` certificate:

```bash
openssl genrsa -out etcd-peer.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd-peer" -key etcd-peer.key -out etcd-peer.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:127.0.0.1,DNS:localhost") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in etcd-peer.csr -out etcd-peer.crt
```

Create the `etcd client` certificate:

```bash
openssl genrsa -out karmada-etcd-client.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd-client" -key karmada-etcd-client.key -out karmada-etcd-client.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada-etcd-client.csr -out karmada-etcd-client.crt
```

#### Create karmada certificates

Create the `karmada-apiserver` certificate.

> Notice:
>
> If you need to access the `karmada-apiserver` through a public or external `IP/DNS`, that `IP/DNS` needs to be added to the certificate.

```bash
openssl genrsa -out karmada-apiserver.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada" -key karmada-apiserver.key -out karmada-apiserver.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster,DNS:kubernetes.default.svc.cluster.local,IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:10.254.0.1,IP:47.242.88.82") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada-apiserver.csr -out karmada-apiserver.crt
```
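The notice above says that external IPs or DNS names must be present in the certificate's `subjectAltName`. As a concrete illustration (not part of the original document), this re-signs the same CSR with one extra placeholder domain, `karmada.example.com`, appended to the SAN list:

```bash
# Hypothetical: same signing step as above, with one extra public DNS SAN.
# 'karmada.example.com' is a placeholder; substitute your own domain.
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=DNS:kubernetes,DNS:karmada.example.com,IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:10.254.0.1,IP:47.242.88.82") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada-apiserver.csr -out karmada-apiserver.crt
```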
Create the `karmada admin` certificate:

```bash
openssl genrsa -out admin.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=system:masters/OU=System/CN=admin" -key admin.key -out admin.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in admin.csr -out admin.crt
```

Create the `kube-controller-manager` certificate:

```bash
openssl genrsa -out kube-controller-manager.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=system:kube-controller-manager" -key kube-controller-manager.key -out kube-controller-manager.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in kube-controller-manager.csr -out kube-controller-manager.crt
```

Create the `karmada components` certificate:

```bash
openssl genrsa -out karmada.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=system:karmada" -key karmada.key -out karmada.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=DNS:karmada-01,DNS:karmada-02,DNS:karmada-03,DNS:localhost,IP:127.0.0.1,IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:10.254.0.1,IP:47.242.88.82") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada.csr -out karmada.crt
```

Create the `front-proxy-client` certificate:

```bash
openssl genrsa -out front-proxy-client.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=front-proxy-client" -key front-proxy-client.key -out front-proxy-client.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in front-proxy-client.csr -out front-proxy-client.crt
```

Create the `karmada-apiserver` SA key pair:

```bash
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
```

#### Check the certificates

You can view the configuration of a certificate; take `etcd-server` as an example:

```bash
openssl x509 -noout -text -in etcd-server.crt
```
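Beyond dumping the whole certificate, it is worth checking the SAN entries specifically, since that is where mistakes usually hide. This uses only standard `openssl` and `grep` (not from the original document):

```bash
# Print only the Subject Alternative Name block of the API server certificate.
openssl x509 -noout -text -in karmada-apiserver.crt | grep -A1 'Subject Alternative Name'
```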
#### Create the karmada configuration directory

Copy the certificates to the `/etc/karmada/pki` directory:

```bash
mkdir -p /etc/karmada/pki
cp karmada.key tls.key
cp karmada.crt tls.crt
cp *.key *.crt sa.pub /etc/karmada/pki
```

## Create the karmada kubeconfig files and etcd encryption config file

Perform these operations on `karmada-01`.

Define the karmada apiserver address. `172.31.209.245:5443` is the address on which `nginx` proxies `karmada-apiserver`; we'll set that up later.

```bash
export KARMADA_APISERVER="https://172.31.209.245:5443"
cd /etc/karmada/
```

#### Create the kubectl kubeconfig file

This file is kept at `$HOME/.kube/config` by default.

```bash
kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER}

kubectl config set-credentials admin \
  --client-certificate=/etc/karmada/pki/admin.crt \
  --embed-certs=true \
  --client-key=/etc/karmada/pki/admin.key

kubectl config set-context karmada \
  --cluster=karmada \
  --user=admin

kubectl config use-context karmada
```

#### Create the kube-controller-manager kubeconfig file

```bash
kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/karmada/pki/kube-controller-manager.crt \
  --client-key=/etc/karmada/pki/kube-controller-manager.key \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=karmada \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
```

#### Create the karmada kubeconfig file

The karmada components connect to the karmada apiserver through this file.

```bash
kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER} \
  --kubeconfig=karmada.kubeconfig

kubectl config set-credentials system:karmada \
  --client-certificate=/etc/karmada/pki/karmada.crt \
  --client-key=/etc/karmada/pki/karmada.key \
  --embed-certs=true \
  --kubeconfig=karmada.kubeconfig

kubectl config set-context system:karmada \
  --cluster=karmada \
  --user=system:karmada \
  --kubeconfig=karmada.kubeconfig

kubectl config use-context system:karmada --kubeconfig=karmada.kubeconfig
```

#### Create the etcd encryption config file

```bash
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/karmada/encryption-config.yaml <<EOF
# NOTE: the body of this file did not survive in this diff; the standard
# Kubernetes EncryptionConfiguration shape is reconstructed here as an assumption.
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```

> The parameters that `karmada-02` and `karmada-03` need to change are:
>
> - `--name`
> - `--initial-advertise-peer-urls`
> - `--listen-peer-urls`
> - `--listen-client-urls`
> - `--advertise-client-urls`
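The etcd service definition that the note above refers to was also cut from this diff. For orientation, here is a minimal sketch of those flags filled in with `karmada-01`'s values. All flags below are real etcd flags; the data directory, certificate paths, and ports follow the conventions used elsewhere in this guide, but the real unit file may differ:

```bash
# Sketch of the etcd flags for karmada-01; adjust the five flags listed in the
# note above for karmada-02 and karmada-03. Paths/ports are assumptions.
etcd \
  --name karmada-01 \
  --data-dir /var/lib/etcd \
  --cert-file /etc/karmada/pki/etcd-server.crt \
  --key-file /etc/karmada/pki/etcd-server.key \
  --trusted-ca-file /etc/karmada/pki/ca.crt \
  --peer-cert-file /etc/karmada/pki/etcd-peer.crt \
  --peer-key-file /etc/karmada/pki/etcd-peer.key \
  --peer-trusted-ca-file /etc/karmada/pki/ca.crt \
  --initial-advertise-peer-urls https://172.31.209.245:2380 \
  --listen-peer-urls https://172.31.209.245:2380 \
  --listen-client-urls https://172.31.209.245:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://172.31.209.245:2379 \
  --initial-cluster karmada-01=https://172.31.209.245:2380,karmada-02=https://172.31.209.246:2380,karmada-03=https://172.31.209.247:2380 \
  --initial-cluster-state new
```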
#### Start the etcd cluster

All 3 servers have to perform the following steps.

Create the etcd storage directory:

```bash
mkdir /var/lib/etcd/
chmod 700 /var/lib/etcd
```

Start etcd:

```bash
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```

#### Check the etcd cluster status

```bash
etcdctl --cacert=/etc/karmada/pki/ca.crt \
  --cert=/etc/karmada/pki/etcd-server.crt \
  --key=/etc/karmada/pki/etcd-server.key \
  --endpoints 172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379 endpoint status --write-out="table"

+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.209.245:2379 | 689151f8cbf4ee95 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 172.31.209.246:2379 | 5db4dfb6ecc14de7 |   3.5.1 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| 172.31.209.247:2379 | 7e59eef3c816aa57 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```

## Install Karmada APIServer

#### Configure nginx

Perform this operation on `karmada-01`.

Configure load balancing for `karmada-apiserver`; a reconstructed configuration sketch follows.
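The `nginx.conf` heredoc was cut off in this diff, so what follows is a reconstruction sketch of a stream-mode TCP proxy from `172.31.209.245:5443` to the three API server instances. The upstream port `6443` is an assumption (the original value did not survive); the stream module, the three backends, and the `5443` listen port follow from the surrounding text:

```bash
# Sketch: nginx stream proxy in front of the three karmada-apiserver
# instances. The upstream port 6443 is assumed, not from the original.
cat > /usr/local/karmada-nginx/conf/nginx.conf <<'EOF'
worker_processes 2;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
```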
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Installing Karmada](#installing-karmada)
  - [Prerequisites](#prerequisites)
    - [Karmada kubectl plugin](#karmada-kubectl-plugin)
  - [Install Karmada by Karmada command-line tool](#install-karmada-by-karmada-command-line-tool)
    - [Install Karmada on your own cluster](#install-karmada-on-your-own-cluster)
      - [Offline installation](#offline-installation)
      - [Deploy HA](#deploy-ha)
    - [Install Karmada in Kind cluster](#install-karmada-in-kind-cluster)
  - [Install Karmada by Helm Chart Deployment](#install-karmada-by-helm-chart-deployment)
  - [Install Karmada by binary](#install-karmada-by-binary)
  - [Install Karmada from source](#install-karmada-from-source)
  - [Install Karmada for development environment](#install-karmada-for-development-environment)

# Installing Karmada

## Prerequisites

### Karmada kubectl plugin
`kubectl-karmada` is the Karmada command-line tool that lets you control the Karmada control plane; it ships as
a [kubectl plugin][1].
For installation instructions, see [installing kubectl-karmada](./install-kubectl-karmada.md).

## Install Karmada by Karmada command-line tool

### Install Karmada on your own cluster

Assume you have put your cluster's `kubeconfig` file at `$HOME/.kube/config`, or have specified its path
with the `KUBECONFIG` environment variable. Otherwise, you should pass the configuration file to the following
commands with the `--kubeconfig` flag.

> Note: The `init` command is available from v1.0.

Run the following command to install:
```bash
kubectl karmada init
```
It might take about 5 minutes, and if everything goes well, you will see output similar to:
```
I1216 07:37:45.862959    4256 cert.go:230] Generate ca certificate success.
I1216 07:37:46.000798    4256 cert.go:230] Generate etcd-server certificate success.
...
...
------------------------------------------------------------------------------------------------------
(Karmada ASCII art banner)
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.

Register Kubernetes cluster to Karmada control plane.

Register cluster with 'Push' mode

Step 1: Use karmadactl join to register the cluster to the Karmada control plane. --cluster-kubeconfig is the member cluster's kubeconfig.
(In karmada)~# MEMBER_CLUSTER_NAME=`cat ~/.kube/config | grep current-context | sed 's/: /\n/g'| sed '1d'`
(In karmada)~# karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config

Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters


Register cluster with 'Pull' mode

Step 1: Send the karmada kubeconfig and karmada-agent.yaml to the member kubernetes cluster
(In karmada)~# scp /etc/karmada/karmada-apiserver.config /etc/karmada/karmada-agent.yaml {member kubernetes}:~

Step 2: Create the karmada kubeconfig secret
  Notice:
    Cross-network: you need to change the config server address.
(In member kubernetes)~# kubectl create ns karmada-system
(In member kubernetes)~# kubectl create secret generic karmada-kubeconfig --from-file=karmada-kubeconfig=/root/karmada-apiserver.config -n karmada-system

Step 3: Create the karmada agent
(In member kubernetes)~# MEMBER_CLUSTER_NAME="demo"
(In member kubernetes)~# sed -i "s/{member_cluster_name}/${MEMBER_CLUSTER_NAME}/g" karmada-agent.yaml
(In member kubernetes)~# kubectl apply -f karmada-agent.yaml

Step 4: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters

```

The components of Karmada are installed in the `karmada-system` namespace by default; you can list them with:
```bash
kubectl get deployments -n karmada-system
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
karmada-aggregated-apiserver   1/1     1            1           102s
karmada-apiserver              1/1     1            1           2m34s
karmada-controller-manager     1/1     1            1           116s
karmada-scheduler              1/1     1            1           119s
karmada-webhook                1/1     1            1           113s
kube-controller-manager        1/1     1            1           2m3s
```
`karmada-etcd` is installed as a `StatefulSet`; get it with:
```bash
kubectl get statefulsets -n karmada-system
NAME   READY   AGE
etcd   1/1     28m
```

The configuration file of Karmada will be created at `/etc/karmada/karmada-apiserver.config` by default.
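As a quick smoke test, not from the original document but using only the kubeconfig path and command that the output above already mentions:

```bash
# Verify you can reach the Karmada control plane; the cluster list will be
# empty until you join or register a member cluster.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
```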
-e.g.:
-```bash
-kubectl karmada init --crds $HOME/crds.tar.gz
-```
-
-The images of the Karmada components can also be specified. Take `karmada-controller-manager` as an example:
-```bash
-kubectl karmada init --karmada-controller-manager-image=example.registry.com/library/karmada-controller-manager:1.0
-```
-
-#### Deploy HA
-Use the `--karmada-apiserver-replicas` and `--etcd-replicas` flags to specify the number of replicas (defaults to `1`).
-```bash
-kubectl karmada init --karmada-apiserver-replicas 3 --etcd-replicas 3
-```
-
-### Install Karmada in Kind cluster
-
-> kind is a tool for running local Kubernetes clusters using Docker container "nodes".
-> It was primarily designed for testing Kubernetes itself, not for production.
-
-Create a cluster named `host` by `hack/create-cluster.sh`:
-```bash
-hack/create-cluster.sh host $HOME/.kube/host.config
-```
-
-Install Karmada v1.2.0 with the command `kubectl karmada init`:
-```bash
-kubectl karmada init --crds https://github.com/karmada-io/karmada/releases/download/v1.2.0/crds.tar.gz --kubeconfig=$HOME/.kube/host.config
-```
-
-Check the installed components:
-```bash
-kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
-NAME                                           READY   STATUS    RESTARTS   AGE
-etcd-0                                         1/1     Running   0          2m55s
-karmada-aggregated-apiserver-84b45bf9b-n5gnk   1/1     Running   0          109s
-karmada-apiserver-6dc4cf6964-cz4jh             1/1     Running   0          2m40s
-karmada-controller-manager-556cf896bc-79sxz    1/1     Running   0          2m3s
-karmada-scheduler-7b9d8b5764-6n48j             1/1     Running   0          2m6s
-karmada-webhook-7cf7986866-m75jw               1/1     Running   0          2m
-kube-controller-manager-85c789dcfc-k89f8       1/1     Running   0          2m10s
-```
-
-## Install Karmada by Helm Chart Deployment
-Please refer to [installing by Helm](https://github.com/karmada-io/karmada/tree/master/charts/karmada).
-
-## Install Karmada by binary
-Please refer to [installing by binary](https://github.com/karmada-io/karmada/blob/master/docs/installation/binary-install.md).
-
-## Install Karmada from source
-
-Please refer to [installing from source](./fromsource.md).
-
-[1]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
-
-## Install Karmada for development environment
-
-If you want to try Karmada, we recommend building a development environment with
-`hack/local-up-karmada.sh`, which will do the following tasks for you:
-- Start a Kubernetes cluster by [kind](https://kind.sigs.k8s.io/) to run the Karmada control plane, aka the `host cluster`.
-- Build Karmada control plane components based on the current codebase.
-- Deploy Karmada control plane components on the `host cluster`.
-- Create member clusters and join them to Karmada.
-
-**1. Clone the Karmada repo to your machine:**
-```
-git clone https://github.com/karmada-io/karmada
-```
-or use your fork by replacing `<GitHub ID>` with your GitHub ID:
-```
-git clone https://github.com/<GitHub ID>/karmada
-```
-
-**2. Change to the karmada directory:**
-```
-cd karmada
-```
-
-**3. Deploy and run the Karmada control plane:**
-
-Run the following script:
-
-```
-hack/local-up-karmada.sh
-```
-This script will do the following tasks for you:
-- Start a Kubernetes cluster to run the Karmada control plane, aka the `host cluster`.
-- Build Karmada control plane components based on the current codebase.
-- Deploy Karmada control plane components on the `host cluster`.
-- Create member clusters and join them to Karmada.
-
-If everything goes well, at the end of the script output you will see messages similar to the following:
-```
-Local Karmada is running.
-
-To start using your Karmada environment, run:
-  export KUBECONFIG="$HOME/.kube/karmada.config"
-Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
-
-To manage your member clusters, run:
-  export KUBECONFIG="$HOME/.kube/members.config"
-Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
-```
-
-**4. Check the registered clusters**
-
-```
-kubectl get clusters --kubeconfig=$HOME/.kube/karmada.config
-```
-
-You will get output similar to the following:
-```
-NAME      VERSION   MODE   READY   AGE
-member1   v1.23.4   Push   True    7m38s
-member2   v1.23.4   Push   True    7m35s
-member3   v1.23.4   Pull   True    7m27s
-```
-
-Three clusters named `member1`, `member2` and `member3` have registered, in either `Push` or `Pull` mode.
diff --git a/docs/multi-cluster-ingress.md b/docs/multi-cluster-ingress.md
deleted file mode 100644
index 89149b563..000000000
--- a/docs/multi-cluster-ingress.md
+++ /dev/null
@@ -1,289 +0,0 @@
-# Multi-cluster Ingress
-
-Users can use the [MultiClusterIngress API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/networking/v1alpha1/ingress_types.go) provided by Karmada to import external traffic to services in the member clusters.
-
-## Prerequisites
-
-### Karmada has been installed
-
-We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
-
-### Cluster Network
-
-Currently, we need to use the [Multi-cluster Service](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed) feature to import external traffic.
-
-So we need to ensure that the container networks between the **host cluster** and member clusters are connected. The **host cluster** indicates the cluster where the **Karmada Control Plane** is deployed.
-
-- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks between the **host cluster**, `member1` and `member2` are connected.
-- You can use `Submariner` or other related open source projects to connect networks between clusters.
-
-## Example
-
-### Step 1: Deploy ingress-nginx on the host cluster
-
-We use [multi-cluster-ingress-nginx](https://github.com/karmada-io/multi-cluster-ingress-nginx) as the demo. We've made some changes based on the latest version (controller-v1.1.1) of [ingress-nginx](https://github.com/kubernetes/ingress-nginx).
-
-#### Download code
-
-```shell
-# for HTTPS
-git clone https://github.com/karmada-io/multi-cluster-ingress-nginx.git
-# for SSH
-git clone git@github.com:karmada-io/multi-cluster-ingress-nginx.git
-```
-
-#### Build and deploy ingress-nginx
-
-Use the existing `karmada-host` kind cluster to build and deploy the ingress controller.
- -```shell -export KUBECONFIG=~/.kube/karmada.config -export KIND_CLUSTER_NAME=karmada-host -kubectl config use-context karmada-host -cd multi-cluster-ingress-nginx -make dev-env -``` - -#### Apply kubeconfig secret - -Create a secret that contains the `karmada-apiserver` authentication credential: - -```shell -# get the 'karmada-apiserver' kubeconfig information and direct it to file /tmp/kubeconfig.yaml -kubectl -n karmada-system get secret kubeconfig --template={{.data.kubeconfig}} | base64 -d > /tmp/kubeconfig.yaml -# create secret with name 'kubeconfig' from file /tmp/kubeconfig.yaml -kubectl -n ingress-nginx create secret generic kubeconfig --from-file=kubeconfig=/tmp/kubeconfig.yaml -``` - -#### Edit ingress-nginx-controller deployment - -We want `nginx-ingress-controller` to access `karmada-apiserver` to listen to changes in resources(such as multiclusteringress, endpointslices, and service). Therefore, we need to mount the authentication credential of `karmada-apiserver` to the `nginx-ingress-controller`. - -```shell -kubectl -n ingress-nginx edit deployment ingress-nginx-controller -``` - -Edit as follows: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - ... -spec: - ... - template: - spec: - containers: - - args: - - /nginx-ingress-controller - - --karmada-kubeconfig=/etc/kubeconfig # new line - ... - volumeMounts: - ... - - mountPath: /etc/kubeconfig # new line - name: kubeconfig # new line - subPath: kubeconfig # new line - volumes: - ... - - name: kubeconfig # new line - secret: # new line - secretName: kubeconfig # new line -``` - -### Step 2: Use the MCS feature to discovery service - -#### Install ServiceExport and ServiceImport CRDs - -Refer to [here](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed). - -#### Deploy web on member1 cluster - -deploy.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: web -spec: - replicas: 1 - selector: - matchLabels: - app: web - template: - metadata: - labels: - app: web - spec: - containers: - - name: hello-app - image: gcr.io/google-samples/hello-app:1.0 - ports: - - containerPort: 8080 - protocol: TCP ---- -apiVersion: v1 -kind: Service -metadata: - name: web -spec: - ports: - - port: 81 - targetPort: 8080 - selector: - app: web ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: mcs-workload -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: web - - apiVersion: v1 - kind: Service - name: web - placement: - clusterAffinity: - clusterNames: - - member1 -``` - -
- -```shell -kubectl --context karmada-apiserver apply -f deploy.yaml -``` - -#### Export web service from member1 cluster - -service_export.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: multicluster.x-k8s.io/v1alpha1 -kind: ServiceExport -metadata: - name: web ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: web-export-policy -spec: - resourceSelectors: - - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceExport - name: web - placement: - clusterAffinity: - clusterNames: - - member1 -``` - -
- -```shell -kubectl --context karmada-apiserver apply -f service_export.yaml -``` - -#### Import web service to member2 cluster - -service_import.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: multicluster.x-k8s.io/v1alpha1 -kind: ServiceImport -metadata: - name: web -spec: - type: ClusterSetIP - ports: - - port: 81 - protocol: TCP ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: web-import-policy -spec: - resourceSelectors: - - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceImport - name: web - placement: - clusterAffinity: - clusterNames: - - member2 -``` - -
- -```shell -kubectl --context karmada-apiserver apply -f service_import.yaml -``` - -### Step 3: Deploy multiclusteringress on karmada-controlplane - -mci-web.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: networking.karmada.io/v1alpha1 -kind: MultiClusterIngress -metadata: - name: demo-localhost - namespace: default -spec: - ingressClassName: nginx - rules: - - host: demo.localdev.me - http: - paths: - - backend: - service: - name: web - port: - number: 81 - path: /web - pathType: Prefix -``` - -
- -```shell -kubectl --context karmada-apiserver apply -f mci-web.yaml -``` - -### Step 4: Local testing - -Let's forward a local port to the ingress controller: - -```shell -kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 -``` - -At this point, if you access http://demo.localdev.me:8080/web/, you should see an HTML page telling you: - -```html -Hello, world! -Version: 1.0.0 -Hostname: web-xxx-xxx -``` \ No newline at end of file diff --git a/docs/multi-cluster-service.md b/docs/multi-cluster-service.md deleted file mode 100644 index bec6a77b1..000000000 --- a/docs/multi-cluster-service.md +++ /dev/null @@ -1,202 +0,0 @@ -# Multi-cluster Service Discovery - -Users are able to **export** and **import** services between clusters with [Multi-cluster Service APIs](https://github.com/kubernetes-sigs/mcs-api). - -## Prerequisites - -### Karmada has been installed - -We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases. - -### Member Cluster Network - -Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected. - -- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected. -- You can use `Submariner` or other related open source projects to connected networks between member clusters. - -### The ServiceExport and ServiceImport CRDs have been installed - -We need to install ServiceExport and ServiceImport in the member clusters. - -After ServiceExport and ServiceImport have been installed on the **Karmada Control Plane**, we can create `ClusterPropagationPolicy` to propagate those two CRDs to the member clusters. - -```yaml -# propagate ServiceExport CRD -apiVersion: policy.karmada.io/v1alpha1 -kind: ClusterPropagationPolicy -metadata: - name: serviceexport-policy -spec: - resourceSelectors: - - apiVersion: apiextensions.k8s.io/v1 - kind: CustomResourceDefinition - name: serviceexports.multicluster.x-k8s.io - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 ---- -# propagate ServiceImport CRD -apiVersion: policy.karmada.io/v1alpha1 -kind: ClusterPropagationPolicy -metadata: - name: serviceimport-policy -spec: - resourceSelectors: - - apiVersion: apiextensions.k8s.io/v1 - kind: CustomResourceDefinition - name: serviceimports.multicluster.x-k8s.io - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 -``` -## Example - -### Step 1: Deploy service on the `member1` cluster - -We need to deploy service on the `member1` cluster for discovery. 
- -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: serve -spec: - replicas: 1 - selector: - matchLabels: - app: serve - template: - metadata: - labels: - app: serve - spec: - containers: - - name: serve - image: jeremyot/serve:0a40de8 - args: - - "--message='hello from cluster member1 (Node: {{env \"NODE_NAME\"}} Pod: {{env \"POD_NAME\"}} Address: {{addr}})'" - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name ---- -apiVersion: v1 -kind: Service -metadata: - name: serve -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - app: serve ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: mcs-workload -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: serve - - apiVersion: v1 - kind: Service - name: serve - placement: - clusterAffinity: - clusterNames: - - member1 -``` - -### Step 2: Export service to the `member2` cluster - -- Create a `ServiceExport` object on **Karmada Control Plane**, and then create a `PropagationPolicy` to propagate the `ServiceExport` object to the `member1` cluster. - - ```yaml - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceExport - metadata: - name: serve - --- - apiVersion: policy.karmada.io/v1alpha1 - kind: PropagationPolicy - metadata: - name: serve-export-policy - spec: - resourceSelectors: - - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceExport - name: serve - placement: - clusterAffinity: - clusterNames: - - member1 - ``` - -- Create a `ServiceImport` object on **Karmada Control Plane**, and then create a `PropagationPolicy` to propagate the `ServiceImport` object to the `member2` cluster. - - ```yaml - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceImport - metadata: - name: serve - spec: - type: ClusterSetIP - ports: - - port: 80 - protocol: TCP - --- - apiVersion: policy.karmada.io/v1alpha1 - kind: PropagationPolicy - metadata: - name: serve-import-policy - spec: - resourceSelectors: - - apiVersion: multicluster.x-k8s.io/v1alpha1 - kind: ServiceImport - name: serve - placement: - clusterAffinity: - clusterNames: - - member2 - ``` - -### Step 3: Access the service from `member2` cluster - -After the above steps, we can find the **derived service** which has the prefix `derived-` on the `member2` cluster. Then, we can access the **derived service** to access the service on the `member1` cluster. -```shell -# get the services in cluster member2, and we can find the service with the name 'derived-serve' -$ kubectl --kubeconfig ~/.kube/members.config --context member2 get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -derived-serve ClusterIP 10.13.205.2 80/TCP 81s -kubernetes ClusterIP 10.13.0.1 443/TCP 15m -``` - -Start a pod `request` on the `member2` cluster to access the ClusterIP of **derived service**: - -```shell -$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration={duration-time} --address={ClusterIP of derived service} -``` - -Eg, if we continue to access service for 3s, ClusterIP is `10.13.205.2`: - -```shell -# access the service of derived-serve, and the pod in member1 cluster returns a response -$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration=3s --address=10.13.205.2 -If you don't see a command prompt, try pressing enter. 
-2022/07/24 15:13:08 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)' -2022/07/24 15:13:09 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)' -pod "request" deleted -``` - diff --git a/docs/object-association-map.drawio b/docs/object-association-map.drawio deleted file mode 100644 index 2571348f9..000000000 --- a/docs/object-association-map.drawio +++ /dev/null @@ -1 +0,0 @@ -7V1de6O2Ev41vowefSJxucm2pxfbbtrtOW2v+mAj22wwuJh89dcfCYRBBtvYARsndi5ixIfE6J1XM5qRPCJ3i5f/JN5y/nPsy3CEof8yIp9HGCNIqPqnS15NCYdOXjJLAj8vg2XBt+BfWdxqSh8DX65MWV6UxnGYBku7cBJHkZykVpmXJPGzfdk0Dn2rYOnNZK3g28QL66V/BH46z0sF5mX5TzKYzYuakePmZxZecbF5k9Xc8+PnShH5YUTukjhO82+LlzsZaunZcvlxy9l1wxIZpW1uCP7G4S83X/43/R798uene8b//nV8g7Bp7pMXPppXNs1NXwsZzJL4cTkit0rGqRdEMlHFUB3XW2Aa9SSTVL409Y83Lh5aikCBR8YLmSav6rriLuICITjLbzbg4UaSz2VHEOECygRZf/Ir5tUuwRgISlWXMeFg6Ajzxp5BxmxdeSk89cXI7xBZCtFellMlS4N2xPqVpUMJQJuyRJQByJEgLhSQMpfzmmwRpEAwipzi4zYIF7oAuYxCjF0hGGS8N+HSBuE6YWpkaUnZ+ecxLk7crDIpf1IXILx8KU+qbzP9P/TGmrTyR6m25U/Lz9W6T/Ve5EvfKMDzPEjlt6U30WefFQOqsnm6CHWnFk++9SYPs+y2uziMtepEcSRtBAh9GIRhccUIE9+TYjpR5as0iR9k5YwzEXI8bYTMHlhuImm79lHgcrfseMc5ChyYgwIVlCOFw96wwXrChhdFceqlQRwNDyBMCp82AUTgMVE9dg6AGHKhELhVwh02fHAHzOI0oad4yjgpSn6Tq/gxmcjbIPKDaKbbXbvmLnxcpTKpXdoD/rwwmEXq+0QBRI/qmxibMv3XSEJ5p+eCqJSbQZjcaiwFyoz6ZKpYBL6v23jAQLcVfcy10cYw2DmQcdQ0cqm7BCfMITi7mfdlFvAOyKkRXgVC1AW/y8Uy9FLZC0r0k370FkGo5f2TDJ+k7tk6QdXQtIfeOsNa95DSDOYgLpADXW3etIOUywDLEAgRoi7FvY13TTZ7F4i6y5jnPomVW5ONevdxGExe+0HVkLpfCOBWvQhioYFwAei6VxkuzI0KHrCygzmsPIH3MZ79dyWTr+Pv2s/FMLdbrd73g6fjbZ0urWl9wRbMrIuzxnZZ9bLE7TLDLXjwkoXneyCIVX2Rt5CrDI61IbfFnTt1IAyih7wn2rahVY14HxHkOLWZo6KPq7m3lBnhptJWvvGalL8+pqr10pT7XvLwVbUvSDX0IYBaI2213kL6oZxqeawqLm0TAXTh5RxO7pgAUjE2uc31DgI5yxsntq7cxAWO9akrN6OgwvyKIw5WbnVY0e8Poe8DaP8kt7bfwgCbbLLtnc/KFOLKFC2YghPbr70YpmhnONKmCcouDMcPaDIypgYVhO1Z1RaWIuLKUmRDtw6zrt1CuIqWojc9qGHMyJ/Z9ZjRVPnhhK+ZvQOxDWk4yHnpbMMBwu9iPKAcA4eZWSet8wMaEN43GRzj6r2hja2V7mz6zAerz2RI+rzbE6QuBJVIJnU+kD5XIl2WLvYwAjfh3Fsuw0D6N8tQgWQhM4vxvQ6+/PIH32Nijx1PyrKm0bbFLKzOe6BEZyaZCOXp3C/el/v1R5w8nCZAmDHPfbwKcqYoTxTxvi8bF6zjflsDgsPy6uw4kKMMQAIrMZ0a5BoDQRhBgEv+5wV0r87cx3DmTLMSE6Ad5yH8owYLcd4p/nfiqTnCAQ1Zah/BsHsfit7Vc/aqpOU29vMuLWhhV/W7qaftC8JOyIleyentbueAyamdXeviBrt2w+yUkf9Jp+NrmzH0VqtgMjoqyUi+BOmfGTaYOfrLAEN///xSPXgtDiL14pWb9OFfpt7soLwtO7Luu5dJoCS3P+ndJGNtF1NhKaZeMpPpjguJ6TvpW2sT3pa81AQE88T7OMi8jwLLDmnMzSwcLyyASetkDHG8YdvmYjCPLLFWq4UgApxq6j4/pJZchrVaMkSvBXR8hpXblLN3UkSjoxANB4rogrCuiD4bolus8zknoo8HYRHU3g9CfgXheUGIIDo3CndbCidAYTEXdEXh+VDYYp1e9yjEFze8FwnsezEtBo5pAopHayfL7QnTO2vpG9PFeukKprctJNpEuuq0L/nczs6p/60T+IlUTrdZjKoBt9Qvmb02ux2xzw0++1blqWN1jwa/eXHsDQSEFqGkRIZeGjxJ64EtcFhcEk+nK9lT/55l5OTvmLPolbPOzVmkAdNdJHUfuXbytEwouiBCtG0u8hgipNBB5yPCayDkwFQ0e8o+iFZp8jiprlW/tFgnQtd4Qqt4gusASqvRTovVOWNW9oExFKo56C4FHFfX7+eXWHkyzWv4r8HOywl2KnswH4HCYCLPEurc04K3BDo75Z3Lz8g7Ce8IRSyoEsi0rUmB1Ok1XTBW5x2MOEBunbguPsliINmzRiduYtWlSbZv2sVaAuddr3gxObK7MwuE0reGNWUHZhZQFyDC3TJn9vClaR9XJS9fFS2/8aqK21XRgUBAXKaf92GU96KKbbeKxE3TNd5CSzwar5aZGKHORQ9jz6/NrhyVeG46/L2nnSMuQHWlA0EWeFwiQGXFQh07FLmgMpfocFrHjkCgiLdRDSJ8MHSu3tspvbdnpUhncdqaKx6Kr4avCfGtKIVAF7gQwTUt2OORyxDg2dqUfAe8BtPwXM5ay+GItImI+TP5zRyabefsno2TdB7P4kiNIHG8NIj4LtP01fS395jGo6OiaFv7cG8Mi7fNJUH4VIH3t+1X17jDZu/By4uLXQ6w35vDisrIBbu2f9XWCC7JhdnPbxu83FeLcACBxrElDhcb1fQdvSROkzncQfRy73ZzFa1Rg0Fq64k9qDTstWmK2kc1m4Y0m0UP2se4C3OZMiB43VUqduGhcM9EJFNjYxVYToO9fLLdq/kVSAMFEkUMkOr634EDifcIpC1bZ14xtg9jDnX09tTrD71orupg6+ErxE4LsQtjMVHPZO16P6dyruFyoZNNRHSBHSxAZdKa2BsaOsRVyKrtljJY7NTnBK7Y6Q87FGPgumTjp4WEywEq55
mLeZrBYqYe1rj28ahtovJQCEIdlj+llvv45S/SkR/+Dw== \ No newline at end of file diff --git a/docs/object-association-map.md b/docs/object-association-map.md deleted file mode 100644 index 53b24b4c9..000000000 --- a/docs/object-association-map.md +++ /dev/null @@ -1,29 +0,0 @@ -# Karmada Object association mapping - -## Review - -![](./images/object-association-map.png) - -This picture is made by draw.io. If you need to update the **Review**, you can use the file [object-association-map.drawio](./object-association-map.drawio). - -## Label/Annotation information table - -> Note: -> These labels and annotations are managed by the Karmada. Please do not modify them. - -| Object | Tag | KeyName | Usage | -| ---------------------- | ---------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -| ResourceTemplate | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | The labels can be used to determine whether the current resource template is claimed by PropagationPolicy. | -| | label | clusterpropagationpolicy.karmada.io/name | The label can be used to determine whether the current resource template is claimed by ClusterPropagationPolicy. | -| ResourceBinding | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | Through those two labels, logic can find the associated ResourceBinding from the PropagationPolicy or trace it back from the ResourceBinding to the corresponding PropagationPolicy. | -| | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ResourceBinding from the ClusterPropagationPolicy or trace it back from the ResourceBinding to the corresponding ClusterPropagationPolicy. | -| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. | -| ClusterResourceBinding | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ClusterResourceBinding from the ClusterPropagationPolicy or trace it back from the ClusterResourceBinding to the corresponding ClusterPropagationPolicy. | -| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. | -| Work | label | resourcebinding.karmada.io/namespace resourcebinding.karmada.io/name | Through those two labels, logic can find the associated WorkList from the ResourceBinding or trace it back from the Work to the corresponding ResourceBinding. | -| | label | clusterresourcebinding.karmada.io/name | Through the label, logic can find the associated WorkList from the ClusterResourceBinding or trace it back from the Work to the corresponding ClusterResourceBinding. | -| | label | propagation.karmada.io/instruction | Valid values includes: - suppressed: indicates that the resource should not be propagated. | -| | label | endpointslice.karmada.io/namespace endpointslice.karmada.io/name | Those labels are added to work object, which is report by member cluster, to specify service associated with EndpointSlice. | -| | annotation | policy.karmada.io/applied-overrides | Record override items, the overrides items should be sorted alphabetically in ascending order by OverridePolicy's name. 
|
-|                        | annotation | policy.karmada.io/applied-cluster-overrides | Record override items; the override items should be sorted alphabetically in ascending order by ClusterOverridePolicy's name. |
-| Workload               | label      | work.karmada.io/namespace work.karmada.io/name | The labels can be used to determine whether the current workload is managed by Karmada. Through those labels, logic can find the associated Work or trace it back from the Work to the corresponding Workload. |
diff --git a/docs/reserved-namespaces.md b/docs/reserved-namespaces.md
deleted file mode 100644
index e1d89ce97..000000000
--- a/docs/reserved-namespaces.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# Reserved Namespaces
-
-> Note: Avoid creating namespaces with the prefixes `kube-` and `karmada-`, since they are reserved for Kubernetes
-> and Karmada system namespaces.
-> For now, resources under the following namespaces will not be propagated:
-
-- namespaces prefixed with `kube-` (including but not limited to `kube-system`, `kube-public`, `kube-node-lease`)
-- karmada-system
-- karmada-cluster
-- karmada-es-*
\ No newline at end of file
diff --git a/docs/scheduler-estimator.md b/docs/scheduler-estimator.md
deleted file mode 100644
index 8aa6829f4..000000000
--- a/docs/scheduler-estimator.md
+++ /dev/null
@@ -1,151 +0,0 @@
-# Cluster Accurate Scheduler Estimator
-
-Users can divide the replicas of a workload across clusters according to the available resources of the member clusters. When some clusters lack resources, the scheduler calls karmada-scheduler-estimator and avoids assigning excessive replicas to those clusters.
-
-## Prerequisites
-
-### Karmada has been installed
-
-We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
-
-### Member cluster component is ready
-
-Ensure that all member clusters have been joined and that the corresponding karmada-scheduler-estimator for each of them is installed into karmada-host.
-
-You can check by using the following commands:
-
-```bash
-# check whether the member cluster has been joined
-$ kubectl get cluster
-NAME      VERSION   MODE   READY   AGE
-member1   v1.19.1   Push   True    11m
-member2   v1.19.1   Push   True    11m
-member3   v1.19.1   Pull   True    5m12s
-
-# check whether the karmada-scheduler-estimator of a member cluster is working well
-$ kubectl --context karmada-host get pod -n karmada-system | grep estimator
-karmada-scheduler-estimator-member1-696b54fd56-xt789   1/1     Running   0          77s
-karmada-scheduler-estimator-member2-774fb84c5d-md4wt   1/1     Running   0          75s
-karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9   1/1     Running   0          72s
-```
-
-- If the cluster has not been joined, you can use `hack/deploy-agent-and-estimator.sh` to deploy both karmada-agent and karmada-scheduler-estimator.
-- If the cluster has been joined already, you can use `hack/deploy-scheduler-estimator.sh` to deploy only karmada-scheduler-estimator.
-
-### Scheduler option '--enable-scheduler-estimator'
-
-After all member clusters have been joined and the estimators are all ready, specify the option `--enable-scheduler-estimator=true` to enable the scheduler estimator.
-
-```bash
-# edit the deployment of karmada-scheduler
-$ kubectl --context karmada-host edit -n karmada-system deployments.apps karmada-scheduler
-```
-
-Then add the option `--enable-scheduler-estimator=true` to the command of the container `karmada-scheduler`.
-
-## Example
-
-Now we can divide the replicas across the member clusters.
Note that `propagationPolicy.spec.replicaScheduling.replicaSchedulingType` must be `Divided` and `propagationPolicy.spec.replicaScheduling.replicaDivisionPreference` must be `Aggregated`. The scheduler will try to divide the replicas aggregately in terms of all available resources of member clusters. - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: aggregated-policy -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - - member3 - replicaScheduling: - replicaSchedulingType: Divided - replicaDivisionPreference: Aggregated -``` - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx - labels: - app: nginx -spec: - replicas: 5 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - image: nginx - name: nginx - ports: - - containerPort: 80 - name: web-1 - resources: - requests: - cpu: "1" - memory: 2Gi -``` - -You will find all replicas have been assigned to as few clusters as possible. - -``` -$ kubectl get deployments.apps -NAME READY UP-TO-DATE AVAILABLE AGE -nginx 5/5 5 5 2m16s -$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters -NAME CLUSTER -nginx-deployment [map[name:member1 replicas:5] map[name:member2] map[name:member3]] -``` - -After that, we change the resource request of the deployment to a large number and have a try again. - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx - labels: - app: nginx -spec: - replicas: 5 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - image: nginx - name: nginx - ports: - - containerPort: 80 - name: web-1 - resources: - requests: - cpu: "100" - memory: 200Gi -``` - -As any node of member clusters does not have so many cpu and memory resources, we will find workload scheduling failed. 
-
-```bash
-$ kubectl get deployments.apps
-NAME    READY   UP-TO-DATE   AVAILABLE   AGE
-nginx   0/5     0            0           2m20s
-$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters
-NAME               CLUSTER
-nginx-deployment
-```
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
deleted file mode 100644
index bc6545465..000000000
--- a/docs/troubleshooting.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Troubleshooting
-
-## I can't access some resources when installing Karmada
-
-- Pulling images from Google Container Registry (k8s.gcr.io)
-
-You may run the following commands to change the image registry in mainland China:
-```shell
-sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-etcd.yaml
-sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-apiserver.yaml
-sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/kube-controller-manager.yaml
-```
-- To download Go packages in mainland China, run the following command before installing:
-```shell
-export GOPROXY=https://goproxy.cn
\ No newline at end of file
diff --git a/docs/upgrading/README.md b/docs/upgrading/README.md
deleted file mode 100644
index 56ac98a74..000000000
--- a/docs/upgrading/README.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
-**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
-
-- [Upgrading Instruction](#upgrading-instruction)
-  - [Overview](#overview)
-  - [Regular Upgrading Process](#regular-upgrading-process)
-    - [Upgrading APIs](#upgrading-apis)
-      - [Manual Upgrade API](#manual-upgrade-api)
-    - [Upgrading Components](#upgrading-components)
-  - [Details Upgrading Instruction](#details-upgrading-instruction)
-    - [v0.8 to v0.9](#v08-to-v09)
-    - [v0.9 to v0.10](#v09-to-v010)
-    - [v0.10 to v1.0](#v010-to-v10)
-    - [v1.0 to v1.1](#v10-to-v11)
-    - [v1.1 to v1.2](#v11-to-v12)
-
-
-
-# Upgrading Instruction
-
-## Overview
-Karmada uses [semantic versioning](https://semver.org/); each version is in the format v`MAJOR`.`MINOR`.`PATCH`:
-- The `PATCH` release does not introduce breaking changes.
-- The `MINOR` release might introduce minor breaking changes with a workaround.
-- The `MAJOR` release might introduce backward-incompatible behavior changes.
-
-## Regular Upgrading Process
-### Upgrading APIs
-For releases that introduce API changes, the Karmada APIs (CRDs) that Karmada components rely on must be upgraded to stay consistent.
-
-The Karmada CRDs are composed of two parts:
-- bases: the CRD definitions generated from the API structs.
-- patches: conversion settings for the CRDs.
-
-In order to support multiple versions of custom resources, the `patches` should be injected into `bases`.
-To achieve this, we introduced a `kustomization.yaml` configuration and then use `kubectl kustomize` to build the final CRDs.
-
-The `bases`, `patches` and `kustomization.yaml` are now located in the `charts/karmada/_crds` directory of the repo.
-
-#### Manual Upgrade API
-
-**Step 1: Get the Webhook CA certificate**
-
-The CA certificate will be injected into `patches` before building the final CRD.
-We can retrieve it from the `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` configurations, e.g.:
-```bash
-kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io mutating-config
-```
-Copy the `ca_string` from the yaml path `webhooks.name[x].clientConfig.caBundle`, then replace the `{{caBundle}}` in
-the yaml files in `patches`.
e.g: -```bash -sed -i'' -e "s/{{caBundle}}/${ca_string}/g" ./"charts/karmada/_crds/patches/webhook_in_resourcebindings.yaml" -sed -i'' -e "s/{{caBundle}}/${ca_string}/g" ./"charts/karmada/_crds/patches/webhook_in_clusterresourcebindings.yaml" -``` - -**Step2: Build final CRD** - -Generate the final CRD by `kubectl kustomize` command, e.g: -```bash -kubectl kustomize ./charts/karmada/_crds -``` -Or, you can apply to `karmada-apiserver` by: -```bash -kubectl apply -k ./charts/karmada/_crds -``` - -### Upgrading Components -Components upgrading is composed of image version update and possible command args changes. - -> For the argument changes please refer to `Details Upgrading Instruction` below. - -## Details Upgrading Instruction - -The following instructions are for minor version upgrades. Cross-version upgrades are not recommended. -And it is recommended to use the latest patch version when upgrading, for example, if you are upgrading from -v1.1.x to v1.2.x and the available patch versions are v1.2.0, v1.2.1 and v1.2.2, then select v1.2.2. - -### [v0.8 to v0.9](./v0.8-v0.9.md) -### [v0.9 to v0.10](./v0.9-v0.10.md) -### [v0.10 to v1.0](./v0.10-v1.0.md) -### [v1.0 to v1.1](./v1.0-v1.1.md) -### [v1.1 to v1.2](./v1.1-v1.2.md) diff --git a/docs/upgrading/v0.10-v1.0.md b/docs/upgrading/v0.10-v1.0.md deleted file mode 100644 index 7de79c1a8..000000000 --- a/docs/upgrading/v0.10-v1.0.md +++ /dev/null @@ -1,224 +0,0 @@ -# v0.10 to v1.0 - -Follow the [Regular Upgrading Process](./README.md). - -## Upgrading Notable Changes - -### Introduced `karmada-aggregated-apiserver` component - -In the releases before v1.0.0, we are using CRD to extend the -[Cluster API](https://github.com/karmada-io/karmada/tree/24f586062e0cd7c9d8e6911e52ce399106f489aa/pkg/apis/cluster), -and starts v1.0.0 we use -[API Aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)(AA) to -extend it. - -Based on the above change, perform the following operations during the upgrade: - -#### Step 1: Stop `karmada-apiserver` - -You can stop `karmada-apiserver` by updating its replica to `0`. - -#### Step 2: Remove Cluster CRD from ETCD - -Remove the `Cluster CRD` from ETCD directly by running the following command. - -``` -etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \ ---key="/etc/kubernetes/pki/etcd/karmada.key" \ ---cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \ -del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io -``` - -> Note: This command only removed the `CRD` resource, all the `CR` (Cluster objects) not changed. -> That's the reason why we don't remove CRD by `karmada-apiserver`. - -#### Step 3: Prepare the certificate for the `karmada-aggregated-apiserver` - -To avoid [CA Reusage and Conflicts](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts), -create CA signer and sign a certificate to enable the aggregation layer. - -Update `karmada-cert-secret` secret in `karmada-system` namespace: - -```diff -apiVersion: v1 -kind: Secret -metadata: - name: karmada-cert-secret - namespace: karmada-system -type: Opaque -data: - ... 
-+ front-proxy-ca.crt: | -+ {{front_proxy_ca_crt}} -+ front-proxy-client.crt: | -+ {{front_proxy_client_crt}} -+ front-proxy-client.key: | -+ {{front_proxy_client_key}} -``` - -Then update `karmada-apiserver` deployment's container command: - -```diff -- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt -- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key -+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt -+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key -- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt -+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt -``` - -After the update, restore the replicas of `karmada-apiserver` instances. - -#### Step 4: Deploy `karmada-aggregated-apiserver`: - -Deploy `karmada-aggregated-apiserver` instance to your `host cluster` by following manifests: -
-unfold me to see the yaml - -```yaml ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: karmada-aggregated-apiserver - namespace: karmada-system - labels: - app: karmada-aggregated-apiserver - apiserver: "true" -spec: - selector: - matchLabels: - app: karmada-aggregated-apiserver - apiserver: "true" - replicas: 1 - template: - metadata: - labels: - app: karmada-aggregated-apiserver - apiserver: "true" - spec: - automountServiceAccountToken: false - containers: - - name: karmada-aggregated-apiserver - image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0 - imagePullPolicy: IfNotPresent - volumeMounts: - - name: k8s-certs - mountPath: /etc/kubernetes/pki - readOnly: true - - name: kubeconfig - subPath: kubeconfig - mountPath: /etc/kubeconfig - command: - - /bin/karmada-aggregated-apiserver - - --kubeconfig=/etc/kubeconfig - - --authentication-kubeconfig=/etc/kubeconfig - - --authorization-kubeconfig=/etc/kubeconfig - - --karmada-config=/etc/kubeconfig - - --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379 - - --etcd-cafile=/etc/kubernetes/pki/server-ca.crt - - --etcd-certfile=/etc/kubernetes/pki/karmada.crt - - --etcd-keyfile=/etc/kubernetes/pki/karmada.key - - --tls-cert-file=/etc/kubernetes/pki/karmada.crt - - --tls-private-key-file=/etc/kubernetes/pki/karmada.key - - --audit-log-path=- - - --feature-gates=APIPriorityAndFairness=false - - --audit-log-maxage=0 - - --audit-log-maxbackup=0 - resources: - requests: - cpu: 100m - volumes: - - name: k8s-certs - secret: - secretName: karmada-cert-secret - - name: kubeconfig - secret: - secretName: kubeconfig ---- -apiVersion: v1 -kind: Service -metadata: - name: karmada-aggregated-apiserver - namespace: karmada-system - labels: - app: karmada-aggregated-apiserver - apiserver: "true" -spec: - ports: - - port: 443 - protocol: TCP - targetPort: 443 - selector: - app: karmada-aggregated-apiserver -``` -
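-
-As a quick sketch (the filename and the `karmada-host` context name below are assumptions; adjust
-them to your environment), you can save the manifests above to a file, apply them to the host
-cluster, and verify that the new component is up:
-
-```bash
-# apply the Deployment and Service to the host cluster
-kubectl --context karmada-host apply -f karmada-aggregated-apiserver.yaml
-# check that the karmada-aggregated-apiserver Pod becomes Ready
-kubectl --context karmada-host -n karmada-system get pod -l app=karmada-aggregated-apiserver
-```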
- -Then, deploy `APIService` to `karmada-apiserver` by following manifests. - -
-unfold me to see the yaml - -```yaml -apiVersion: apiregistration.k8s.io/v1 -kind: APIService -metadata: - name: v1alpha1.cluster.karmada.io - labels: - app: karmada-aggregated-apiserver - apiserver: "true" -spec: - insecureSkipTLSVerify: true - group: cluster.karmada.io - groupPriorityMinimum: 2000 - service: - name: karmada-aggregated-apiserver - namespace: karmada-system - version: v1alpha1 - versionPriority: 10 ---- -apiVersion: v1 -kind: Service -metadata: - name: karmada-aggregated-apiserver - namespace: karmada-system -spec: - type: ExternalName - externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local -``` - -
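-
-For example, assuming the manifests above are saved as `apiservice.yaml` (an arbitrary filename)
-and your Karmada apiserver kubeconfig lives at `/etc/karmada/karmada-apiserver.config`, they can
-be applied and verified with:
-
-```bash
-# note: the APIService targets karmada-apiserver, not the host cluster
-kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f apiservice.yaml
-kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get apiservice v1alpha1.cluster.karmada.io
-```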
-
-#### Step 5: Check cluster status
-
-If everything goes well, you can see all your clusters just as before the upgrade.
-```bash
-kubectl get clusters
-```
-
-### `karmada-agent` requires an extra `impersonate` verb
-
-In order to proxy users' requests, the `karmada-agent` now requires an extra `impersonate` verb.
-Please check the `ClusterRole` configuration or apply the following manifest.
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: karmada-agent
-rules:
-  - apiGroups: ['*']
-    resources: ['*']
-    verbs: ['*']
-  - nonResourceURLs: ['*']
-    verbs: ["get"]
-```
-
-### MCS feature now supports `Kubernetes v1.21+`
-
-Since the `discovery.k8s.io/v1beta1` version of `EndpointSlices` has been deprecated in favor of `discovery.k8s.io/v1` in
-[Kubernetes v1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md), Karmada adopted
-this change at release v1.0.0.
-Now the [MCS](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md) feature requires
-member cluster versions no less than v1.21.
diff --git a/docs/upgrading/v0.8-v0.9.md b/docs/upgrading/v0.8-v0.9.md
deleted file mode 100644
index efda6aeee..000000000
--- a/docs/upgrading/v0.8-v0.9.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# v0.8 to v0.9
-
-Nothing special other than the [Regular Upgrading Process](./README.md).
-
-## Upgrading Notable Changes
-Please refer to the [v0.9.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.9.0) for more details.
diff --git a/docs/upgrading/v0.9-v0.10.md b/docs/upgrading/v0.9-v0.10.md
deleted file mode 100644
index 5200a6dd2..000000000
--- a/docs/upgrading/v0.9-v0.10.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# v0.9 to v0.10
-
-Follow the [Regular Upgrading Process](./README.md).
-
-## Upgrading Notable Changes
-
-### karmada-scheduler
-
-The `--failover` flag has been removed and replaced by `--feature-gates`.
-If you enabled the failover feature with `--failover`, it should now be changed to `--feature-gates=Failover=true`.
-
-Please refer to the [v0.10.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.10.0) for more details.
diff --git a/docs/upgrading/v1.0-v1.1.md b/docs/upgrading/v1.0-v1.1.md
deleted file mode 100644
index aaf2e26d4..000000000
--- a/docs/upgrading/v1.0-v1.1.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# v1.0 to v1.1
-
-Follow the [Regular Upgrading Process](./README.md).
-
-## Upgrading Notable Changes
-
-The validation process for `Cluster` objects has now been moved from `karmada-webhook` to `karmada-aggregated-apiserver`
-by [PR 1152](https://github.com/karmada-io/karmada/pull/1152). You have to remove the `Cluster` webhook configuration
-from `ValidatingWebhookConfiguration`, such as:
-```diff
-diff --git a/artifacts/deploy/webhook-configuration.yaml b/artifacts/deploy/webhook-configuration.yaml
-index 0a89ad36..f7a9f512 100644
---- a/artifacts/deploy/webhook-configuration.yaml
-+++ b/artifacts/deploy/webhook-configuration.yaml
-@@ -69,20 +69,6 @@ metadata:
-   labels:
-     app: validating-config
- webhooks:
--  - name: cluster.karmada.io
--    rules:
--      - operations: ["CREATE", "UPDATE"]
--        apiGroups: ["cluster.karmada.io"]
--        apiVersions: ["*"]
--        resources: ["clusters"]
--        scope: "Cluster"
--    clientConfig:
--      url: https://karmada-webhook.karmada-system.svc:443/validate-cluster
--      caBundle: {{caBundle}}
--    failurePolicy: Fail
--    sideEffects: None
--    admissionReviewVersions: ["v1"]
--    timeoutSeconds: 3
-   - name: propagationpolicy.karmada.io
-     rules:
-       - operations: ["CREATE", "UPDATE"]
-```
-
-Otherwise, when joining clusters (or updating Cluster objects), the request will be rejected with the following error:
-```
-Error: failed to create cluster(host) object. error: Internal error occurred: failed calling webhook "cluster.karmada.io": the server could not find the requested resource
-```
-
-Please refer to the [v1.1.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.1.0) for more details.
diff --git a/docs/upgrading/v1.1-v1.2.md b/docs/upgrading/v1.1-v1.2.md
deleted file mode 100644
index f04354613..000000000
--- a/docs/upgrading/v1.1-v1.2.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# v1.1 to v1.2
-
-Follow the [Regular Upgrading Process](./README.md).
-
-## Upgrading Notable Changes
-
-### karmada-controller-manager
-
-The `hpa` controller is now disabled by default. If you are using this controller, please enable it as per [Configure Karmada controllers](https://github.com/karmada-io/karmada/blob/master/docs/userguide/configure-controllers.md#configure-karmada-controllers).
-
-### karmada-aggregated-apiserver
-
-The flags `--karmada-config` and `--master`, deprecated in v1.1, have been removed from the codebase.
-Please remember to remove them from the `karmada-aggregated-apiserver` deployment yaml.
-
-### karmadactl
-
-We enabled the `karmadactl promote` command to support AA. For detailed info, please refer to [PR 1795](https://github.com/karmada-io/karmada/pull/1795).
-
-In order to use AA by default, you need to deploy some RBAC objects with the following manifests.
-unfold me to see the yaml - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-proxy-admin -rules: -- apiGroups: - - 'cluster.karmada.io' - resources: - - clusters/proxy - verbs: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: cluster-proxy-admin -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-proxy-admin -subjects: - - kind: User - name: "system:admin" -``` - -
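-
-For instance, if the manifests above are saved as `cluster-proxy-admin-rbac.yaml` (an arbitrary
-filename for this sketch) and your Karmada apiserver kubeconfig is at
-`/etc/karmada/karmada-apiserver.config`, apply them to the Karmada control plane with:
-
-```bash
-kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f cluster-proxy-admin-rbac.yaml
-```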
- -Please refer to [v1.2.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.2.0) for more details. diff --git a/docs/userguide/aggregated-api-endpoint.md b/docs/userguide/aggregated-api-endpoint.md deleted file mode 100644 index 4579dc5ad..000000000 --- a/docs/userguide/aggregated-api-endpoint.md +++ /dev/null @@ -1,256 +0,0 @@ -# Aggregated Kubernetes API Endpoint - -The newly introduced [karmada-aggregated-apiserver](https://github.com/karmada-io/karmada/blob/master/cmd/aggregated-apiserver/main.go) component aggregates all registered clusters and allows users to access member clusters through Karmada by the proxy endpoint. - -For detailed discussion topic, see [here](https://github.com/karmada-io/karmada/discussions/1077). - -Here's a quick start. - -## Quick start - -To quickly experience this feature, we experimented with karmada-apiserver certificate. - -### Step1: Obtain the karmada-apiserver Certificate - -For Karmada deployed using `hack/local-up-karmada.sh`, you can directly copy it from the `$HOME/.kube/` directory. - -```shell -cp $HOME/.kube/karmada.config karmada-apiserver.config -``` - -### Step2: Grant permission to user `system:admin` - -`system:admin` is the user for karmada-apiserver certificate. We need to grant the `clusters/proxy` permission to it explicitly. - -Apply the following yaml file: - -cluster-proxy-rbac.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-proxy-clusterrole -rules: -- apiGroups: - - 'cluster.karmada.io' - resources: - - clusters/proxy - resourceNames: - - member1 - - member2 - - member3 - verbs: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: cluster-proxy-clusterrolebinding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-proxy-clusterrole -subjects: - - kind: User - name: "system:admin" -``` - -
- -```shell -kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml -``` - -### Step3: Access member clusters - -Run the below command (replace `{clustername}` with your actual cluster name): - -```shell -kubectl --kubeconfig karmada-apiserver.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy/api/v1/nodes -``` - -Or append `/apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy` to the server address of karmada-apiserver.config, and then you can directly use: - -```shell -kubectl --kubeconfig karmada-apiserver.config get node -``` - -> Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-karmada access, we can [deploy apiserver-network-proxy (ANP)](../working-with-anp.md) to access it. - -## Unified authentication - -For one or a group of user subjects (users, groups, or service accounts) in a member cluster, we can import them into Karmada control plane and grant them the `clusters/proxy` permission, so that we can access the member cluster with permission of the user subject through Karmada. - -In this section, we use a serviceaccount named `tom` for the test. - -### Step1: Create ServiceAccount in member1 cluster (optional) - -If the serviceaccount has been created in your environment, you can skip this step. - -Create a serviceaccount that does not have any permission: - -```shell -kubectl --kubeconfig $HOME/.kube/members.config --context member1 create serviceaccount tom -``` - -### Step2: Create ServiceAccount in Karmada control plane - -```shell -kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver create serviceaccount tom -``` - -In order to grant serviceaccount the `clusters/proxy` permission, apply the following rbac yaml file: - -cluster-proxy-rbac.yaml: - -
- -unfold me to see the yaml - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: cluster-proxy-clusterrole -rules: -- apiGroups: - - 'cluster.karmada.io' - resources: - - clusters/proxy - resourceNames: - - member1 - verbs: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: cluster-proxy-clusterrolebinding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-proxy-clusterrole -subjects: - - kind: ServiceAccount - name: tom - namespace: default - # The token generated by the serviceaccount can parse the group information. Therefore, you need to specify the group information below. - - kind: Group - name: "system:serviceaccounts" - - kind: Group - name: "system:serviceaccounts:default" -``` - -
- -```shell -kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml -``` - -### Step3: Access member1 cluster - -Obtain token of serviceaccount `tom`: - -```shell -kubectl get secret `kubectl get sa tom -oyaml | grep token | awk '{print $3}'` -oyaml | grep token: | awk '{print $2}' | base64 -d -``` - -Then construct a kubeconfig file `tom.config` for `tom` serviceaccount: - -```yaml -apiVersion: v1 -clusters: -- cluster: - insecure-skip-tls-verify: true - server: {karmada-apiserver-address} # Replace {karmada-apiserver-address} with karmada-apiserver-address. You can find it in $HOME/.kube/karmada.config file. - name: tom -contexts: -- context: - cluster: tom - user: tom - name: tom -current-context: tom -kind: Config -users: -- name: tom - user: - token: {token} # Replace {token} with the token obtain above. -``` - -Run the command below to access member1 cluster: - -```shell -kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/apis -``` - -We can find that we were able to access, but run the command below: - -```shell -kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes -``` - -It will fail because serviceaccount `tom` does not have any permissions in the member1 cluster. - -### Step4: Grant permission to Serviceaccount in member1 cluster - -Apply the following YAML file: - -member1-rbac.yaml - -
- -unfold me to see the yaml - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: tom -rules: -- apiGroups: - - '*' - resources: - - '*' - verbs: - - '*' ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: tom -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: tom -subjects: - - kind: ServiceAccount - name: tom - namespace: default -``` - -
-
-```shell
-kubectl --kubeconfig $HOME/.kube/members.config --context member1 apply -f member1-rbac.yaml
-```
-
-Run the command that failed in the previous step again:
-
-```shell
-kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
-```
-
-The access will be successful.
-
-Or we can append `/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy` to the server address of tom.config, and then directly use:
-
-```shell
-kubectl --kubeconfig tom.config get node
-```
-
-> Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-karmada access, we can [deploy apiserver-network-proxy (ANP)](../working-with-anp.md) to access it.
\ No newline at end of file
diff --git a/docs/userguide/cluster-registration.md b/docs/userguide/cluster-registration.md
deleted file mode 100644
index 45ad16896..000000000
--- a/docs/userguide/cluster-registration.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
-
-- [Cluster Registration](#cluster-registration)
-  - [Overview of cluster mode](#overview-of-cluster-mode)
-    - [Push mode](#push-mode)
-    - [Pull mode](#pull-mode)
-  - [Register cluster with 'Push' mode](#register-cluster-with-push-mode)
-    - [Register cluster by CLI](#register-cluster-by-cli)
-    - [Check cluster status](#check-cluster-status)
-    - [Unregister cluster by CLI](#unregister-cluster-by-cli)
-  - [Register cluster with 'Pull' mode](#register-cluster-with-pull-mode)
-    - [Register cluster](#register-cluster)
-    - [Check cluster status](#check-cluster-status-1)
-    - [Unregister cluster](#unregister-cluster)
-
-
-
-# Cluster Registration
-
-## Overview of cluster mode
-
-Karmada supports both `Push` and `Pull` modes to manage member clusters.
-The main difference between the `Push` and `Pull` modes is the way member clusters are accessed when deploying manifests.
-
-### Push mode
-The Karmada control plane accesses the member cluster's `kube-apiserver` directly to get cluster status and deploy manifests.
-
-### Pull mode
-The Karmada control plane does not access the member cluster directly but delegates this to an extra component named `karmada-agent`.
-
-Each `karmada-agent` serves one cluster and takes responsibility for:
-- Registering the cluster to Karmada (creating the `Cluster` object)
-- Maintaining cluster status and reporting it to Karmada (updating the status of the `Cluster` object)
-- Watching manifests in the Karmada execution space (namespace `karmada-es-<cluster name>`) and deploying them to the cluster it serves
-
-## Register cluster with 'Push' mode
-
-You can use the [kubectl-karmada](./../installation/install-kubectl-karmada.md) CLI to `join` (register) and `unjoin` (unregister) clusters.
-
-### Register cluster by CLI
-
-Join the cluster named `member1` to Karmada by using the following command.
-```
-kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
-```
-Repeat this step to join any additional clusters.
-
-The `--kubeconfig` flag specifies Karmada's `kubeconfig` file, and the CLI infers the `karmada-apiserver` context
-from the `current-context` field of the `kubeconfig`. If more than one context is configured in
-the `kubeconfig` file, it is recommended to specify the context with the `--karmada-context` flag. For example:
-```
-kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada --cluster-kubeconfig=<member1 kubeconfig>
-```
-
-The `--cluster-kubeconfig` flag specifies the member cluster's `kubeconfig`, and the CLI infers the member cluster's context
-from the cluster name.
-If more than one context is configured in the `kubeconfig` file, or you don't want to use
-the context name to register, it is recommended to specify the context with the `--cluster-context` flag. For example:
-
-```
-kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada \
---cluster-kubeconfig=<member1 kubeconfig> --cluster-context=member1
-```
-> Note: The name of the registering cluster can be different from the context specified by `--cluster-context`.
-
-### Check cluster status
-
-Check the status of the joined clusters by using the following command.
-```
-kubectl get clusters
-
-NAME      VERSION   MODE   READY   AGE
-member1   v1.20.7   Push   True    66s
-```
-### Unregister cluster by CLI
-
-You can unjoin clusters by using the following command.
-```
-kubectl karmada unjoin member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
-```
-During the unjoin process, the resources propagated to `member1` by Karmada will be cleaned up,
-and the `--cluster-kubeconfig` is used to clean up the secret created at the `join` phase.
-
-Repeat this step to unjoin any additional clusters.
-
-## Register cluster with 'Pull' mode
-
-### Register cluster
-
-After `karmada-agent` is deployed, it will register the cluster automatically at the start-up phase.
-
-### Check cluster status
-
-Check the status of the registered clusters by using the same command above.
-```
-kubectl get clusters
-NAME      VERSION   MODE   READY   AGE
-member3   v1.20.7   Pull   True    66s
-```
-### Unregister cluster
-
-Undeploy the `karmada-agent` and then manually remove the `cluster` from Karmada.
-```
-kubectl delete cluster member3
-```
diff --git a/docs/userguide/configure-controllers.md b/docs/userguide/configure-controllers.md
deleted file mode 100644
index 7bcec05ef..000000000
--- a/docs/userguide/configure-controllers.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
-
-**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
-
-- [Configure Controllers](#configure-controllers)
-  - [Karmada Controllers](#karmada-controllers)
-    - [Configure Karmada Controllers](#configure-karmada-controllers)
-  - [Kubernetes Controllers](#kubernetes-controllers)
-    - [Required Controllers](#required-controllers)
-      - [namespace](#namespace)
-      - [garbagecollector](#garbagecollector)
-      - [serviceaccount-token](#serviceaccount-token)
-    - [Optional Controllers](#optional-controllers)
-      - [ttl-after-finished](#ttl-after-finished)
-
-
-
-# Configure Controllers
-
-Karmada maintains a bunch of controllers, which are control loops that watch the state of your system, then make or
-request changes where needed. Each controller tries to move the current state closer to the desired state.
-See [Kubernetes Controller Concepts][1] for more details.
-
-## Karmada Controllers
-
-The controllers are embedded into the `karmada-controller-manager` or `karmada-agent` components and are launched
-along with the components' startup. Some controllers may be shared by `karmada-controller-manager` and `karmada-agent`.
| Controller    | In karmada-controller-manager | In karmada-agent |
|---------------|-------------------------------|------------------|
| cluster       | Y                             | N                |
| clusterStatus | Y                             | Y                |
| binding       | Y                             | N                |
| execution     | Y                             | Y                |
| workStatus    | Y                             | Y                |
| namespace     | Y                             | N                |
| serviceExport | Y                             | Y                |
| endpointSlice | Y                             | N                |
| serviceImport | Y                             | N                |
| unifiedAuth   | Y                             | N                |
| hpa           | Y                             | N                |

### Configure Karmada Controllers

You can use the `--controllers` flag to specify the enabled controller list for `karmada-controller-manager` and
`karmada-agent`, or to disable some controllers from the default list.

E.g. specify a controller list:
```bash
--controllers=cluster,clusterStatus,binding,xxx
```

E.g. disable some controllers (remember to keep `*` if you want to keep the remaining controllers in the default list):
```bash
--controllers=-hpa,-unifiedAuth,*
```
Use `-foo` to disable the controller named `foo`.

> Note: The default controller list might change in future releases. Controllers enabled in the last release
> might be disabled or deprecated, and new controllers might be introduced too. Users who rely on this flag should
> check the release notes before a system upgrade.

## Kubernetes Controllers

In addition to the controllers maintained by the Karmada community, Karmada also requires some controllers from
Kubernetes. These controllers run as part of `kube-controller-manager` and are maintained by the Kubernetes community.

Users are recommended to deploy `kube-controller-manager` along with the Karmada components. The installation
methods listed in the [installation guide][2] deploy it together with the Karmada components.

### Required Controllers

Not all controllers in `kube-controller-manager` are necessary for Karmada. If you are deploying
Karmada using other tools, you might have to configure the controllers with the `--controllers` flag, just like what we did in the
[example of kube-controller-manager deployment][3].

The following controllers are tested and recommended by Karmada.

#### namespace

The `namespace` controller runs as part of `kube-controller-manager`. It watches `Namespace` deletion and deletes
all resources in the given namespace.

For the Karmada control plane, we inherit this behavior to keep a consistent user experience. More than that, we also
rely on this feature in the implementation of Karmada controllers. For example, when un-registering a cluster,
Karmada deletes the `execution namespace` (named `karmada-es-<cluster-name>`) that stores all the resources
propagated to that cluster, to ensure all the resources are cleaned up from both the Karmada control plane and the
given cluster.

For more details about the `namespace` controller, please refer to the
[namespace controller sync logic](https://github.com/kubernetes/kubernetes/blob/v1.23.4/pkg/controller/namespace/deletion/namespaced_resources_deleter.go#L82-L94).

#### garbagecollector

The `garbagecollector` controller runs as part of `kube-controller-manager`. It is used to clean up garbage resources.
It manages [owner references](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) and
deletes a resource once all of its owners are absent.

For the Karmada control plane, we also use `owner reference` to link objects to each other. For example, each
`ResourceBinding` has an owner reference that links to the `resource template`.
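As an illustrative sketch only (the names and uid below are made up, and the exact object layout may differ across Karmada versions), the link is an ordinary Kubernetes owner reference on the `ResourceBinding`:

```yaml
# Hypothetical ResourceBinding derived from a Deployment "resource template".
apiVersion: work.karmada.io/v1alpha2
kind: ResourceBinding
metadata:
  name: nginx-deployment
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment        # the resource template
      name: nginx
      uid: 0a1b2c3d-...       # uid of the resource template (illustrative)
```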
Once the `resource template` is removed,
the `ResourceBinding` is removed automatically by the `garbagecollector` controller.

For more details about garbage collection mechanisms, please refer to
[Garbage Collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/).

#### serviceaccount-token

The `serviceaccount-token` controller runs as part of `kube-controller-manager`.
It watches `ServiceAccount` creation and creates a corresponding ServiceAccount token Secret to allow API access.

For the Karmada control plane, after a `ServiceAccount` object is created by the administrator, we also need the
`serviceaccount-token` controller to generate the ServiceAccount token `Secret`, so that administrators don't have to
prepare the token manually.

For more details, please refer to:
- [service account token controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller)
- [service account tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens)

### Optional Controllers

#### ttl-after-finished

The `ttl-after-finished` controller runs as part of `kube-controller-manager`.
It watches `Job` updates and limits the lifetime of finished `Jobs`.
The TTL timer starts when the Job finishes, and the finished Job is cleaned up after the TTL expires.

For the Karmada control plane, we also provide the capability to clean up finished `Jobs` automatically by
specifying the `.spec.ttlSecondsAfterFinished` field of a Job, which relieves the control plane of this housekeeping.

For more details, please refer to:
- [ttl after finished controller](https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#ttl-after-finished-controller)
- [clean up finished jobs automatically](https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)

[1]: https://kubernetes.io/docs/concepts/architecture/controller/
[2]: https://github.com/karmada-io/karmada/blob/master/docs/installation/installation.md
[3]: https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/kube-controller-manager.yaml
diff --git a/docs/userguide/customizing-resource-interpreter.md b/docs/userguide/customizing-resource-interpreter.md
deleted file mode 100644
index c5acabd15..000000000
--- a/docs/userguide/customizing-resource-interpreter.md
+++ /dev/null
@@ -1,180 +0,0 @@

**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Customizing Resource Interpreter](#customizing-resource-interpreter)
  - [Resource Interpreter Framework](#resource-interpreter-framework)
    - [Interpreter Operations](#interpreter-operations)
  - [Built-in Interpreter](#built-in-interpreter)
    - [InterpretReplica](#interpretreplica)
    - [ReviseReplica](#revisereplica)
    - [Retain](#retain)
    - [AggregateStatus](#aggregatestatus)
    - [InterpretStatus](#interpretstatus)
    - [InterpretDependency](#interpretdependency)
  - [Customized Interpreter](#customized-interpreter)
    - [What are interpreter webhooks?](#what-are-interpreter-webhooks)
    - [Write an interpreter webhook server](#write-an-interpreter-webhook-server)
    - [Deploy the admission webhook service](#deploy-the-admission-webhook-service)
    - [Configure webhook on the fly](#configure-webhook-on-the-fly)

# Customizing Resource Interpreter

## Resource Interpreter Framework
In the process of propagating a resource from `karmada-apiserver` to member clusters, Karmada needs to know the
resource definition. Take propagating a `Deployment` as an example: at the phase of building the `ResourceBinding`,
`karmada-controller-manager` parses the `replicas` field from the Deployment object.

For Kubernetes native resources, Karmada knows how to parse them. But for custom resources defined by `CRD` (or extended
by something like an `aggregated-apiserver`), Karmada lacks knowledge of the resource structure and can only treat them
as opaque resources. Therefore, the advanced scheduling algorithms cannot be used for them.

The [Resource Interpreter Framework][1] is designed for interpreting resource structure. It consists of `built-in` and
`customized` interpreters:
- built-in interpreter: used for common Kubernetes native or well-known extended resources.
- customized interpreter: interprets custom resources or overrides the built-in interpreters.

> Note: The major difference between `built-in` and `customized` interpreters is that the `built-in` interpreter is
> implemented and maintained by the Karmada community and is built into Karmada components, such as
> `karmada-controller-manager`. In contrast, the `customized` interpreter is implemented and maintained by users.
> It should be registered to Karmada as an `Interpreter Webhook` (see below for more details).

### Interpreter Operations

When interpreting a resource, multiple pieces of information are often extracted. The `Interpreter Operations`
define the interpreter request types, and the `Resource Interpreter Framework` provides a service for each operation
type.

For all operations designed by the `Resource Interpreter Framework`, please refer to [Interpreter Operations][2].

> Note: Not all the designed operations are supported yet (see below for supported operations).

> Note: At most one interpreter is consulted when interpreting a resource with a specific `interpreter operation`,
> and the `customized` interpreter has higher priority than the `built-in` interpreter if they both interpret the same
> resource.
> For example, the `built-in` interpreter serves `InterpretReplica` for `Deployment` with version `apps/v1`. If there
> is a customized interpreter registered to Karmada for interpreting the same resource, the `customized` interpreter wins and the
> `built-in` interpreter is ignored.

## Built-in Interpreter

For the common Kubernetes native or well-known extended resources, the interpreter operations are built-in, which means
users usually don't need to implement customized interpreters. If you want more resources to be built-in,
please feel free to [file an issue][3] to let us know your use case.
The built-in interpreter now supports the following interpreter operations:

### InterpretReplica

Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)

### ReviseReplica

Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)

### Retain

Supported resources:
- Pod(v1)
- Service(v1)
- ServiceAccount(v1)
- PersistentVolumeClaim(v1)
- Job(batch/v1)

### AggregateStatus

Supported resources:
- Deployment(apps/v1)
- Service(v1)
- Ingress(extensions/v1beta1)
- Job(batch/v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)
- Pod(v1)
- PersistentVolumeClaim(v1)

### InterpretStatus

Supported resources:
- Deployment(apps/v1)
- Service(v1)
- Ingress(extensions/v1beta1)
- Job(batch/v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)

### InterpretDependency

Supported resources:
- Deployment(apps/v1)
- Job(batch/v1)
- CronJob(batch/v1)
- Pod(v1)
- DaemonSet(apps/v1)
- StatefulSet(apps/v1)

## Customized Interpreter

The customized interpreter is implemented and maintained by users. It is developed as an extension and
runs as a webhook at runtime.

### What are interpreter webhooks?

Interpreter webhooks are HTTP callbacks that receive interpret requests and do something with them.

### Write an interpreter webhook server

Please refer to the implementation of the [Example of Customize Interpreter][4], which is validated
in the Karmada E2E test. The webhook handles the `ResourceInterpreterRequest` sent by the
Karmada components (such as `karmada-controller-manager`) and sends back its decision as a
`ResourceInterpreterResponse`.

### Deploy the admission webhook service

The [Example of Customize Interpreter][4] is deployed in the host cluster for E2E and exposed by
a service as the front-end of the webhook server.

You may also deploy your webhooks outside the cluster. You will need to update your webhook
configurations accordingly.

### Configure webhook on the fly

You can configure which resources and supported operations are subject to which interpreter webhook
via [ResourceInterpreterWebhookConfiguration][5].

The following is an example `ResourceInterpreterWebhookConfiguration`:
```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
  name: examples
webhooks:
  - name: workloads.example.com
    rules:
      - operations: [ "InterpretReplica","ReviseReplica","Retain","AggregateStatus" ]
        apiGroups: [ "workload.example.io" ]
        apiVersions: [ "v1alpha1" ]
        kinds: [ "Workload" ]
    clientConfig:
      url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
      caBundle: {{caBundle}}
    interpreterContextVersions: [ "v1alpha1" ]
    timeoutSeconds: 3
```

You can configure more than one webhook in a `ResourceInterpreterWebhookConfiguration`; each webhook
serves at least one operation.
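A minimal sketch for filling in `{{caBundle}}` and applying the configuration, assuming the CA certificate that signed the webhook's serving certificate sits at `ca.crt` and your Karmada control-plane context is named `karmada-apiserver`:

```shell
# Base64-encode the CA bundle, substitute it into the template, and apply it
# to the Karmada control plane ('|' is used as the sed delimiter because
# base64 output may contain '/').
CA_BUNDLE=$(base64 < ca.crt | tr -d '\n')
sed "s|{{caBundle}}|${CA_BUNDLE}|g" webhook-configuration.yaml \
  | kubectl --context karmada-apiserver apply -f -
```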
[1]: https://github.com/karmada-io/karmada/tree/master/docs/proposals/resource-interpreter-webhook
[2]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L71-L108
[3]: https://github.com/karmada-io/karmada/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md
[4]: https://github.com/karmada-io/karmada/tree/master/examples/customresourceinterpreter
[5]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L16
[6]: https://github.com/karmada-io/karmada/blob/master/examples/customresourceinterpreter/webhook-configuration.yaml
\ No newline at end of file
diff --git a/docs/userguide/failover.md b/docs/userguide/failover.md
deleted file mode 100644
index cd3788b75..000000000
--- a/docs/userguide/failover.md
+++ /dev/null
@@ -1,207 +0,0 @@
# Failover Overview

## Monitor the cluster health status

Karmada supports both `Push` and `Pull` modes to manage member clusters.

For more details about cluster registration, please refer to [Cluster Registration](./cluster-registration.md#cluster-registration).

### Determining failures

There are two forms of heartbeats for clusters:
- updates to the `.status` of a Cluster.
- `Lease` objects within the `karmada-cluster` namespace in the karmada control plane. Each cluster has an associated `Lease` object.

#### Cluster status collection

For `Push` mode clusters, the cluster status controller in the karmada control plane continually collects the cluster's status at a configured interval.

For `Pull` mode clusters, the `karmada-agent` is responsible for creating and updating the `.status` of clusters at a configured interval.

The interval for `.status` updates to a `Cluster` can be configured via the `--cluster-status-update-frequency` flag (default is 10 seconds).

A cluster may be set to the `NotReady` state under the following conditions:
- the cluster is unreachable (retried 4 times within 2 seconds).
- the cluster's health endpoint does not respond with ok.
- collecting the cluster status fails, including the Kubernetes version, installed APIs, resource usage, etc.

#### Lease updates
Karmada creates a `Lease` object and a lease controller for each cluster when the cluster is joined.

Each lease controller is responsible for updating the related Lease. The lease renewal time can be configured via the `--cluster-lease-duration` and `--cluster-lease-renew-interval-fraction` flags (default is 10 seconds).

The lease update process is independent of the cluster status update process, since the cluster's `.status` field is maintained by the cluster status controller.

The cluster controller in the Karmada control plane checks the state of each cluster every `--cluster-monitor-period` (default is 5 seconds).

The cluster's `Ready` condition is changed to `Unknown` when the cluster controller has not heard from the cluster in the last `--cluster-monitor-grace-period` (default is 40 seconds).

### Check cluster status
You can use `kubectl` to check a Cluster's status and other details:
```
kubectl describe cluster <cluster-name>
```

The `Ready` condition in the `Status` field indicates that the cluster is healthy and ready to accept workloads.
It is set to `False` if the cluster is not healthy and is not accepting workloads, and `Unknown` if the cluster controller has not heard from the cluster in the last `cluster-monitor-grace-period`.
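To extract just the `Ready` condition instead of the full description, a standard `jsonpath` query works as well (`member1` is an example cluster name):

```shell
kubectl get cluster member1 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```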
- -The following example describes an unhealthy cluster: -``` -kubectl describe cluster member1 - -Name: member1 -Namespace: -Labels: -Annotations: -API Version: cluster.karmada.io/v1alpha1 -Kind: Cluster -Metadata: - Creation Timestamp: 2021-12-29T08:49:35Z - Finalizers: - karmada.io/cluster-controller - Resource Version: 152047 - UID: 53c133ab-264e-4e8e-ab63-a21611f7fae8 -Spec: - API Endpoint: https://172.23.0.7:6443 - Impersonator Secret Ref: - Name: member1-impersonator - Namespace: karmada-cluster - Secret Ref: - Name: member1 - Namespace: karmada-cluster - Sync Mode: Push -Status: - Conditions: - Last Transition Time: 2021-12-31T03:36:08Z - Message: cluster is not reachable - Reason: ClusterNotReachable - Status: False - Type: Ready -Events: -``` - -## Failover feature of Karmada -The failover feature is controlled by the `Failover` feature gate, users need to enable the `Failover` feature gate of karmada scheduler: -``` ---feature-gates=Failover=true -``` - -### Concept - -When it is determined that member clusters becoming unhealthy, the karmada scheduler will reschedule the reference application. -There are several constraints: -- For each rescheduled application, it still needs to meet the restrictions of PropagationPolicy, such as ClusterAffinity or SpreadConstraints. -- The application distributed on the ready clusters after the initial scheduling will remain when failover schedule. - -#### Duplicated schedule type -For `Duplicated` schedule policy, when the number of candidate clusters that meet the PropagationPolicy restriction is not less than the number of failed clusters, -it will be rescheduled to candidate clusters according to the number of failed clusters. Otherwise, no rescheduling. - -Take `Deployment` as example: -``` -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx - labels: - app: nginx -spec: - replicas: 2 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - image: nginx - name: nginx ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - - member3 - - member5 - spreadConstraints: - - maxGroups: 2 - minGroups: 2 - replicaScheduling: - replicaSchedulingType: Duplicated -``` - -Suppose there are 5 member clusters, and the initial scheduling result is in member1 and member2. When member2 fails, it triggers rescheduling. - -It should be noted that rescheduling will not delete the application on the ready cluster member1. In the remaining 3 clusters, only member3 and member5 match the `clusterAffinity` policy. - -Due to the limitations of spreadConstraints, the final result can be [member1, member3] or [member1, member5]. - -#### Divided schedule type -For `Divided` schedule policy, karmada scheduler will try to migrate replicas to the other health clusters. 
- -Take `Deployment` as example: -``` -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx - labels: - app: nginx -spec: - replicas: 3 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - image: nginx - name: nginx ---- -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - replicaScheduling: - replicaDivisionPreference: Weighted - replicaSchedulingType: Divided - weightPreference: - staticWeightList: - - targetCluster: - clusterNames: - - member1 - weight: 1 - - targetCluster: - clusterNames: - - member2 - weight: 2 -``` - -Karmada scheduler will divide the replicas according the `weightPreference`. The initial schedule result is member1 with 1 replica and member2 with 2 replicas. - -When member1 fails, it triggers rescheduling. Karmada scheduler will try to migrate replicas to the other health clusters. The final result will be member2 with 3 replicas. diff --git a/docs/userguide/override-policy.md b/docs/userguide/override-policy.md deleted file mode 100644 index afcc4db0b..000000000 --- a/docs/userguide/override-policy.md +++ /dev/null @@ -1,400 +0,0 @@ -# Override Policy - -The [OverridePolicy][1] and [ClusterOverridePolicy][2] are used to declare override rules for resources when -they are propagating to different clusters. - -## Difference between OverridePolicy and ClusterOverridePolicy -ClusterOverridePolicy represents the cluster-wide policy that overrides a group of resources to one or more clusters while OverridePolicy will apply to resources in the same namespace as the namespace-wide policy. For cluster scoped resources, apply ClusterOverridePolicy by policies name in ascending. For namespaced scoped resources, first apply ClusterOverridePolicy, then apply OverridePolicy. - -## Resource Selector - -ResourceSelectors restricts resource types that this override policy applies to. If you ignore this field it means matching all resources. - -Resource Selector required `apiVersion` field which represents the API version of the target resources and `kind` which represents the Kind of the target resources. -The allowed selectors are as follows: -- `namespace`: namespace of the target resource. -- `name`: name of the target resource -- `labelSelector`: A label query over a set of resources. - -#### Examples -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - namespace: test - labelSelector: - matchLabels: - app: nginx - overrideRules: - ... -``` -It means override rules above will only be applied to `Deployment` which is named nginx in test namespace and has labels with `app: nginx`. - -## Target Cluster - -Target Cluster defines restrictions on the override policy that only applies to resources propagated to the matching clusters. If you ignore this field it means matching all clusters. - -The allowed selectors are as follows: -- `labelSelector`: a filter to select member clusters by labels. -- `fieldSelector`: a filter to select member clusters by fields. Currently only three fields of provider(cluster.spec.provider), zone(cluster.spec.zone), and region(cluster.spec.region) are supported. -- `clusterNames`: the list of clusters to be selected. 
- `exclude`: the list of clusters to be ignored.

### labelSelector

#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
  overrideRules:
    - targetCluster:
        labelSelector:
          matchLabels:
            cluster: member1
      overriders:
        ...
```
This means the override rules above will only be applied to resources propagated to clusters that have the `cluster: member1` label.

### fieldSelector

#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
  overrideRules:
    - targetCluster:
        fieldSelector:
          matchExpressions:
            - key: region
              operator: In
              values:
                - cn-north-1
      overriders:
        ...
```
This means the override rules above will only be applied to resources propagated to clusters whose `spec.region` field has a value in [cn-north-1].

### clusterNames

#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        ...
```
This means the override rules above will only be applied to resources propagated to the cluster named member1.

### exclude

#### Examples
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
  overrideRules:
    - targetCluster:
        exclude:
          - member1
      overriders:
        ...
```
This means the override rules above will only be applied to resources propagated to clusters other than the one named member1.

## Overriders

Karmada offers various alternatives to declare the override rules:
- `ImageOverrider`: dedicated to overriding images for workloads.
- `CommandOverrider`: dedicated to overriding commands for workloads.
- `ArgsOverrider`: dedicated to overriding args for workloads.
- `PlaintextOverrider`: a general-purpose tool to override any kind of resource.

### ImageOverrider
The `ImageOverrider` is a refined tool to override images with the format `[registry/]repository[:tag|@digest]` (e.g. `/spec/template/spec/containers/0/image`) for workloads such as `Deployment`.

The allowed operations are as follows:
- `add`: appends the registry, repository, or tag/digest to container images.
- `remove`: removes the registry, repository, or tag/digest from container images.
- `replace`: replaces the registry, repository, or tag/digest of container images.

#### Examples
Suppose we create a deployment named `myapp`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  ...
spec:
  template:
    spec:
      containers:
        - image: myapp:1.0.0
          name: myapp
```

**Example 1: Add the registry when workloads are propagating to specific clusters.**
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
- overrideRules: - - overriders: - imageOverrider: - - component: Registry - operator: add - value: test-repo -``` -It means `add` a registry`test-repo` to the image of `myapp`. - -After the policy is applied for `myapp`, the image will be: -```yaml - containers: - - image: test-repo/myapp:1.0.0 - name: myapp -``` - -**Example 2: replace the repository when workloads are propagating to specific clusters.** -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - ... - overrideRules: - - overriders: - imageOverrider: - - component: Repository - operator: replace - value: myapp2 -``` -It means `replace` the repository from `myapp` to `myapp2`. - -After the policy is applied for `myapp`, the image will be: -```yaml - containers: - - image: myapp2:1.0.0 - name: myapp -``` - -**Example 3: remove the tag when workloads are propagating to specific clusters.** -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - ... - overrideRules: - - overriders: - imageOverrider: - - component: Tag - operator: remove -``` -It means `remove` the tag of the image `myapp`. - -After the policy is applied for `myapp`, the image will be: -```yaml - containers: - - image: myapp - name: myapp -``` - -### CommandOverrider -The `CommandOverrider` is a refined tool to override commands(e.g.`/spec/template/spec/containers/0/command`) -for workloads, such as `Deployment`. - -The allowed operations are as follows: -- `add`: appends one or more flags to the command list. -- `remove`: removes one or more flags from the command list. - -#### Examples -Suppose we create a deployment named `myapp`. -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: myapp - ... -spec: - template: - spec: - containers: - - image: myapp - name: myapp - command: - - ./myapp - - --parameter1=foo - - --parameter2=bar -``` - -**Example 1: Add flags when workloads are propagating to specific clusters.** -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - ... - overrideRules: - - overriders: - commandOverrider: - - containerName: myapp - operator: add - value: - - --cluster=member1 -``` -It means `add`(appending) a new flag `--cluster=member1` to the `myapp`. - -After the policy is applied for `myapp`, the command list will be: -```yaml - containers: - - image: myapp - name: myapp - command: - - ./myapp - - --parameter1=foo - - --parameter2=bar - - --cluster=member1 -``` - -**Example 2: Remove flags when workloads are propagating to specific clusters.** -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - ... - overrideRules: - - overriders: - commandOverrider: - - containerName: myapp - operator: remove - value: - - --parameter1=foo -``` -It means `remove` the flag `--parameter1=foo` from the command list. - -After the policy is applied for `myapp`, the `command` will be: -```yaml - containers: - - image: myapp - name: myapp - command: - - ./myapp - - --parameter2=bar -``` - -### ArgsOverrider -The `ArgsOverrider` is a refined tool to override args(such as `/spec/template/spec/containers/0/args`) for workloads, -such as `Deployments`. - -The allowed operations are as follows: -- `add`: appends one or more args to the command list. -- `remove`: removes one or more args from the command list. - -Note: the usage of `ArgsOverrider` is similar to `CommandOverrider`, You can refer to the `CommandOverrider` examples. 
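For illustration only, an `ArgsOverrider` rule mirrors the `CommandOverrider` shape; the container name and flag below are hypothetical:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example
spec:
  ...
  overrideRules:
    - overriders:
        argsOverrider:
          - containerName: myapp
            operator: add
            value:
              - --enable-feature-x    # hypothetical flag
```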
- -### PlaintextOverrider -The `PlaintextOverrider` is a simple overrider that overrides target fields according to path, operator and value, just like `kubectl patch`. - -The allowed operations are as follows: -- `add`: appends one or more elements to the resources. -- `remove`: removes one or more elements from the resources. -- `replace`: replaces one or more elements from the resources. - -Suppose we create a configmap named `myconfigmap`. -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: myconfigmap - ... -data: - example: 1 -``` - -**Example 1: replace data of the configmap when resources are propagating to specific clusters.** -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: OverridePolicy -metadata: - name: example -spec: - ... - overrideRules: - - overriders: - plaintext: - - path: /data/example - operator: replace - value: 2 -``` -It means `replace` data of the configmap from `example: 1` to the `example: 2`. - -After the policy is applied for `myconfigmap`, the configmap will be: -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: myconfigmap - ... -data: - example: 2 -``` - -[1]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L13 -[2]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L189 \ No newline at end of file diff --git a/docs/userguide/promote-legacy-workload.md b/docs/userguide/promote-legacy-workload.md deleted file mode 100644 index 4ce9c4404..000000000 --- a/docs/userguide/promote-legacy-workload.md +++ /dev/null @@ -1,58 +0,0 @@ -# Promote legacy workload - -Assume that there is a member cluster where a workload (like Deployment) is deployed but not managed by Karmada, we can use the `karmadactl promote` command to let Karmada take over this workload directly and not to cause its pods to restart. - -## Example - -### For member cluster in `Push` mode -There is an `nginx` Deployment that belongs to namespace `default` in member cluster `cluster1`. - -``` -[root@master1]# kubectl get cluster -NAME VERSION MODE READY AGE -cluster1 v1.22.3 Push True 24d -``` - -``` -[root@cluster1]# kubectl get deploy nginx -NAME READY UP-TO-DATE AVAILABLE AGE -nginx 1/1 1 1 66s - -[root@cluster1]# kubectl get pod -NAME READY STATUS RESTARTS AGE -nginx-6799fc88d8-sqjj4 1/1 Running 0 2m12s -``` - -We can promote it to Karmada by executing the command below on the Karmada control plane. - -``` -[root@master1]# karmadactl promote deployment nginx -n default -c member1 -Resource "apps/v1, Resource=deployments"(default/nginx) is promoted successfully -``` - -The nginx deployment has been adopted by Karmada. - -``` -[root@master1]# kubectl get deploy -NAME READY UP-TO-DATE AVAILABLE AGE -nginx 1/1 1 1 7m25s -``` - -And the pod created by the nginx deployment in the member cluster wasn't restarted. - -``` -[root@cluster1]# kubectl get pod -NAME READY STATUS RESTARTS AGE -nginx-6799fc88d8-sqjj4 1/1 Running 0 15m -``` - -### For member cluster in `Pull` mode -Most steps are same as those for clusters in `Push` mode. Only the flags of the `karmadactl promote` command are different. - -``` -karmadactl promote deployment nginx -n default -c cluster1 --cluster-kubeconfig= -``` - -For more flags and example about the command, you can use `karmadactl promote --help`. 
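If the same kind is served by several API groups or versions, you can spell out the full GVK to disambiguate (see the note below); for example:

```shell
# Promote using the fully-qualified kind to avoid apiVersion ambiguity
# between the Karmada control plane and the member cluster.
karmadactl promote deployment.v1.apps nginx -n default -c member1
```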
- -> Note: As the version upgrade of resources in Kubernetes is in progress, the apiserver of Karmada control plane cloud be different from member clusters. To avoid compatibility issues, you can specify the GVK of a resource, such as replacing `deployment` with `deployment.v1.apps`. \ No newline at end of file diff --git a/docs/userguide/propagate-dependencies.md b/docs/userguide/propagate-dependencies.md deleted file mode 100644 index 271ed73af..000000000 --- a/docs/userguide/propagate-dependencies.md +++ /dev/null @@ -1,117 +0,0 @@ -# Propagate dependencies -Deployment, Job, Pod, DaemonSet and StatefulSet dependencies (ConfigMaps, Secrets and ServiceAccounts) can be propagated to member -clusters automatically. This document demonstrates how to use this feature. For more design details, please refer to -[dependencies-automatically-propagation](../proposals/dependencies-automatically-propagation/README.md) - -##Prerequisites -### Karmada has been installed - -We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run -`hack/local-up-karmada.sh` script which is also used to run our E2E cases. - -### Enable PropagateDeps feature -```bash -kubectl edit deployment karmada-controller-manager -n karmada-system -``` -Add `--feature-gates=PropagateDeps=true` option. - -## Example -Create a Deployment mounted with a ConfigMap -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: my-nginx - labels: - app: my-nginx -spec: - replicas: 2 - selector: - matchLabels: - app: my-nginx - template: - metadata: - labels: - app: my-nginx - spec: - containers: - - image: nginx - name: my-nginx - ports: - - containerPort: 80 - volumeMounts: - - name: configmap - mountPath: "/configmap" - volumes: - - name: configmap - configMap: - name: my-nginx-config ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: my-nginx-config -data: - nginx.properties: | - proxy-connect-timeout: "10s" - proxy-read-timeout: "10s" - client-max-body-size: "2m" -``` -Create a propagation policy with this Deployment and set `propagateDeps: true`. -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: my-nginx-propagation -spec: - propagateDeps: true - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: my-nginx - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - replicaScheduling: - replicaDivisionPreference: Weighted - replicaSchedulingType: Divided - weightPreference: - staticWeightList: - - targetCluster: - clusterNames: - - member1 - weight: 1 - - targetCluster: - clusterNames: - - member2 - weight: 1 -``` -Upon successful policy execution, the Deployment and ConfigMap are properly propagated to the member cluster. -```bash -$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get propagationpolicy -NAME AGE -my-nginx-propagation 16s -$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment -NAME READY UP-TO-DATE AVAILABLE AGE -my-nginx 2/2 2 2 22m -# member cluster1 -$ kubectl config use-context member1 -Switched to context "member1". -$ kubectl get deployment -NAME READY UP-TO-DATE AVAILABLE AGE -my-nginx 1/1 1 1 25m -$ kubectl get configmap -NAME DATA AGE -my-nginx-config 1 26m -# member cluster2 -$ kubectl config use-context member2 -Switched to context "member2". 
-$ kubectl get deployment -NAME READY UP-TO-DATE AVAILABLE AGE -my-nginx 1/1 1 1 27m -$ kubectl get configmap -NAME DATA AGE -my-nginx-config 1 27m -``` diff --git a/docs/userguide/resource-propagating.md b/docs/userguide/resource-propagating.md deleted file mode 100644 index 17b056431..000000000 --- a/docs/userguide/resource-propagating.md +++ /dev/null @@ -1,281 +0,0 @@ -# Resource Propagating - -The [PropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L13) and [ClusterPropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L292) APIs are provided to propagate resources. For the differences between the two APIs, please see [here](../frequently-asked-questions.md#what-is-the-difference-between-propagationpolicy-and-clusterpropagationpolicy). - -Here, we use PropagationPolicy as an example to describe how to propagate resources. - -## Before you start - -[Install Karmada](../installation/installation.md) and prepare the [karmadactl command-line](../installation/install-kubectl-karmada.md) tool. - -## Deploy a simplest multi-cluster Deployment - -### Create a PropagationPolicy object - -You can propagate a Deployment by creating a PropagationPolicy object defined in a YAML file. For example, this YAML - file describes a Deployment object named nginx under default namespace need to be propagated to member1 cluster: - -```yaml -# propagationpolicy.yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: example-policy # The default namespace is `default`. -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope. - placement: - clusterAffinity: - clusterNames: - - member1 -``` - -1. Create a propagationPolicy base on the YAML file: -```shell -kubectl apply -f propagationpolicy.yaml -``` -2. Create a Deployment nginx resource: -```shell -kubectl create deployment nginx --image nginx -``` -> Note: The resource exists only as a template in karmada. After being propagated to a member cluster, the behavior of the resource is the same as that of a single kubernetes cluster. - -> Note: Resources and PropagationPolicy are created in no sequence. -3. Display information of the deployment: -```shell -karmadactl get deployment -``` -The output is similar to this: -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION -nginx member1 1/1 1 1 52s Y -``` -4. List the pods created by the deployment: -```shell -karmadactl get pod -l app=nginx -``` -The output is similar to this: -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY STATUS RESTARTS AGE -nginx-6799fc88d8-s7vv9 member1 1/1 Running 0 52s -``` - -### Update PropagationPolicy - -You can update the propagationPolicy by applying a new YAML file. This YAML file propagates the Deployment to the member2 cluster. - -```yaml -# propagationpolicy-update.yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: example-policy -spec: - resourceSelectors: - - apiVersion: apps/v1 - kind: Deployment - name: nginx - placement: - clusterAffinity: - clusterNames: # Modify the selected cluster to propagate the Deployment. - - member2 -``` - -1. 
Apply the new YAML file: -```shell -kubectl apply -f propagationpolicy-update.yaml -``` -2. Display information of the deployment (the output is similar to this): -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION -nginx member2 1/1 1 1 5s Y -``` -3. List the pods of the deployment (the output is similar to this): -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY STATUS RESTARTS AGE -nginx-6799fc88d8-8t8cc member2 1/1 Running 0 17s -``` -> Note: Updating the `.spec.resourceSelectors` field to change hit resources is currently not supported. - -### Update Deployment - -You can update the deployment template. The changes will be automatically synchronized to the member clusters. - -1. Update deployment replicas to 2 -2. Display information of the deployment (the output is similar to this): -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION -nginx member2 2/2 2 2 7m59s Y -``` -3. List the pods of the deployment (the output is similar to this): -```shell -The karmadactl get command now only supports the push mode. [ member3 ] is not running in push mode. - -NAME CLUSTER READY STATUS RESTARTS AGE -nginx-6799fc88d8-8t8cc member2 1/1 Running 0 8m12s -nginx-6799fc88d8-zpl4j member2 1/1 Running 0 17s -``` - -### Delete a propagationPolicy - -Delete the propagationPolicy by name: -```shell -kubectl delete propagationpolicy example-policy -``` -Deleting a propagationPolicy does not delete deployments propagated to member clusters. You need to delete deployments in the karmada control-plane: -```shell -kubectl delete deployment nginx -``` - -## Deploy deployment into a specified set of target clusters - -`.spec.placement.clusterAffinity` field of PropagationPolicy represents scheduling restrictions on a certain set of clusters, without which any cluster can be scheduling candidates. - -It has four fields to set: -- LabelSelector -- FieldSelector -- ClusterNames -- ExcludeClusters - -### LabelSelector - -LabelSelector is a filter to select member clusters by labels. It uses `*metav1.LabelSelector` type. If it is non-nil and non-empty, only the clusters match this filter will be selected. - -PropagationPolicy can be configured as follows: - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: test-propagation -spec: - ... - placement: - clusterAffinity: - labelSelector: - matchLabels: - location: us - ... -``` - -PropagationPolicy can also be configured as follows: - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: test-propagation -spec: - ... - placement: - clusterAffinity: - labelSelector: - matchExpressions: - - key: location - operator: In - values: - - us - ... -``` - -For a description of `matchLabels` and `matchExpressions`, you can refer to [Resources that support set-based requirements](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements). - -### FieldSelector - -FieldSelector is a filter to select member clusters by fields. If it is non-nil and non-empty, only the clusters match this filter will be selected. 
- -PropagationPolicy can be configured as follows: - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - ... - placement: - clusterAffinity: - fieldSelector: - matchExpressions: - - key: provider - operator: In - values: - - huaweicloud - - key: region - operator: NotIn - values: - - cn-south-1 - ... -``` - -If multiple `matchExpressions` are specified in the `fieldSelector`, the cluster must match all `matchExpressions`. - -The `key` in `matchExpressions` now supports three values: `provider`, `region`, and `zone`, which correspond to the `.spec.provider`, `.spec.region`, and `.spec.zone` fields of the Cluster object, respectively. - -The `operator` in `matchExpressions` now supports `In` and `NotIn`. - -### ClusterNames - -Users can set the `ClusterNames` field to specify the selected clusters. - -PropagationPolicy can be configured as follows: - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - ... - placement: - clusterAffinity: - clusterNames: - - member1 - - member2 - ... -``` - -### ExcludeClusters - -Users can set the `ExcludeClusters` fields to specify the clusters to be ignored. - -PropagationPolicy can be configured as follows: - -```yaml -apiVersion: policy.karmada.io/v1alpha1 -kind: PropagationPolicy -metadata: - name: nginx-propagation -spec: - ... - placement: - clusterAffinity: - exclude: - - member1 - - member3 - ... -``` - -## Configuring Multi-Cluster HA for Deployment - - -## Multi-Cluster Failover - -Please refer to [Failover feature of Karmada](failover.md#failover-feature-of-karmada). - -## Propagate specified resources to clusters - - -## Adjusting the instance propagation policy of Deployment in clusters \ No newline at end of file diff --git a/docs/working-with-anp.md b/docs/working-with-anp.md deleted file mode 100644 index 8762d10db..000000000 --- a/docs/working-with-anp.md +++ /dev/null @@ -1,310 +0,0 @@ -# Deploy apiserver-network-proxy (ANP) - -## Purpose - -For a member cluster that joins Karmada in the pull mode, you need to provide a method to connect the network between the Karmada control plane and the member cluster, so that karmada-aggregated-apiserver can access this member cluster. - -Deploying ANP to achieve appeal is one of the methods. This article describes how to deploy ANP in Karmada. - -## Environment - -Karmada can be deployed using the kind tool. - -You can directly use `hack/local-up-karmada.sh` to deploy Karmada. - -## Actions - -### Step 1: Download code - -To facilitate demonstration, the code is modified based on ANP v0.0.24 to support access to the front server through HTTP. Here is the code repository address: https://github.com/mrlihanbo/apiserver-network-proxy/tree/v0.0.24/dev. - -```shell -git clone -b v0.0.24/dev https://github.com/mrlihanbo/apiserver-network-proxy.git -cd apiserver-network-proxy/ -``` - -### Step 2: Compile images - -Compile the proxy-server and proxy-agent images. - -```shell -docker build . --build-arg ARCH=amd64 -f artifacts/images/agent-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24 - -docker build . 
--build-arg ARCH=amd64 -f artifacts/images/server-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24 -``` - -### Step 3: Generate a certificate - -Run the command to check the IP address of karmada-host-control-plane: - -```shell -docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane -``` - -Run the `make certs` command to generate a certificate and specify PROXY_SERVER_IP as the IP address obtained in the preceding command. - -```shell -make certs PROXY_SERVER_IP=x.x.x.x -``` - -The certificate is generated in the `certs` folder. - -### Step 4: Deploy proxy-server - -Save the `proxy-server.yaml` file in the root directory of the ANP code repository. - -
-unfold me to see the yaml - -```yaml -# proxy-server.yaml - -apiVersion: apps/v1 -kind: Deployment -metadata: - name: proxy-server - namespace: karmada-system -spec: - replicas: 1 - selector: - matchLabels: - app: proxy-server - template: - metadata: - labels: - app: proxy-server - spec: - containers: - - command: - - /proxy-server - args: - - --health-port=8092 - - --cluster-ca-cert=/var/certs/server/cluster-ca-cert.crt - - --cluster-cert=/var/certs/server/cluster-cert.crt - - --cluster-key=/var/certs/server/cluster-key.key - - --mode=http-connect - - --proxy-strategies=destHost - - --server-ca-cert=/var/certs/server/server-ca-cert.crt - - --server-cert=/var/certs/server/server-cert.crt - - --server-key=/var/certs/server/server-key.key - image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24 - imagePullPolicy: IfNotPresent - livenessProbe: - failureThreshold: 3 - httpGet: - path: /healthz - port: 8092 - scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 60 - name: proxy-server - volumeMounts: - - mountPath: /var/certs/server - name: cert - restartPolicy: Always - hostNetwork: true - volumes: - - name: cert - secret: - secretName: proxy-server-cert ---- -apiVersion: v1 -kind: Secret -metadata: - name: proxy-server-cert - namespace: karmada-system -type: Opaque -data: - server-ca-cert.crt: | - {{server_ca_cert}} - server-cert.crt: | - {{server_cert}} - server-key.key: | - {{server_key}} - cluster-ca-cert.crt: | - {{cluster_ca_cert}} - cluster-cert.crt: | - {{cluster_cert}} - cluster-key.key: | - {{cluster_key}} -``` - -
- -Save the `replace-proxy-server.sh` file in the root directory of the ANP code repository. - -
-unfold me to see the shell - -```shell -#!/bin/bash - -cert_yaml=proxy-server.yaml - -SERVER_CA_CERT=$(cat certs/frontend/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{server_ca_cert}}/${SERVER_CA_CERT}/g" ${cert_yaml} - -SERVER_CERT=$(cat certs/frontend/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{server_cert}}/${SERVER_CERT}/g" ${cert_yaml} - -SERVER_KEY=$(cat certs/frontend/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{server_key}}/${SERVER_KEY}/g" ${cert_yaml} - -CLUSTER_CA_CERT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{cluster_ca_cert}}/${CLUSTER_CA_CERT}/g" ${cert_yaml} - -CLUSTER_CERT=$(cat certs/agent/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{cluster_cert}}/${CLUSTER_CERT}/g" ${cert_yaml} - - -CLUSTER_KEY=$(cat certs/agent/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{cluster_key}}/${CLUSTER_KEY}/g" ${cert_yaml} -``` - -
- -Run the following commands to run the script: - -```shell -chmod +x replace-proxy-server.sh -bash replace-proxy-server.sh -``` - -Deploy the proxy-server on the Karmada control plane: - -```shell -kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24 --name karmada-host -export KUBECONFIG=/root/.kube/karmada.config -kubectl --context=karmada-host apply -f proxy-server.yaml -``` - -### Step 5: Deploy proxy-agent - -Save the `proxy-agent.yaml` file in the root directory of the ANP code repository. - -
-unfold me to see the yaml - -```yaml -# proxy-agent.yaml - -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: proxy-agent - name: proxy-agent - namespace: karmada-system -spec: - replicas: 1 - selector: - matchLabels: - app: proxy-agent - template: - metadata: - labels: - app: proxy-agent - spec: - containers: - - command: - - /proxy-agent - args: - - '--ca-cert=/var/certs/agent/ca.crt' - - '--agent-cert=/var/certs/agent/proxy-agent.crt' - - '--agent-key=/var/certs/agent/proxy-agent.key' - - '--proxy-server-host={{proxy_server_addr}}' - - '--proxy-server-port=8091' - - '--agent-identifiers=host={{identifiers}}' - image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24 - imagePullPolicy: IfNotPresent - name: proxy-agent - livenessProbe: - httpGet: - scheme: HTTP - port: 8093 - path: /healthz - initialDelaySeconds: 15 - timeoutSeconds: 60 - volumeMounts: - - mountPath: /var/certs/agent - name: cert - volumes: - - name: cert - secret: - secretName: proxy-agent-cert ---- -apiVersion: v1 -kind: Secret -metadata: - name: proxy-agent-cert - namespace: karmada-system -type: Opaque -data: - ca.crt: | - {{proxy_agent_ca_crt}} - proxy-agent.crt: | - {{proxy_agent_crt}} - proxy-agent.key: | - {{proxy_agent_key}} -``` - -
- -Save the `replace-proxy-agent.sh` file in the root directory of the ANP code repository. - -
-unfold me to see the shell - -```shell -#!/bin/bash - -cert_yaml=proxy-agent.yaml - -karmada_controlplan_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane) -member3_cluster_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' member3-control-plane) -sed -i'' -e "s/{{proxy_server_addr}}/${karmada_controlplan_addr}/g" ${cert_yaml} -sed -i'' -e "s/{{identifiers}}/${member3_cluster_addr}/g" ${cert_yaml} - -PROXY_AGENT_CA_CRT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{proxy_agent_ca_crt}}/${PROXY_AGENT_CA_CRT}/g" ${cert_yaml} - -PROXY_AGENT_CRT=$(cat certs/agent/issued/proxy-agent.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{proxy_agent_crt}}/${PROXY_AGENT_CRT}/g" ${cert_yaml} - -PROXY_AGENT_KEY=$(cat certs/agent/private/proxy-agent.key | base64 | tr "\n" " "|sed s/[[:space:]]//g) -sed -i'' -e "s/{{proxy_agent_key}}/${PROXY_AGENT_KEY}/g" ${cert_yaml} -``` - -
- -Run the following commands to run the script: - -```shell -chmod +x replace-proxy-agent.sh -bash replace-proxy-agent.sh -``` - -Deploy the proxy-agent in the pull mode for a member cluster (in this example, the `member3` cluster is in the pull mode.): - -```shell -kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24 --name member3 -kubectl --kubeconfig=/root/.kube/members.config --context=member3 apply -f proxy-agent.yaml -``` - -**The ANP deployment is complete now.** - -### Step 6: Add command flags for the karmada-agent deployment - -After deploying the ANP deployment, you need to add extra command flags `--cluster-api-endpoint` and `--proxy-server-address` for the `karmada-agent` deployment in the `member3` cluster. - -Where `--cluster-api-endpoint` is the APIEndpoint of the cluster. You can obtain it from the KubeConfig file of the `member3` cluster. - -Where `--proxy-server-address` is the address of the proxy server that is used to proxy the cluster. In current case, you can set `--proxy-server-address` to `http://:8088`. Get `karmada_controlplan_addr` value through the following command: - -```shell -docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane -``` - -Set port `8088` by modifying the code in ANP: https://github.com/mrlihanbo/apiserver-network-proxy/blob/v0.0.24/dev/cmd/server/app/server.go#L267. You can also modify it to a different value. diff --git a/docs/working-with-argocd.md b/docs/working-with-argocd.md deleted file mode 100644 index c4c0d9478..000000000 --- a/docs/working-with-argocd.md +++ /dev/null @@ -1,96 +0,0 @@ -# Working with Argo CD - -This topic walks you through how to use the [Argo CD](https://github.com/argoproj/argo-cd/) to manage your workload -`across clusters` with `Karmada`. - -## Prerequisites -### Argo CD Installation -You have installed Argo CD following the instructions in [Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/#getting-started). - -### Karmada Installation -In this example, we are using a Karmada environment with at lease `3` member clusters joined. - -You can set up the environment by `hack/local-up-karmada.sh`, which is also used to run our E2E cases. - -```bash -# kubectl get clusters -NAME VERSION MODE READY AGE -member1 v1.19.1 Push True 18h -member2 v1.19.1 Push True 18h -member3 v1.19.1 Pull True 17h -``` - -## Registering Karmada to Argo CD -This step registers Karmada control plane to Argo CD. - -First list the contexts of all clusters in your current kubeconfig: -```bash -kubectl config get-contexts -o name -``` - -Choose the context of the Karmada control plane from the list and add it to `argocd cluster add CONTEXTNAME`. -For example, for `karmada-apiserver` context, run: -```bash -argocd cluster add karmada-apiserver -``` - -If everything goes well, you can see the registered Karmada control plane from the Argo CD UI, e.g.: - -![](./images/argocd-register-karmada.png) - -## Creating Apps Via UI - -### Preparing Apps -Take the [guestbook](https://github.com/argoproj/argocd-example-apps/tree/53e28ff20cc530b9ada2173fbbd64d48338583ba/guestbook) -as example. - -First, fork the [argocd-example-apps](https://github.com/argoproj/argocd-example-apps) repo and create a branch -`karmada-demo`. - -Then, create a [PropagationPolicy manifest](https://github.com/RainbowMango/argocd-example-apps/blob/e499ea5c6f31b665366bfbe5161737dc8723fb3b/guestbook/propagationpolicy.yaml) under the `guestbook` directory. 
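For reference, a minimal PropagationPolicy along these lines would propagate the guestbook workloads; the cluster names are illustrative, and the manifest linked above is the authoritative version:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: guestbook
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: guestbook-ui
    - apiVersion: v1
      kind: Service
      name: guestbook-ui
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```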
- -### Creating Apps - -Click the `+ New App` button as shown below: - -![](./images/argocd-new-app.png) - -Give your app the name `guestbook-multi-cluster`, use the project `default`, and leave the sync policy as `Manual`: - -![](./images/argocd-new-app-name.png) - -Connect the `forked repo` to Argo CD by setting repository url to the github repo url, set revision as `karmada-demo`, -and set the path to `guestbook`: - -![](./images/argocd-new-app-repo.png) - -For Destination, set cluster to `karmada` and namespace to `default`: - -![](./images/argocd-new-app-cluster.png) - -### Syncing Apps -You can sync your applications via UI by simply clicking the SYNC button and following the pop-up instructions, e.g.: - -![](./images/argocd-sync-apps.png) - -More details please refer to [argocd guide: sync the application](https://argo-cd.readthedocs.io/en/stable/getting_started/#7-sync-deploy-the-application). - -## Checking Apps Status -For deployment running in more than one clusters, you don't need to create applications for each -cluster. You can get the overall and detailed status from one `Application`. - -![](./images/argocd-status-overview.png) - -The `svc/guestbook-ui`, `deploy/guestbook-ui` and `propagationpolicy/guestbook` in the middle of the picture are the -resources created by the manifest in the forked repo. And the `resourcebinding/guestbook-ui-service` and -`resourcebinding/guestbook-ui-deployment` in the right of the picture are the resources created by Karmada. - -### Checking Detailed Status -You can obtain the Deployment's detailed status by `resourcebinding/guestbook-ui-deployment`. - -![](./images/argocd-status-resourcebinding.png) - -### Checking Aggregated Status -You can obtain the aggregated status of the Deployment from UI by `deploy/guestbook-ui`. - -![](./images/argocd-status-aggregated.png) \ No newline at end of file diff --git a/docs/working-with-filebeat.md b/docs/working-with-filebeat.md deleted file mode 100644 index bf948615b..000000000 --- a/docs/working-with-filebeat.md +++ /dev/null @@ -1,240 +0,0 @@ -# Use Filebeat to collect logs of Karmada member clusters - -[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [kafka](https://github.com/apache/kafka) for indexing. - -This document demonstrates how to use the `Filebeat` to collect logs of Karmada member clusters. - -## Start up Karmada clusters - -You just need to clone Karmada repo, and run the following script in Karmada directory. - -```bash -hack/local-up-karmada.sh -``` - -## Start Filebeat - -1. Create resource objects of Filebeat, the content is as follows. You can specify a list of inputs in the `filebeat.inputs` section of the `filebeat.yml`. Inputs specify how Filebeat locates and processes input data, also you can configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example will collect the log information of each container and write the collected logs to a file. 
-
-## Start Filebeat
-
-1. Create the Filebeat resource objects with the content below. You can specify a list of inputs in the `filebeat.inputs` section of `filebeat.yml`. Inputs specify how Filebeat locates and processes input data. You can also configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. This example collects the log information of each container and writes the collected logs to a file. For more detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs

-   ```yaml
-   apiVersion: v1
-   kind: Namespace
-   metadata:
-     name: logging
-   ---
-   apiVersion: v1
-   kind: ServiceAccount
-   metadata:
-     name: filebeat
-     namespace: logging
-     labels:
-       k8s-app: filebeat
-   ---
-   apiVersion: rbac.authorization.k8s.io/v1
-   kind: ClusterRole
-   metadata:
-     name: filebeat
-   rules:
-   - apiGroups: [""] # "" indicates the core API group
-     resources:
-     - namespaces
-     - pods
-     verbs:
-     - get
-     - watch
-     - list
-   ---
-   apiVersion: rbac.authorization.k8s.io/v1
-   kind: ClusterRoleBinding
-   metadata:
-     name: filebeat
-   subjects:
-   - kind: ServiceAccount
-     name: filebeat
-     namespace: logging # must match the namespace of the ServiceAccount above
-   roleRef:
-     kind: ClusterRole
-     name: filebeat
-     apiGroup: rbac.authorization.k8s.io
-   ---
-   apiVersion: v1
-   kind: ConfigMap
-   metadata:
-     name: filebeat-config
-     namespace: logging
-     labels:
-       k8s-app: filebeat
-       kubernetes.io/cluster-service: "true"
-   data:
-     filebeat.yml: |-
-       filebeat.inputs:
-       - type: container
-         paths:
-           - /var/log/containers/*.log
-         processors:
-           - add_kubernetes_metadata:
-               host: ${NODE_NAME}
-               matchers:
-               - logs_path:
-                   logs_path: "/var/log/containers/"
-       # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
-       #filebeat.autodiscover:
-       #  providers:
-       #    - type: kubernetes
-       #      node: ${NODE_NAME}
-       #      hints.enabled: true
-       #      hints.default_config:
-       #        type: container
-       #        paths:
-       #          - /var/log/containers/*${data.kubernetes.container.id}.log
-
-       processors:
-         - add_cloud_metadata:
-         - add_host_metadata:
-
-       #output.elasticsearch:
-       #  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
-       #  username: ${ELASTICSEARCH_USERNAME}
-       #  password: ${ELASTICSEARCH_PASSWORD}
-       output.file:
-         path: "/tmp/filebeat"
-         filename: filebeat
-   ---
-   apiVersion: apps/v1
-   kind: DaemonSet
-   metadata:
-     name: filebeat
-     namespace: logging
-     labels:
-       k8s-app: filebeat
-   spec:
-     selector:
-       matchLabels:
-         k8s-app: filebeat
-     template:
-       metadata:
-         labels:
-           k8s-app: filebeat
-       spec:
-         serviceAccountName: filebeat
-         terminationGracePeriodSeconds: 30
-         tolerations:
-         - effect: NoSchedule
-           key: node-role.kubernetes.io/master
-         containers:
-         - name: filebeat
-           image: docker.elastic.co/beats/filebeat:8.0.0-beta1-amd64
-           imagePullPolicy: IfNotPresent
-           args: [ "-c", "/usr/share/filebeat/filebeat.yml", "-e",]
-           env:
-           - name: NODE_NAME
-             valueFrom:
-               fieldRef:
-                 fieldPath: spec.nodeName
-           securityContext:
-             runAsUser: 0
-           resources:
-             limits:
-               memory: 200Mi
-             requests:
-               cpu: 100m
-               memory: 100Mi
-           volumeMounts:
-           - name: config
-             mountPath: /usr/share/filebeat/filebeat.yml
-             readOnly: true
-             subPath: filebeat.yml
-           - name: inputs
-             mountPath: /usr/share/filebeat/inputs.d
-             readOnly: true
-           - name: data
-             mountPath: /usr/share/filebeat/data
-           - name: varlibdockercontainers
-             mountPath: /var/lib/docker/containers
-             readOnly: true
-           - name: varlog
-             mountPath: /var/log
-             readOnly: true
-         volumes:
-         - name: config
-           configMap:
-             defaultMode: 0600
-             name: filebeat-config
-         - name: varlibdockercontainers
-           hostPath:
-             path: /var/lib/docker/containers
-         - name: varlog
-           hostPath:
-             path: /var/log
-         - name: inputs
-           configMap:
-             defaultMode: 0600
-             name: filebeat-config
-         # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
-         - name: data
-           hostPath:
-             path: /var/lib/filebeat-data
-             type: DirectoryOrCreate
-   ```
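-
-   After these objects are propagated to a member cluster (see the next step), you can check on that member cluster that the DaemonSet is running and that logs land in the file output configured above (a quick sanity check; the kubeconfig path matches the one used earlier in this guide, and `<filebeat-pod>` is a placeholder for a real pod name):
-
-   ```bash
-   kubectl --kubeconfig=/root/.kube/members.config --context=member1 -n logging get pods -l k8s-app=filebeat
-   kubectl --kubeconfig=/root/.kube/members.config --context=member1 -n logging exec <filebeat-pod> -- ls /tmp/filebeat
-   ```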
-
-2. Run the command below to apply the Karmada PropagationPolicy and ClusterPropagationPolicy.

-   ```
-   cat < 0
-   msg := sprintf("you must provide labels: %v", [missing])
-   }
-   ```
-
-### Create k8srequiredlabels constraint
-
-```yaml
-apiVersion: constraints.gatekeeper.sh/v1beta1
-kind: K8sRequiredLabels
-metadata:
-  name: ns-must-have-gk
-spec:
-  match:
-    kinds:
-      - apiGroups: [""]
-        kinds: ["Namespace"]
-  parameters:
-    labels: ["gatekeepers"]
-```
-
-### Create a bad namespace
-
-```console
-kubectl create ns test
-Error from server ([ns-must-have-gk] you must provide labels: {"gatekeepers"}): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeepers"}
-```
-
-## Reference
-
-- https://github.com/open-policy-agent/gatekeeper
\ No newline at end of file
diff --git a/docs/working-with-istio-on-flat-network.md b/docs/working-with-istio-on-flat-network.md
deleted file mode 100644
index 2a8c7b572..000000000
--- a/docs/working-with-istio-on-flat-network.md
+++ /dev/null
@@ -1,467 +0,0 @@
-# Use Istio on Karmada
-
-This document uses an example to demonstrate how to use [Istio](https://istio.io/) on Karmada.
-
-Follow this guide to install the Istio control plane on `karmada-host` (the primary cluster) and configure `member1` and `member2` (the remote clusters) to use the control plane in `karmada-host`. All clusters reside on the `network1` network, meaning there is direct connectivity between the pods in both clusters.
-
-
-
-## Install Karmada
-
-### Install karmada control plane
-
-Following the steps in [Install karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane) in the Quick Start, you can set up a Karmada environment.
-
-## Deploy Istio
-
-***
-If you are testing a multicluster setup on `kind`, you can use [MetalLB](https://metallb.universe.tf/installation/) to make use of `EXTERNAL-IP` for `LoadBalancer` services.
-***
-
-### Install istioctl
-Please refer to the [istioctl installation guide](https://istio.io/latest/docs/setup/getting-started/#download).
-
-### Prepare CA certificates
-
-Follow the steps in [plug-in-certificates-and-key-into-the-cluster](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster) to configure the Istio CA.
-
-Replace the cluster name `cluster1` with `primary`; the output will look like the following:
-```bash
-root@karmada-demo istio-on-karmada# tree certs
-certs
-├── primary
-│   ├── ca-cert.pem
-│   ├── ca-key.pem
-│   ├── cert-chain.pem
-│   └── root-cert.pem
-├── root-ca.conf
-├── root-cert.csr
-├── root-cert.pem
-├── root-cert.srl
-└── root-key.pem
-```
-### Install Istio on karmada-apiserver
-
-Export `KUBECONFIG` and switch to `karmada-apiserver`:
-
-```
-# export KUBECONFIG=$HOME/.kube/karmada.config
-
-# kubectl config use-context karmada-apiserver
-```
-
-Create a secret `cacerts` in the `istio-system` namespace:
-```bash
-kubectl create namespace istio-system
-kubectl create secret generic cacerts -n istio-system \
-    --from-file=certs/primary/ca-cert.pem \
-    --from-file=certs/primary/ca-key.pem \
-    --from-file=certs/primary/root-cert.pem \
-    --from-file=certs/primary/cert-chain.pem
-```
-
-Create a propagation policy for the `cacerts` secret:
-```bash
-cat < kind-karmada.yaml
-```
-
-```bash
-kubectl create secret generic istio-kubeconfig --from-file=config=kind-karmada.yaml -nistio-system
-```
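-
-Before moving on to the next step, you can verify that the secret exists in the expected namespace (a quick sanity check):
-
-```bash
-kubectl get secret istio-kubeconfig -n istio-system
-```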
-
-3. Install the istio control plane
-
-```bash
-cat < istio-remote-secret-member1.yaml
-```
-
-### Prepare member2 cluster secret
-
-1. Export `KUBECONFIG` and switch to `member2`:
-```bash
-export KUBECONFIG="$HOME/.kube/members.config"
-kubectl config use-context member2
-```
-
-2. Create an istio remote secret for member2:
-```bash
-istioctl x create-remote-secret --name=member2 > istio-remote-secret-member2.yaml
-```
-
-### Apply istio remote secret
-
-Export `KUBECONFIG` and switch to `karmada-apiserver`:
-
-```
-# export KUBECONFIG=$HOME/.kube/karmada.config
-
-# kubectl config use-context karmada-apiserver
-```
-
-Apply the istio remote secrets:
-```bash
-kubectl apply -f istio-remote-secret-member1.yaml
-
-kubectl apply -f istio-remote-secret-member2.yaml
-```
-
-
-### Install istio remote
-
-1. Install istio remote member1
-
-Export `KUBECONFIG` and switch to `member1`:
-```bash
-export KUBECONFIG="$HOME/.kube/members.config"
-kubectl config use-context member1
-```
-
-```bash
-cat <
-```
-
-***
-The reason for deploying `istiod` on `member1` is that `kiali` needs to be deployed on the same cluster as `istiod`.
-If `istiod` and `kiali` are deployed on the `karmada-host`, `kiali` will not find the namespaces created by `karmada` and
-cannot provide the service topology for applications deployed by `karmada`. I will provide a new
-solution later that deploys `istiod` on the `karmada-host`.
-***
-
-## Install Karmada
-
-### Install karmada control plane
-
-Following the steps in [Install karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane)
-in the Quick Start, you can set up a Karmada environment.
-
-## Deploy Istio
-
-***
-If you are testing a multicluster setup on `kind`, you can use [MetalLB](https://metallb.universe.tf/installation/) to make use of `EXTERNAL-IP` for `LoadBalancer` services.
-***
-
-### Install istioctl
-
-Please refer to the [istioctl installation guide](https://istio.io/latest/docs/setup/getting-started/#download).
-
-### Prepare CA certificates
-
-Follow the steps in
-[plug-in-certificates-and-key-into-the-cluster](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster)
-to configure the Istio CA.
-
-Replace the cluster name `cluster1` with `primary`; the output will look like the following:
-
-```bash
-[root@vm1-su-001 istio-1.12.6]# tree certs/
-certs/
-├── primary
-│   ├── ca-cert.pem
-│   ├── ca-key.pem
-│   ├── cert-chain.pem
-│   └── root-cert.pem
-├── root-ca.conf
-├── root-cert.csr
-├── root-cert.pem
-├── root-cert.srl
-└── root-key.pem
-```
-
-### Install Istio on karmada-apiserver
-
-Export `KUBECONFIG` and switch to `karmada-apiserver`:
-
-```bash
-export KUBECONFIG=$HOME/.kube/karmada.config
-kubectl config use-context karmada-apiserver
-```
-
-Create a secret `cacerts` in the `istio-system` namespace:
-
-```bash
-kubectl create namespace istio-system
-kubectl create secret generic cacerts -n istio-system \
-    --from-file=certs/primary/ca-cert.pem \
-    --from-file=certs/primary/ca-key.pem \
-    --from-file=certs/primary/root-cert.pem \
-    --from-file=certs/primary/cert-chain.pem
-```
-
-Create a propagation policy for the `cacerts` secret:
-
-```bash
-cat < istio-remote-secret-member2.yaml
-```
-
-Switch to `member1`:
-
-```bash
-kubectl config use-context member1
-```
-
-Apply the istio remote secret:
-
-```bash
-kubectl apply -f istio-remote-secret-member2.yaml
-```
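-
-As a sanity check, the applied remote secret should now be visible in the `istio-system` namespace (the `istio/multiCluster=true` label is what recent Istio releases put on secrets generated by `istioctl x create-remote-secret`, so treat the selector as an assumption):
-
-```bash
-kubectl get secrets -n istio-system -l istio/multiCluster=true
-```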
-
-2. Configure member2 as a remote
-
-Save the address of `member1`’s east-west gateway:
-
-```bash
-export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
-```
-
-Create a remote configuration on `member2`.
-
-Switch to `member2`:
-
-```bash
-kubectl config use-context member2
-```
-
-```bash
-cat <
-```
-
-Then you can see that the deployment nginx has been restored successfully.
-```shell
-# kubectl get deployment.apps/nginx
-NAME    READY   UP-TO-DATE   AVAILABLE   AGE
-nginx   2/2     2            2           21s
-```
-
-### Backup and restore of Kubernetes resources through Velero combined with Karmada
-
-In the Karmada control plane, we need to install the Velero CRDs, but we do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances. Based on the Work API, they will be encapsulated as Work objects, delivered to member clusters, and finally reconciled by the Velero controllers in the member clusters.
-
-Create the Velero CRDs in the Karmada control plane.
-Remote Velero CRD directory: `https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds/`
-
-Create a backup in `karmada-apiserver` and distribute it to the `member1` cluster through a PropagationPolicy:
-
-```shell
-# create backup policy
-cat <
-```
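-
-For reference, such a manifest could pair a Velero `Backup` with a Karmada `PropagationPolicy` along the following lines (a minimal sketch; the backup name and included namespaces are assumptions):
-
-```yaml
-apiVersion: velero.io/v1
-kind: Backup
-metadata:
-  name: nginx-backup
-  namespace: velero
-spec:
-  includedNamespaces:
-    - default
----
-apiVersion: policy.karmada.io/v1alpha1
-kind: PropagationPolicy
-metadata:
-  # A namespaced policy selects resources in its own namespace,
-  # so it sits next to the Backup object in `velero`.
-  name: backup-policy
-  namespace: velero
-spec:
-  resourceSelectors:
-    - apiVersion: velero.io/v1
-      kind: Backup
-      name: nginx-backup
-  placement:
-    clusterAffinity:
-      clusterNames:
-        - member1
-```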