commit 237fc9aa48

---
title: "An Introduction to KubeVela Addons: Build Your Own Addon"
authors:
  - name: Charlie Chiang
    title: KubeVela Team
    url: https://github.com/charlie0129
    image_url: https://github.com/charlie0129.png
tags: [ "KubeVela", "addon", "extensibility" ]
description: Introduction to Building Addons
image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources/KubeVela-03.png
hide_table_of_contents: false
---

As we all know, KubeVela is a highly extensible platform, on which users can build their own customizations using [Definitions](https://kubevela.io/docs/platform-engineers/oam/x-definition). KubeVela addons are a convenient way to pack all these customizations and their dependencies together as a bundle, to extend the capabilities of KubeVela.

This blog introduces the main concepts of an addon and guides you to quickly start building one. Finally, we'll show you the experience of the end user, and how the capabilities provided by an addon are glued together.

<!--truncate-->

## Why use addons

We typically use addons with the [addon catalog](https://github.com/kubevela/catalog), which contains addons with all kinds of customizations that the KubeVela community carefully crafted. You can download and install these addons with just one click. For example, you can give your cluster the ability to run Helm Components in your KubeVela Applications by installing the FluxCD addon.

Contrary to the convenience of one-click installation, without KubeVela addons, you would have to install FluxCD in this way:

> Actually, this is how we installed FluxCD before KubeVela v1.1.

1. Install FluxCD using Helm Charts or downloaded YAML manifests
2. Manually download FluxCD-related Definition manifests and apply them

Although it seems like only 2 steps are needed, we found it to be quite troublesome:

1. Complicated installation: users are required to refer to documentation to manually install FluxCD and tackle possible errors
2. Scattered resources: users need to obtain different resource files from different places
3. Hard distribution: users having to download manifests manually makes it hard to distribute all these resources in a uniform way
4. Lack of multi-cluster support: KubeVela emphasizes multi-cluster delivery a lot. However, manual installation makes multi-cluster support hard to carry out.
5. No version management: users need to manage the relationships between definitions, controllers, and corresponding versions by themselves.

KubeVela addons are born to solve these problems.

## How KubeVela addons work

**KubeVela addons are intrinsically OAM Applications** plus Definitions and other extensions. Since Definitions usually depend on other resources (e.g. certain operators), these dependencies are described as OAM Applications and stored in files called *Application description files*.

Let's assume we want to build a Redis addon, which makes it possible to use Redis Components in Applications to create Redis clusters. Such an addon will at least include a Redis Operator (to create Redis clusters) and a ComponentDefinition (to define what a `redis` Component is).

The installation process of an addon includes installing Applications (which include the Redis Operator), Definitions, etc.

*(figure: how KubeVela addons work)*

## Create your own addon

:::tip
This guide only applies to KubeVela v1.5 and later.
:::

We will take the Redis addon as an example and guide you through the whole process of making an addon. The full source code of this guide is located at [catalog/redis-operator](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator). As an introduction, we will not dive too deep into the details. For the official documentation, refer to [Make Your Own Addon](https://kubevela.io/docs/platform-engineers/addon/intro).

**First, we will need to consider what our addon can do for users.** Let's say our Redis addon can provide a Component called `redis-failover`, which will create a whole Redis cluster with just one Component. **Then we will figure out how to achieve this.** To define a `redis-failover` Component, we write a ComponentDefinition. To be able to create a Redis cluster, we use a [Redis Operator](https://github.com/spotahome/redis-operator).

Now our goals are clear:

- Write an OAM Application, which includes the installation of a Redis Operator. (Refer to `template.cue` and the `resources` directory in the full source code.)
- Write a [ComponentDefinition](https://kubevela.io/docs/platform-engineers/components/custom-component), which defines Components called `redis-failover`. (Refer to the `definitions` directory in the full source code.)

But before we start coding, we need to understand the directory structure of an addon (`vela addon init` can help you create the directories and files). We will describe each file later; just have a basic understanding of what files are needed for now.

```shell
redis-operator/            # directory name is the same as the addon name
├── definitions            # stores Definitions, including TraitDefinitions, ComponentDefinitions, etc.
│   └── redis-failover.cue # ComponentDefinition that defines `redis-failover` Components
├── resources              # resource files, referred to in template.cue
│   ├── crd.yaml           # CRD that comes with the Redis Operator (yaml files will be applied directly)
│   ├── redis-operator.cue # a webservice Component, which installs the Redis Operator
│   └── topology.cue       # (Optional) helps KubeVela build the relationships between Application resources
├── metadata.yaml          # addon metadata, including addon name, version, etc.
├── parameter.cue          # addon parameters, which users can use to customize their addon installation
├── README.md              # guides the user to quickly start using your addon
└── template.cue           # Application description file, which defines an OAM Application
```

:::tip
We will use CUE extensively when writing addons, so [CUE Basics](https://kubevela.io/docs/platform-engineers/cue/basic) might be useful.
:::
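
If you are new to CUE, the two features we will lean on the most are default values and disjunctions. A minimal, illustrative sketch (not part of the addon itself):

```cue
// A field can list alternatives (a disjunction) and mark one as the default with *.
replicas: *3 | int

// Fields can reference each other; this is how our addon files will share values.
image:   *"redis:6.2" | string
fullRef: "docker.io/library/" + image
```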

### parameter.cue

```cue
parameter: {
    //+usage=Redis Operator image.
    image: *"quay.io/spotahome/redis-operator:v1.1.0" | string
    // ...omitted
}
```

The parameters defined in `parameter.cue` are customizable by users when installing addons, just like Helm values. You can access the values of these parameters in CUE later by `parameter.<parameter-name>`. In our example, `image` is provided to let the user customize the Redis Operator image, and it can be accessed in `redis-operator.cue` by `parameter.image`.

Apart from customizing some fields, you can also do something creative, such as parameterized installation of addons. For example, the `fluxcd` addon has a parameter called [`onlyHelmComponents`](https://github.com/kubevela/catalog/blob/958a770a9adb3268e56ca4ec2ce99d2763617b15/addons/fluxcd/parameter.cue#L9). Since `fluxcd` is a big addon that includes 5 different controllers, not all users want such a heavy installation. So if the `onlyHelmComponents` parameter is set to `true` by the user, only 2 controllers will be installed, making it a relatively light installation. (If you are interested in how this is achieved, you can refer to the [fluxcd code](https://github.com/kubevela/catalog/blob/958a770a9adb3268e56ca4ec2ce99d2763617b15/addons/fluxcd/template.cue#L25) after you finish reading this blog.)
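
To sketch how such parameterized installation can work, a boolean parameter can gate which Components end up in the Application. This is a hypothetical simplification (the component names below are made up), not the actual `fluxcd` code:

```cue
// parameter.cue (hypothetical)
parameter: {
    //+usage=Only install the controllers needed for Helm Components.
    onlyHelmComponents: *false | bool
}

// template.cue (hypothetical)
output: spec: components: [
    sourceController,
    helmController,
    // the remaining controllers are only added for a full installation
    if !parameter.onlyHelmComponents {
        imageAutomationController
    },
]
```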

When you design what parameters are provided to the user, there are several things to consider:

- Do not provide every possible parameter to your users and let them figure out how to dial dozens of knobs by themselves. Abstract fine-grained knobs into a few broad parameters, so that the user can input a few parameters and get a usable but customized addon.
- Decide on sane defaults for starters. Let the user get started with your addon even if they don't provide parameters (if possible).
- Provide usage information for each parameter.
- Keep parameters consistent across versions to avoid incompatibilities during upgrades.

### template.cue and resources directory

The OAM Application is stored here, and it describes the actual installation process of the addon. In our case, we will create an Application that includes a Redis Operator, to give our cluster the ability to create Redis clusters.

`template.cue` and the `resources/` directory both serve the same purpose -- defining an Application. Aside from historical reasons, the reason why we split it into two places is readability. When we have a ton of resources in `template.cue`, it eventually becomes too long to read. So we typically put the Application scaffold in `template.cue` and the Application Components in the `resources` directory.
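
This split works because CUE merges all files that belong to the same package. A minimal sketch of the mechanism (the file and field names here are illustrative):

```cue
// resources/example-component.cue
package main

exampleComponent: {
    name: "example"
    type: "webservice"
}
```

```cue
// template.cue (same package, so exampleComponent is visible here)
package main

output: spec: components: [exampleComponent]
```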

#### template.cue

```cue
// template.cue contains the main Application

// The package name should be the same as in the CUE files in the resources directory,
// so that we can refer to the files in resources/.
package main

// Most of the contents here are boilerplate. You only need to pay attention to spec.components.

output: {
    // This is just a plain old OAM Application
    apiVersion: "core.oam.dev/v1beta1"
    kind:       "Application"
    // No metadata required
    spec: {
        components: [
            // Create a Component that includes a Redis Operator
            redisOperator, // defined in resources/redis-operator.cue
        ]
        policies: [
            // Which namespace to install to, whether to install to sub-clusters...
            // Again, these are boilerplate. No need to remember them. Just refer to the full source code.
            // https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue
            // Documentation: https://kubevela.io/docs/end-user/policies/references
        ]
    }
}

// Resource topology, which can help KubeVela glue resources together.
// We will discuss this in detail later.
// Documentation: https://kubevela.io/docs/reference/topology-rule
outputs: topology: resourceTopology // defined in resources/topology.cue
```
|
||||
|
||||
#### resources directory
|
||||
|
||||
Here we define Application Components that will be referred to in `template.cue`. We will use a `web-service` Component to install Redis Operator. Of course, if you are comfortable with extra dependencies (FluxCD addon), you can use `helm` Components to install the Redis Operator Helm Chart directly. But one of the principles of writing an addon is to reduce external dependencies, so we use the `web-service` Component, which is a built-in Component of KubeVela, instead of `helm`.
|
||||
|
||||
```cue
|
||||
// resources/redis-operator.cue
|
||||
|
||||
// package name is the same as the one in template.cue. So we can use `redisOperator` below in template.cue
|
||||
package main
|
||||
|
||||
redisOperator: {
|
||||
// an OAM Application Component, which will create a Redis Operator
|
||||
// https://kubevela.io/docs/end-user/components/references
|
||||
name: "redis-operator"
|
||||
type: "webservice"
|
||||
properties: {
|
||||
// Redis Operator container image (parameter.image is defined in parameter.cue)
|
||||
image: parameter.image
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
}
|
||||
traits: [
|
||||
// ...omitted
|
||||
]
|
||||
}
|
||||
```
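
For comparison, if you were willing to take on the FluxCD dependency, a `helm`-type Component could install the operator's Helm Chart instead. A hypothetical sketch (the chart repository URL and chart name are assumptions; verify them against the chart's documentation before use):

```cue
// resources/redis-operator.cue (hypothetical helm-based alternative)
package main

redisOperator: {
    name: "redis-operator"
    // the `helm` Component type is provided by the fluxcd addon
    type: "helm"
    properties: {
        repoType: "helm"
        url:      "https://spotahome.github.io/redis-operator"
        chart:    "redis-operator"
    }
}
```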

#### Highlights of the glue capability provided by KubeVela

One of the notable features is [*Topology Rules*](https://kubevela.io/docs/reference/topology-rule) (also known as Resource Topologies or Resource Relationships). Although not required, they can help KubeVela build the topological relationships of the resources managed by a KubeVela Application. This is how KubeVela can glue all kinds of resources together into an Application. It is especially helpful when we use CRs (which is exactly the case in this example).

In our case, the `redis-failover` Component will create a CR named `RedisFailover`. Without *Topology Rules*, although we know a `RedisFailover` is managing several Deployments and Services, KubeVela doesn't magically know this. So we can *tell* KubeVela our understanding through *Topology Rules*. Once KubeVela knows what's inside a `RedisFailover`, it will know the relationships of all the resources inside the Application. So what are the benefits? For example (refer to [Run our addon](#run-our-addon)):

- Resource topology graphs in VelaUX
- Execute commands in containers of an Application using `vela exec`
- Forward ports of containers of an Application using `vela port-forward`
- Get logs of containers of an Application using `vela log`
- Get Pods or endpoints in an Application using `vela status --pod/--endpoint`

```cue
// resources/topology.cue

package main

import "encoding/json"

resourceTopology: {
    apiVersion: "v1"
    kind:       "ConfigMap"
    metadata: {
        name:      "redis-operator-topology"
        namespace: "vela-system"
        labels: {
            "rules.oam.dev/resources":       "true"
            "rules.oam.dev/resource-format": "json"
        }
    }
    data: rules: json.Marshal([{
        parentResourceType: {
            group: "databases.spotahome.com"
            kind:  "RedisFailover"
        }
        // A RedisFailover CR manages the three resources below
        childrenResourceType: [
            {
                apiVersion: "apps/v1"
                kind:       "StatefulSet"
            },
            // Topologies of Deployments etc. are built-in,
            // so we don't need to go deeper to Pods.
            {
                apiVersion: "apps/v1"
                kind:       "Deployment"
            },
            {
                apiVersion: "v1"
                kind:       "Service"
            },
        ]
    }])
}
```

### definitions directory

The `definitions` directory contains KubeVela Definitions, including ComponentDefinitions, TraitDefinitions, etc. This is the most important part of an addon, as it provides the real capabilities to end users. With the Component and Trait types defined here, users can use them in their Applications.

Writing Definitions for an addon is the same as writing regular Definitions, including [Component Definitions](https://kubevela.io/docs/platform-engineers/components/custom-component), [Trait Definitions](https://kubevela.io/docs/platform-engineers/traits/customize-trait), [Policy Definitions](https://kubevela.io/docs/platform-engineers/policy/custom-policy), and [Workflow Step Definitions](https://kubevela.io/docs/platform-engineers/workflow/workflow). We will refer to [Component Definition](https://kubevela.io/docs/platform-engineers/components/custom-component) and the [Redis Operator](https://github.com/spotahome/redis-operator/blob/master/README.md) to write our ComponentDefinition.

The ComponentDefinition we are going to write is called `redis-failover`. It will create a CR called `RedisFailover`. With the `RedisFailover` created, the Redis Operator in our addon Application will create a Redis cluster for us. You can refer to the [source code](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/definitions/redis-failover.cue).
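
To give you a feel for its shape, here is a trimmed-down sketch of such a ComponentDefinition. The field values are illustrative assumptions; see the linked source code for the real one:

```cue
// definitions/redis-failover.cue (simplified sketch)
"redis-failover": {
    attributes: workload: type: "autodetects.core.oam.dev"
    description: "Create a Redis cluster through a RedisFailover CR."
    type:        "component"
}

template: {
    output: {
        // the CR that the Redis Operator watches
        apiVersion: "databases.spotahome.com/v1"
        kind:       "RedisFailover"
        spec: {
            redis: replicas:    parameter.replicas
            sentinel: replicas: parameter.replicas
        }
    }
    parameter: {
        //+usage=Number of Redis and Sentinel instances.
        replicas: *3 | int
    }
}
```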

### metadata.yaml

Just as the name implies, this is the metadata of an addon, including the addon name, version, system requirements, etc. (per the [documentation](https://kubevela.io/docs/platform-engineers/addon/intro#basic-information-file)). One thing to notice: we are focusing on the new addon format introduced in KubeVela v1.5, so you should avoid using old, incompatible fields. The example listed below contains all the fields available to you.

:::tip
For example, the old `deployTo.runtimeCluster` field has an alternative in the new addon format (using the topology Policy) and should be avoided. You can refer to [`template.cue`](https://github.com/kubevela/catalog/blob/958a770a9adb3268e56ca4ec2ce99d2763617b15/experimental/addons/redis-operator/template.cue#L28) in the full source code.
:::
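
To illustrate, instead of the old `deployTo.runtimeCluster` field, the new format expresses where to install inside the Application itself, through a topology Policy. A hedged sketch of what such a Policy can look like in `template.cue` (names are illustrative; see the full source code for the real version):

```cue
// inside output.spec in template.cue (illustrative sketch)
policies: [
    {
        type: "topology"
        name: "deploy-operator"
        properties: {
            // install the operator into every joined cluster,
            // in the redis-operator namespace
            namespace:            "redis-operator"
            clusterLabelSelector: {}
        }
    },
]
```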

```yaml
# addon name, the same as our directory name
name: redis-operator
# addon description
description: Redis Operator creates/configures/manages high availability redis with sentinel automatic failover atop Kubernetes.
# tags to show in VelaUX
tags:
  - redis
# addon version
version: 0.0.1
# addon icon
icon: https://xxx.com
# the webpage of this addon
url: https://github.com/spotahome/redis-operator
# other addon dependencies, e.g. fluxcd
dependencies: []

# system version requirements
system:
  vela: ">=v1.5.0"
  kubernetes: ">=1.19"
```

## Run our addon

Now we have finished most of our work. It is time to run it! However, there are still some details that we skipped, so you can download the full [source code](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator) to complete them.

After we get the full `redis-operator` directory, executing `vela addon enable redis-operator` will enable the addon from our downloaded directory. Then, we can refer to the [README](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/README.md) to start using the addon.

> The README.md is very important, as it guides users to start using an unfamiliar addon.

By using this addon, users only need to write **4** lines of YAML to get a Redis cluster with 3 nodes! Compared to manually installing the Redis Operator, or even manually managing Redis instances, the use of an addon greatly improves the user experience.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: redis-operator-sample
spec:
  components:
    # This component is provided by the redis-operator addon.
    # In this example, 3 Redis instances and 3 Sentinel instances
    # will be created.
    - type: redis-failover
      name: ha-redis
      properties:
        # You can increase/decrease this later to add/remove instances.
        replicas: 3
```

A whole tree of complicated resources is created with just a few lines of YAML, as shown in the figure below. Since we have written *Topology Rules* for our addon, users can easily see the topology of all the resources (Pods, Services) of a Redis cluster. They are no longer limited to the observability of KubeVela Applications; instead, they can glance at the statuses of low-level resources. For example, we can see that certain Redis Pods are still not ready in our figure:

*(figure: resource topology graph in VelaUX)*

Users can also choose the low-level resources of our sample Application, i.e., 3 Redis Pods and 3 Sentinel Pods, when executing `vela exec/log/port-forward`. At first glance, it may not seem that useful to `exec` into a Pod if you are running a single cluster. However, if you are running multi-cluster installations, having the option to choose resources spanning multiple clusters is a huge time-saver:

*(figure: choosing a Pod with vela exec)*

`vela status` can get the status of an Application. With *Topology Rules*, we can take one step further -- find the endpoints in an Application directly. In our example, users can get the endpoints of Redis Sentinel to connect to by:

*(figure: vela status --endpoint output)*

## Wrap up

By the end of this guide, you probably already have a good grasp of what addons do and how to make one. If you actually built your own addon, you are welcome to contribute it to the [addon catalog](https://github.com/kubevela/catalog), so everyone else can discover your addon and use it!

Sometimes we already have some Terraform cloud resources, which may be created and managed by the Terraform binary or something else. In order to have [the benefits of using KubeVela to manage the cloud resources](2022-06-27-terraform-integrate-with-vela.md#part-1-glue-terraform-module-as-kubevela-capability), or just to keep the way you manage cloud resources consistent, we may want to import these existing Terraform cloud resources into KubeVela and use vela to manage them. But if we just create an Application that describes these cloud resources, the cloud resources will be recreated, which may lead to errors. To fix this problem, we made [a simple `backup_restore` tool](https://github.com/kubevela/terraform-controller/tree/master/hack/tool/backup_restore). This blog will show you how to use the `backup_restore` tool to import your existing Terraform cloud resources into KubeVela.

<!--truncate-->

## Step 1: Create Terraform Cloud Resources

Since we are going to demonstrate how to import an existing cloud resource into KubeVela, we need to create one first. If you already have such resources, you can skip this step.

Before you start, make sure you have:

- Cloud service credentials; in this article, we will use AWS as an example.
- Basic knowledge of [how to use Terraform](https://www.terraform.io/language).

Let's get started!

hide_table_of_contents: false
---

[KubeVela 1.5](https://github.com/kubevela/kubevela/releases/tag/v1.5.0) was released recently. This release brings more convenient application delivery capabilities to the community, including system observability, CloudShell terminals that move the Vela CLI to the browser, enhanced canary releases, and optimized multi-environment application delivery workflows. It also improved KubeVela's high extensibility as an application delivery platform. The community has started to promote the project to the CNCF Incubation stage. It has absorbed the practice sharing of multiple benchmark users in many community meetings, which proves the community's healthy development. The project is now mature to some extent, and its adoption has made periodical achievements, thanks to the contributions of more than 200 developers in the community.

<!--truncate-->

KubeVela has released five major versions over the past year. Each iteration is a leap forward. The release of version 1.1 brought the ability to connect multiple clusters. Version 1.2/1.3 brought an extended system and a more developer-friendly experience. Version 1.4 introduced a security mechanism in the complete process. The release of 1.5 today brings us closer to KubeVela's vision of *making application delivery and management easier.* Along the way, we have stayed true to the same design philosophy and built a platform that automates the complexity of the underlying differentiated infrastructure without losing scalability. It helps application developers upgrade from business development to cloud-native R&D at a low cost. Technically, it focuses on the complete process from code to cloud and from application delivery to management. It refines the framework capabilities of connecting infrastructure based on the Open Application Model (OAM). As shown in Figure 1, KubeVela has covered the complete capabilities of application definition, delivery, O&M, and management, all of which are based on OAM scalability (OAM Definition) to connect to ecological projects in an addon way. **In essence, each definition converts the experience of a specific capability into a reusable best practice module, which can be shared by enterprises or communities through addon packaging.**

---
title: "An Introduction to KubeVela Addons: Extend Your Own Platform Capability"
authors:
  - name: Charlie Chiang
    title: KubeVela Team
    url: https://github.com/charlie0129
    image_url: https://github.com/charlie0129.png
tags: [ "KubeVela", "addon", "extensibility" ]
description: Introduction to building KubeVela addons.
image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources/KubeVela-03.png
hide_table_of_contents: false
---

As we all know, KubeVela is a highly extensible platform, on which users can build their own customizations using [Definitions](https://kubevela.io/docs/platform-engineers/oam/x-definition). KubeVela addons are a convenient way to pack all these customizations and their dependencies together as a bundle, to extend your own platform capabilities!

This blog introduces the main concepts of an addon and guides you to quickly start building one. Finally, we'll show you the experience of the end user, and how the capabilities provided are glued together into a consistent, user-friendly experience.

<!--truncate-->

## Why use addons

We typically use addons with the [addon catalog](https://github.com/kubevela/catalog), a registry containing addons with all kinds of customizations that the KubeVela community has carefully crafted. You can download and install these addons with just one click. For example, you can give your cluster the ability to deploy Helm Charts in your KubeVela Applications by installing the [`fluxcd`](https://github.com/kubevela/catalog/tree/master/addons/fluxcd) addon.

Contrary to the convenience of one-click installation, without KubeVela addons, you have to manually install [FluxCD](https://fluxcd.io/) in this way:

1. Install FluxCD using Helm Charts or downloaded yaml manifests
2. Manually glue the FluxCD CRDs into your system, which can be done by adding Component or Trait Definitions in KubeVela

Actually, this is how we installed FluxCD before KubeVela v1.1.
Although it seems like only 2 steps are needed, we found it to be quite troublesome:

1. **Complicated installation**: users are required to refer to documentation to manually install FluxCD and tackle possible errors
2. **Scattered resources**: users need to obtain different resource files from different places
3. **Hard distribution**: users having to download manifests manually makes it hard to distribute all these resources in a uniform way
4. **Lack of multi-cluster support**: KubeVela emphasizes multi-cluster delivery a lot. However, manual installation makes multi-cluster support hard to carry out.
5. **No version management**: users need to manage the relationships between definitions, controllers, and corresponding versions by themselves.

KubeVela addons are born to solve these problems.

## How KubeVela addons work

KubeVela addons mainly contain two parts:

- One is the installation of the capability provider, which is usually a CRD Operator. The installation process **intrinsically leverages a KubeVela Application** to work.
- The other is the glue layer, consisting of OAM Definitions and other extensions. These OAM Definitions usually depend on the capability provider and provide user-friendly abstractions built from best practices.

*(figure: addon working mechanism)*

The whole working mechanism of an addon is shown above. The KubeVela Application has multi-cluster capabilities that help deliver the addon's CRD operators into these clusters. The Definition files are only used by the KubeVela control plane, so they will only exist in the control plane clusters.

:::tip
Once installed, an Application object will be created, and all related resources will carry an ownerReference to this Application. When we want to uninstall an addon, we just need to delete the Application; the ownerReference mechanism of Kubernetes will help clean up all the other resources.
:::

Let's build a Redis addon as an example, which makes it possible to use Redis Components in Applications to create Redis clusters. Such an addon will at least include a Redis Operator (to create Redis clusters) and a ComponentDefinition (to define what a `redis` Component is).

The installation process of an addon includes installing Applications (which include the Redis Operator), Definitions, etc.

## Create your own addon

:::note
Make sure you're using KubeVela v1.5+ to have all the capabilities mentioned below.
:::

We will guide you through the whole process of making a Redis addon. The full source code of this guide is located at [catalog/redis-operator](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator).

:::tip
As an introduction, we won't cover all features. If you are interested in building an addon on your own, you'd better refer to the ["Make Your Own Addon"](https://kubevela.io/docs/platform-engineers/addon/intro) documentation for details.
:::

**First, we need to consider what our addon can provide to end users.** Let's say our Redis addon can provide a Component called `redis-failover`, which will create a whole Redis cluster when declared in an Application.

**Then we will figure out how to achieve this.** To define a `redis-failover` Component, we need to write a ComponentDefinition. To be able to create a Redis cluster, we use a [Redis Operator](https://github.com/spotahome/redis-operator) as the capability provider.

Now our goals are clear:

- Write an OAM Application, which includes the installation of a Redis Operator. (Refer to [`template.cue`](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue) and [`resources/`](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/resources) in the full source code.)
- Write a [ComponentDefinition](https://kubevela.io/docs/platform-engineers/components/custom-component), which defines Components called `redis-failover`. (Refer to the [`definitions/`](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/definitions) folder in the full source code.)

But before we start coding, we need to understand the structure of an addon directory. We will describe each file later; just have a basic understanding of what files are needed for now.

:::tip
The command `vela addon init` can scaffold the addon directories and files for you.
:::

```shell
redis-operator/
├── definitions
│   └── redis-failover.cue
├── resources
│   ├── crd.yaml
│   ├── redis-operator.cue
│   └── topology.cue # (Optional)
├── metadata.yaml
├── parameter.cue
├── README.md
└── template.cue
```
Let's explain all these files and folders one by one:

1. `redis-operator/` is the directory name, which is the same as the addon name.
2. The `definitions/` folder stores Definitions, such as ComponentDefinitions and TraitDefinitions.
3. `redis-failover.cue` defines the `redis-failover` ComponentDefinition, describing how the component can be used and how it integrates with the underlying resources.
4. The `resources/` folder contains resource files, which will be composed into a whole Application in `template.cue`.
5. `crd.yaml` inside the `resources/` folder is the CRD that comes with the Redis Operator. YAML files inside this folder are applied to Kubernetes directly.
6. `redis-operator.cue` defines a `webservice` Component, which installs the Redis Operator.
7. `topology.cue` is optional; it helps KubeVela build the relationships between the Application and the underlying resources.
8. `metadata.yaml` defines the metadata of this addon, including its name, version, tags, and maintainers. Registries use this information to present an overview of the addon.
9. `parameter.cue` defines the parameters of this addon, which end users can use to customize their addon installation.
10. `README.md` guides users to quickly start using this addon.
11. `template.cue` is the template of the addon, which forms the whole Application when installed.

Now let's dive into how to write each of them.
:::tip
We will use the CUE language extensively in the following sections, so [CUE Basics](https://kubevela.io/docs/platform-engineers/cue/basic) might be useful.
:::

### parameter.cue
```cue
parameter: {
    //+usage=Redis Operator image.
    image: *"quay.io/spotahome/redis-operator:v1.1.0" | string
    // ...omitted
}
```

The parameters defined in `parameter.cue` are customizable by users when installing the addon, just like Helm Values. You can access the value of a parameter later in CUE via `parameter.<parameter-name>`. In our example, `image` lets the user customize the Redis Operator image, and it can be accessed in `redis-operator.cue` as `parameter.image`.
Apart from customizing individual fields, you can also do something more creative, such as parameterizing the installation itself. For example, the `fluxcd` addon has a parameter called [`onlyHelmComponents`](https://github.com/kubevela/catalog/blob/master/addons/fluxcd/parameter.cue). Since `fluxcd` is a big addon that includes 5 different controllers, not all users want such a heavy installation. So if the user sets the `onlyHelmComponents` parameter to `true`, only 2 controllers are installed, making the installation relatively light. If you are interested in how this is achieved, refer to the [fluxcd addon](https://github.com/kubevela/catalog/blob/master/addons/fluxcd/template.cue#L25) for details.
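The idea behind such parameterized installation can be sketched with a CUE `if` guard inside the component list. This is a simplified, hypothetical sketch -- component names like `helmController` and the exact layout are illustrative, not the real fluxcd template:

```cue
// Sketch: optional components are included only when the user
// asks for a full installation. helmController, sourceController,
// and kustomizeController would be defined under resources/;
// onlyHelmComponents would be declared in parameter.cue.
output: {
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	spec: components: [
		helmController,
		sourceController,
		if parameter.onlyHelmComponents == false {
			kustomizeController
		},
	]
}
```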

When you design what parameters to expose to users, you can follow our best practices to keep them user-friendly.
:::tip best practices
- Do not expose every possible parameter and leave users to figure out dozens of knobs by themselves. Abstract fine-grained knobs into a few broad parameters, so that users can input a few values and get a usable, customized addon.
- Decide on sane defaults, so that users can get started with your addon even without providing any parameters (if possible).
- Provide a usage description for each parameter; you can write it as an annotation above the parameter, as the example shows.
- Keep parameters consistent across versions to avoid incompatibilities during upgrades.
:::

### `template.cue` and the `resources/` folder
The OAM Application is stored here; it describes the actual installation process of the addon. In our case, we will create an Application that includes the Redis Operator, giving our cluster the ability to create Redis clusters.

`template.cue` and the `resources/` directory serve the same purpose -- defining an Application. Aside from historical reasons, the reason we split it into two places is readability: with a ton of resources in `template.cue`, it would eventually become too long to read. So we typically put the Application scaffold in `template.cue` and the Application Components in the `resources/` directory.

#### template.cue

`template.cue` defines the framework of the Application. Most of the content here is boilerplate; it is explained in the comments inside the code block.
```cue
// The package name should be the same as in the CUE files in the resources directory,
// so that we can refer to the files in resources/.
package main

output: {
	// This is just a plain OAM Application
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	// No metadata required
	spec: {
		components: [
			// Create a component that includes the Redis Operator
			redisOperator // defined in resources/redis-operator.cue
		]
		policies: [
			// Which namespace to install to, whether to install to sub-clusters, etc.
			// Again, these are boilerplates. No need to remember them. Just refer to the full source code.
			// https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue
			// Documentation: https://kubevela.io/docs/end-user/policies/references
		]
	}
}

// Resource topology, which helps KubeVela glue resources together.
// We will discuss this in detail later.
// Documentation: https://kubevela.io/docs/reference/topology-rule
outputs: topology: resourceTopology // defined in resources/topology.cue
```

The `output` field is the keyword of this template; it contains the Application that will be deployed. Inside the Application, the `spec.components` field references the objects defined in the `resources/` folder.

The `outputs` field is another keyword, which can be used to define any auxiliary resources you want to deploy along with this addon. These resources **MUST** follow the Kubernetes API.
#### The `resources/` folder

Here we define the Application Components that are referred to in `template.cue`. We will use a `webservice` Component to install the Redis Operator. Of course, if you are comfortable with extra dependencies (the FluxCD addon), you can use a `helm` Component to install the Redis Operator Helm Chart directly. But one of the principles of writing an addon is to minimize external dependencies, so we use the `webservice` Component, which is built into KubeVela, instead of `helm`.
```cue
// resources/redis-operator.cue

// The package name is the same as the one in template.cue, so `redisOperator` below can be used in template.cue
package main

redisOperator: {
	// an OAM Application Component, which will create a Redis Operator
	// https://kubevela.io/docs/end-user/components/references
	name: "redis-operator"
	type: "webservice"
	properties: {
		// Redis Operator container image (parameter.image is defined in parameter.cue)
		image:           parameter.image
		imagePullPolicy: "IfNotPresent"
	}
	traits: [
		// ...omitted
	]
}
```

You can refer to the comments inside the code block to learn the detailed usage of each field.
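For comparison, if the FluxCD dependency were acceptable, the operator could be installed with a `helm` Component instead. The sketch below is hypothetical -- the chart repository URL and property values are assumptions, not taken from the real addon:

```cue
// Hypothetical helm-based variant (requires the FluxCD addon).
package main

redisOperatorHelm: {
	name: "redis-operator"
	// The `helm` component type is provided by the FluxCD addon.
	type: "helm"
	properties: {
		repoType: "helm"
		// Assumed chart location -- check the Redis Operator docs for the real one.
		url:   "https://spotahome.github.io/redis-operator"
		chart: "redis-operator"
	}
}
```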

#### Highlights of the glue capability provided by KubeVela

One notable feature is [*Topology Rules*](https://kubevela.io/docs/reference/topology-rule) (also known as Resource Topologies or Resource Relationships). Although not required, they help KubeVela build the topological relationships of the resources managed by a KubeVela Application. This is how KubeVela can glue all kinds of resources together into an Application. It is especially helpful when we use Kubernetes Custom Resources (CRs), which is exactly the case in this example.
```cue
// resources/topology.cue

package main

import "encoding/json"

resourceTopology: {
	apiVersion: "v1"
	kind:       "ConfigMap"
	metadata: {
		name:      "redis-operator-topology"
		namespace: "vela-system"
		labels: {
			"rules.oam.dev/resources":       "true"
			"rules.oam.dev/resource-format": "json"
		}
	}
	data: rules: json.Marshal([{
		parentResourceType: {
			group: "databases.spotahome.com"
			kind:  "RedisFailover"
		}
		// The RedisFailover CR manages the three kinds of resources below
		childrenResourceType: [
			{
				apiVersion: "apps/v1"
				kind:       "StatefulSet"
			},
			// Topologies of Deployments etc. are built-in,
			// so we don't need to go deeper to Pods.
			{
				apiVersion: "apps/v1"
				kind:       "Deployment"
			},
			{
				apiVersion: "v1"
				kind:       "Service"
			},
		]
	}])
}
```

In our case, the `redis-failover` Component will create a CR named `RedisFailover`. Without *Topology Rules*, although *we* know a `RedisFailover` manages several Deployments and Services, KubeVela doesn't magically know this. So we can *tell* KubeVela our understanding through *Topology Rules*. Once KubeVela knows what's inside a `RedisFailover`, it knows the relationships of all the resources inside the Application.

:::tip
This brings huge benefits and gives us a consistent experience across all extended resources:

- VelaUX can show resource topology graphs from the Application down to the underlying Pods
- `vela exec` can execute commands in the Pods of any kind of Application Component
- `vela port-forward` can forward ports of the Pods of any kind of Application Component
- `vela log` can fetch logs from the Pods of any kind of Application Component
- `vela status --pod/--endpoint` can list the Pods or accessible endpoints of any kind of Application Component
:::

You can refer to [run our addon](#run-our-addon) to see the real user experience.

### The `definitions/` folder

The definitions folder contains KubeVela [Definitions](https://kubevela.io/docs/getting-started/definition), such as ComponentDefinitions and TraitDefinitions. **This is the most important part of an addon, as it provides the real capabilities to end users.** With the Component and Trait types defined here, users can use them in their Applications.

Writing Definitions for an addon is the same as writing regular OAM Definitions. This is a huge topic, so we won't go too deep in this introduction; you can refer to the following docs to learn how to write each kind of Definition:

- [Component Definition](https://kubevela.io/docs/platform-engineers/components/custom-component)
- [Trait Definition](https://kubevela.io/docs/platform-engineers/traits/customize-trait)
- [Policy Definition](https://kubevela.io/docs/platform-engineers/policy/custom-policy)
- [Workflow Step Definition](https://kubevela.io/docs/platform-engineers/workflow/workflow)

In our Redis addon example, we refer to [Component Definition](https://kubevela.io/docs/platform-engineers/components/custom-component) and the [Redis Operator docs](https://github.com/spotahome/redis-operator/blob/master/README.md) to write our ComponentDefinition. We'll name this component type `redis-failover`. It creates a CR called `RedisFailover`; once the CR is created, the Redis Operator in our addon Application creates a Redis cluster for us.

You can refer to the source code [here](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/definitions/redis-failover.cue).
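To give a feel for its shape, here is a heavily trimmed, illustrative sketch of such a ComponentDefinition -- the `spec` fields are simplified assumptions about the `RedisFailover` CRD, so see the full source linked above for the real definition:

```cue
// definitions/redis-failover.cue (trimmed, illustrative sketch)
"redis-failover": {
	type: "component"
	attributes: workload: type: "autodetects.core.oam.dev"
}
template: {
	output: {
		// The CR that the Redis Operator watches.
		apiVersion: "databases.spotahome.com/v1"
		kind:       "RedisFailover"
		metadata: name: context.name
		spec: {
			sentinel: replicas: parameter.replicas
			redis: replicas:    parameter.replicas
		}
	}
	parameter: {
		//+usage=Number of Redis and Sentinel replicas.
		replicas: *3 | int
	}
}
```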
### metadata.yaml

Just as the name implies, this file holds the metadata of an addon, including the addon name, version, system requirements, etc. You can refer to this [documentation](https://kubevela.io/docs/platform-engineers/addon/intro#basic-information-file) for the detailed list. The example below contains all the available fields.

:::tip
There is some legacy support, but here we focus on the new addon format introduced in KubeVela v1.5+. For example, the old `deployTo.runtimeCluster` field is deprecated; define a `topology` policy in the Application instead. You can refer to [`template.cue`](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue#L28) in the full source code.
:::
```yaml
# addon name, the same as our directory name
name: redis-operator
# addon description
description: Redis Operator creates/configures/manages high availability redis with sentinel automatic failover atop Kubernetes.
# tags to show in VelaUX
tags:
  - redis
# addon version
version: 0.0.1
# addon icon
icon: https://xxx.com
# the webpage of this addon
url: https://github.com/spotahome/redis-operator
# other addon dependencies, e.g. fluxcd
dependencies: []

# system version requirements
system:
  vela: ">=v1.5.0"
  kubernetes: ">=1.19"
```

## Run our addon

Now we have finished most of our work, so it is time to run it! If any details were skipped, you can download the full [source code](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator) to fill them in.

Once the `redis-operator` addon is complete, we can run `vela addon enable redis-operator/` to enable it from the local directory. This helps us debug the addon and write docs about the end-user experience.

As for our example addon, you can refer to its [README](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/README.md) to see how an addon should be introduced.

:::caution
The README.md is very important, as it guides users to start using an unfamiliar addon.
:::

By using this addon, users only need to write **4** lines of YAML to get a Redis cluster with 3 nodes! Compared to manually installing the Redis Operator, let alone manually managing Redis instances, addons greatly improve the user experience.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: redis-operator-sample
spec:
  components:
    # This component is provided by the redis-operator addon.
    # In this example, 3 redis instances and 3 sentinel instances
    # will be created.
    - type: redis-failover
      name: ha-redis
      properties:
        # You can increase/decrease this later to add/remove instances.
        replicas: 3
```

A whole tree of complex resources is created with just a few lines of YAML, as shown in the figure below. Since we have written *Topology Rules* for our addon, users can easily see the topology of all the resources (Pods, Services) of a Redis cluster. They are no longer limited to the observability of KubeVela Applications; instead, they can see the status of low-level resources at a glance. For example, we can see that certain Redis Pods are still not ready in the figure:

![redis-operator-topology-graph](/img/addon/redis-operator-topology-graph.jpg)

Users can also choose the low-level resources of our sample Application, i.e., the 3 Redis Pods and 3 Sentinel Pods, when executing `vela exec/log/port-forward`.

![redis-operator-vela-exec](/img/addon/redis-operator-vela-exec.gif)

:::tip
At first glance, it may not seem that useful to `exec` into a Pod if you are running a single cluster. However, if you are running multi-cluster installations, being able to choose from resources spanning multiple clusters is a huge time-saver.
:::

`vela status` can get the status of an Application. With *Topology Rules*, we can take one step further and find the endpoints in an Application directly. In our example, users can get the endpoints of Redis Sentinel to connect to:

![redis-operator-vela-status](/img/addon/redis-operator-vela-status.gif)
## Wrap up

By the end of this guide, you should have a good grasp of what addons do and how to make one. To recap, addons bring the following benefits:

1. Platform capabilities are packaged into bundles that are easy to use and share with the whole community.
2. All the infrastructure resources can be orchestrated and templated across multiple clusters in a flexible way, powered by KubeVela Applications and CUE.
3. End users get a consistent experience across all kinds of extended capabilities.

Finally, if you have successfully built your own addon, you are more than welcome to contribute it to [the addon catalog](https://github.com/kubevela/catalog), so that everyone in the KubeVela community can discover and benefit from your powerful platform capabilities!
|
@ -1,300 +0,0 @@
|
|||
---
|
||||
title: "KubeVela 插件介绍与编写入门"
|
||||
authors:
|
||||
- name: 姜洪烨
|
||||
title: KubeVela Team
|
||||
url: https://github.com/charlie0129
|
||||
image_url: https://github.com/charlie0129.png
|
||||
tags: [ "KubeVela", "addon", "extensibility" ]
|
||||
description: Introduction to Building Addons
|
||||
image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources/KubeVela-03.png
|
||||
hide_table_of_contents: false
|
||||
---
|
||||
|
||||
KubeVela 插件(addon)可以方便地扩展 KubeVela 的能力。正如我们所知,KubeVela 是一个高度可扩展的平台,用户可以通过 [模块定义(Definition)](https://kubevela.net/zh/docs/platform-engineers/oam/x-definition)扩展 KubeVela 的能力,而 KubeVela 插件正是方便将这些**自定义扩展**及其**依赖**打包并分发的功能。
|
||||
|
||||
这篇博客将会简要介绍 KubeVela 插件的机制和如何自行编写插件。并展示最终用户使用插件的体验、以及插件提供的功能是如何融合的。
|
||||
|
||||
<!--truncate-->
|
||||
|
||||
## 为什么要使用 KubeVela 插件
|
||||
|
||||
用户使用插件的一个典型方法是通过 KubeVela 团队维护的 [addon catalog](https://github.com/kubevela/catalog) ,它包含了 KubeVela 团队与社区开发者精心编写的扩展,并以插件的形式发布于 catalog 中,这样你可以一键下载并安装这些插件。例如安装 FluxCD 可以快速给你的 KubeVela Application 提供部署 Helm Component 等 FluxCD 提供的能力。
|
||||
|
||||
相较于一键安装的便利性,如果不使用插件就必须这么安装(实际上,这也是 KubeVela v1.1 及之前的安装方法):
|
||||
|
||||
1. 通过 Helm 或者下载 yaml 文件手动安装 FluxCD (包括数个 Controller 和 CRD)
|
||||
2. 下载 FluxCD 相关的模块定义文件并安装
|
||||
|
||||
我们不难发现这样安装有以下这些问题:
|
||||
|
||||
1. 操作繁琐:用户需要手动查阅文档如何安装 FluxCD 并处理可能发生的错误
|
||||
2. 资源分散:用户需要从所处下载不同的文件,既需要安装 Helm 安装 FluxCD 还需要下载模块定义
|
||||
3. 难以分发:用户需要手动下载模块定义就注定了这些资源难以以一个统一的方式分发给用户
|
||||
4. 缺少多集群支持:KubeVela 注重多集群交付,而这样的手动安装方式显然是难以维护多集群的环境的
|
||||
5. 无版本管理:用户需要手动管理模块定义和 Controller 之间的版本
|
||||
|
||||
而 KubeVela 插件就是为了逐一解决这些问题而生。
|
||||
|
||||
## KubeVela 插件是如何工作的
|
||||
|
||||
**KubeVela 插件本质上就是一个 OAM Application** 加上模块定义和其他能力扩展。因为一个模块定义一般需要依赖其他资源(例如需要一个 Operator ),这些依赖的资源将存放在 *应用描述文件* 中,通过 OAM Application 进行描述。例如一个 Redis 插件,它能让用户在自己的 Application 中使用 Redis 集群类型的 Component ,这样可以快速创建 Redis 集群。那么这个插件至少会包括一个 Redis Operator 来提供创建 Redis 集群的能力(通过 Application 描述),还有一个组件定义 (ComponentDefinition) 来提供 Redis 集群类型的 Component。
|
||||
|
||||
安装插件工作流程可见下图,即下发插件中定义的 Application 、组件定义和 UI 扩展等:
|
||||
|
||||

|
||||
|
||||
## 创建自己的插件
|
||||
|
||||
:::tip
|
||||
以下内容适用于 KubeVela v1.5 及更新的版本
|
||||
:::
|
||||
|
||||
我们将以 Redis 插件为例,讲解如何从头创建一个 KubeVela 插件的实际过程。本次完整的 Redis 插件代码见 [catalog/redis-operator](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator),在这里我们会避免讨论过深的细节,文档可以参考[自定义插件](https://kubevela.net/zh/docs/platform-engineers/addon/intro)。
|
||||
|
||||
**首先我们需要思考我们要创建的插件有什么作用?** 例如我们假设 Redis 插件可以提供 `redis-failover` 类型的 Component,这样用户只需在 Application 中定义一个 `redis-failover` Component 即可快速创建 Redis 集群。**然后考虑如何达到这个目的?** 要提供 `redis-failover` 类型的 Component 我们需要定义一个 ComponentDefinition ;要提供创建 Redis 集群的能力支持,我们可以使用 [Redis Operator](https://github.com/spotahome/redis-operator) 。
|
||||
|
||||
那至此我们的大目标就明确了:
|
||||
|
||||
- 编写插件的应用描述文件(OAM Application),这将会用于安装 Redis Operator (见完整代码的 `template.cue` 及 `resources` 目录)
|
||||
- 编写 `redis-failover` 类型的 [ComponentDefinition](https://kubevela.net/zh/docs/platform-engineers/components/custom-component) (见完整代码的 definitions 目录)
|
||||
|
||||
不过在开始编写之前,我们首先需要了解一个 KubeVela 插件的目录结构( `vela addon init` 可以帮助你创建目录结构)。后续我们会在编写的过程中详细说明每个文件的作用,在这里只需大致了解有哪些文件即可。
|
||||
|
||||
```shell
|
||||
redis-operator/ # 目录名为插件名称
|
||||
├── definitions # 用于存放模块定义, 例如 TraitDefinition 和 ComponentDefinition
|
||||
│ └── redis-failover.cue # 需要编写的 redis-failover 类型的 ComponentDefinition
|
||||
├── resources # 用于存放资源文件, 之后会在 template.cue 中使用他们
|
||||
│ ├── crd.yaml # Redis Operator 的 CRD (yaml 类型的文件将会被直接 apply)
|
||||
│ ├── redis-operator.cue # 一个 web-service 类型的 Component ,用于安装 Redis Operator
|
||||
│ └── topology.cue # (可选)帮助 KubeVela 建立应用所纳管资源的拓扑关系
|
||||
├── metadata.yaml # 插件元数据,包含插件名称、版本等
|
||||
├── parameter.cue # 插件参数定义,用户可以利用这些参数自定义插件安装
|
||||
├── README.md # 提供给用户阅读,包含插件使用指南等
|
||||
└── template.cue # 应用描述文件,包含一个 OAM Application
|
||||
```
|
||||
|
||||
:::tip
|
||||
同时,在插件中我们会大量使用 CUE ,你可能需要先查阅[入门指南](https://kubevela.net/zh/docs/platform-engineers/cue/basic)。
|
||||
:::
|
||||
|
||||
### parameter.cue
|
||||
|
||||
```cue
|
||||
parameter: {
|
||||
//+usage=Redis Operator image.
|
||||
image: *"quay.io/spotahome/redis-operator:v1.1.0" | string
|
||||
// 其余省略
|
||||
}
|
||||
```
|
||||
|
||||
在 parameter.cue 中定义的参数都是用户可以自定义的(类似于 Helm Values),后续在 template.cue 或者 resources 中可以通过 `parameter.<parameter-name>` 访问参数。在我们的例子中,用户可以自定义 `image` ,这样后续我们创建 Redis Operator 的时候可以使用用户指定的容器镜像。
|
||||
|
||||
在设计提供什么参数供用户自定义时也有一些注意点:
|
||||
|
||||
- 不要在 parameter.cue 中提供大量的细节参数,将大量细节抽象出少量参数供用户调节是一个更好的做法
|
||||
- 为参数提供默认值(如样例中的 image 参数)或将参数标记为可选(如样例的 clusters 参数),确保用户仅使用默认值可以得到一个可用的配置
|
||||
- 为参数提供使用说明(通过注释标记实现,见样例)
|
||||
- 尽量保持插件不同版本间的参数一致,防止因为升级导致不兼容
|
||||
|
||||
### template.cue 和 resources 目录
|
||||
|
||||
这是存放我们应用描述文件的地方,即一个 OAM Application 。这描述了实际的插件安装过程。我们主要会在这里包含 Redis Operator ,给集群提供管理 Redis 集群的能力。
|
||||
|
||||
template.cue 和 resource 目录本质上是相同的,他们共同组成一个 Application 。那为什么需要 resources 目录呢?除去历史原因,这主要是为了可读性的考虑,在 Application 中包含大量资源的时候 template.cue 可能变得很长,这时我们可以把资源放置在 resource 中增加可读性。一般来说,我们将 Application 的框架放在 template.cue 中,将 Application 内部的 Components 放在 resource 目录中。
|
||||
|
||||
#### template.cue
|
||||
|
||||
```cue
|
||||
// template.cue 应用描述文件
|
||||
|
||||
// package 名称需要与 resources 目录中 cue 的 package 一致,方便引用 resources 目录中的内容
|
||||
package main
|
||||
|
||||
// Application 模板中多数字段均为固定写法,你需要注意的只有 spec.components
|
||||
|
||||
output: {
|
||||
// 这是一个经典的 OAM Application
|
||||
apiVersion: "core.oam.dev/v1beta1"
|
||||
kind: "Application"
|
||||
// 不需要 metadata
|
||||
spec: {
|
||||
components: [
|
||||
// 创建 Redis Operator
|
||||
redisOperator // 定义于 resources/redis-operator.cue 中
|
||||
]
|
||||
policies: [
|
||||
// 这里会指定安装插件的 namespace ,是否安装至子集群等
|
||||
// 多为固定写法,无需记忆,可查阅本次样例的完整代码
|
||||
// https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue
|
||||
// 文档可参照 https://kubevela.net/zh/docs/end-user/policies/references
|
||||
]
|
||||
}
|
||||
}
|
||||
// 定义资源关联规则,用于将资源粘合在一起。后续会着重介绍
|
||||
// Documentation: https://kubevela.net/zh/docs/reference/topology-rule
|
||||
outputs: topology: resourceTopology // 定义于 resources/topology.cue 中
|
||||
```
|
||||
#### resources 资源文件
|
||||
|
||||
我们这里使用一个 `web-service` 类型的 Component 来安装 Redis Operator。当然,如果你可以接受依赖 FluxCD 的话,你也可以使用 `helm` 类型的 Component 直接安装一个 Helm Chart(因为 `helm` 类型的 Component 主要由 FluxCD 插件提供)。不过编写 addon 的一个原则是尽量减少外部依赖,所以我们这里使用 KubeVela 内置的 `web-service` 类型,而不是 `helm`。
|
||||
|
||||
```cue
|
||||
// resources/redis-operator.cue
|
||||
|
||||
// package 名称与 template.cue 一致,方便在 template.cue 中引用以下的 redisOperator
|
||||
package main
|
||||
|
||||
redisOperator: {
|
||||
// 这是 OAM Application 中的 Component ,它将会创建一个 Redis Operator
|
||||
// https://kubevela.net/zh/docs/end-user/components/references
|
||||
name: "redis-operator"
|
||||
type: "webservice"
|
||||
properties: {
|
||||
// Redis Operator 镜像名称,parameter.image 即在 parameter.cue 中用户可自定义的参数
|
||||
image: parameter.image
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
}
|
||||
traits: [
|
||||
// 略
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### KubeVela 提供的资源粘合能力
|
||||
|
||||
值得注意的一个功能是 [*资源关联规则 (Resource Topology)*](https://kubevela.net/zh/docs/reference/topology-rule) 。虽然它不是必须的,但是它能帮助 KubeVela 建立应用所纳管资源的拓扑关系。这就是 KubeVela 如何将各种各样的资源粘合成 Application 的。这在我们使用了 CR 的时候特别有用。
|
||||
|
||||
在本例中,`redis-failover` 类型的 Component 会创建一个 CR ,名为 RedisFailover 。但是在没有资源关联规则的情况下,假设在你的 Application 中使用了 RedisFailover ,虽然我们知道 RedisFailover 管控了数个 Redis Deployment ,但是 KubeVela 并不知道 RedisFailover 之下有 Deployment 。这时我们可以通过 *资源关联规则* 将我们对于 RedisFailover 的了解*告诉* KubeVela,这样 KubeVela 可以帮助我们建立起整个应用下面纳管资源的拓扑层级关系。此时你将获得 KubeVela 提供的许多有用功能,例如(效果见 [运行插件](#运行插件) ):
|
||||
|
||||
- VelaUX 资源拓扑视图
|
||||
- `vela exec` 直接在 Application 包含容器中执行命令
|
||||
- `vela port-forward` 转发 Application 包含容器的端口
|
||||
- `vela log` 查看 Application 包含容器的日志
|
||||
- `vela status --pod/--endpoint` 查看 Application 包含的 Pod ,可供访问的 endpoint 等
|
||||
|
||||
```cue
|
||||
// resources/topology.cue
|
||||
|
||||
package main
|
||||
|
||||
import "encoding/json"
|
||||
|
||||
resourceTopology: {
|
||||
apiVersion: "v1"
|
||||
kind: "ConfigMap"
|
||||
metadata: {
|
||||
name: "redis-operator-topology"
|
||||
namespace: "vela-system"
|
||||
labels: {
|
||||
"rules.oam.dev/resources": "true"
|
||||
"rules.oam.dev/resource-format": "json"
|
||||
}
|
||||
}
|
||||
data: rules: json.Marshal([{
|
||||
parentResourceType: {
|
||||
group: "databases.spotahome.com"
|
||||
kind: "RedisFailover"
|
||||
}
|
||||
// RedisFailover CR 会创建以下三类资源
|
||||
childrenResourceType: [
|
||||
{
|
||||
apiVersion: "apps/v1"
|
||||
kind: "StatefulSet"
|
||||
},
|
||||
// KubeVela 内置 Deployment 等资源的拓扑,因此无需继续向下编写
|
||||
{
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
},
|
||||
{
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
},
|
||||
]
|
||||
}])
|
||||
}
|
||||
```
|
||||
|
||||
### definitions 目录
|
||||
|
||||
Definitions 目录存放 KubeVela 模块定义(Definition),包括组件定义(ComponentDefinition)、策略定义(TraitDefinition)等。因为这才是真正给用户提供能力的,所以这是一个插件最重要的部分。有了这里定义的模块定义类型,用户就可以在自己的 Application 中使用他们了。
|
||||
|
||||
编写 ComponentDefinition 的过程主要参照 KubeVela 文档 [自定义组件](https://kubevela.net/zh/docs/platform-engineers/components/custom-component) 与 [Redis Operator 使用文档](https://github.com/spotahome/redis-operator/blob/master/README.md) ,这与传统编写 ComponentDefinition 的方式并无差异。模块定义包括:[Component Definition](https://kubevela.io/docs/platform-engineers/components/custom-component), [Trait Definition](https://kubevela.io/docs/platform-engineers/traits/customize-trait), [Policy Definition](https://kubevela.io/docs/platform-engineers/policy/custom-policy) 和 [Workflow Step Definition](https://kubevela.io/docs/platform-engineers/workflow/workflow)。
|
||||
|
||||
我们要编写的是一个 ComponentDefinition ,名为 `redis-failover`,它会创建一个 RedisFailover 的 CR ,这样刚刚添加的 Redis Operator 就可以帮助创建 Redis 集群,见[完整代码](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/definitions/redis-failover.cue)。
|
||||
|
||||
### metadata.yaml
|
||||
|
||||
这里包含了插件的元数据,即插件的名称、版本、系统要求等,可以参考[文档](https://kubevela.net/zh/docs/platform-engineers/addon/intro#%E6%8F%92%E4%BB%B6%E7%9A%84%E5%9F%BA%E6%9C%AC%E4%BF%A1%E6%81%AF%E6%96%87%E4%BB%B6)。需要注意的是,本次介绍的为 KubeVela v1.5 之后的新写法,因此需要避免使用某些不兼容的元数据字段,以下样例中包含了所有的可用元数据。
|
||||
|
||||
:::tip
|
||||
例如传统的 `deployTo.runtimeCluster` (安装至子集群)等元数据在新写法中已有代替(使用 topology Policy),应当使用新写法。可见完整代码中的 [`template.cue`](https://github.com/kubevela/catalog/blob/958a770a9adb3268e56ca4ec2ce99d2763617b15/experimental/addons/redis-operator/template.cue#L28)
|
||||
:::
|
||||
|
||||
```yaml
|
||||
# 插件名称,与目录名一致
|
||||
name: redis-operator
|
||||
# 插件描述
|
||||
description: Redis Operator creates/configures/manages high availability redis with sentinel automatic failover atop Kubernetes.
|
||||
# 展示用标签
|
||||
tags:
|
||||
- redis
|
||||
# 插件版本
|
||||
version: 0.0.1
|
||||
# 展示用图标
|
||||
icon: https://xxx.com
|
||||
# 插件所包含项目的官网地址
|
||||
url: https://github.com/spotahome/redis-operator
|
||||
# 可能依赖的其他插件,例如 fluxcd
|
||||
dependencies: []
|
||||
|
||||
# 系统版本要求
|
||||
system:
|
||||
vela: ">=v1.5.0"
|
||||
kubernetes: ">=1.19"
|
||||
```
|
||||
|
||||
## 运行插件
|
||||
|
||||
至此我们已经将插件的主要部分编写完成,下载 [完整代码](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator) 补全部分细节后,即可尝试运行。
|
||||
|
||||
下载得到 redis-operator 目录后,我们可以通过 `vela addon enable redis-operator` 安装本地的 `redis-operator` 插件。安装完成后就可以参考插件的 [README](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/README.md) 试用我们的 Redis 插件了!
|
||||
|
||||
> 这里也体现出插件的 README 的重要性,其中需要包括插件的作用、详细使用指南等,确保用户可以快速上手
|
||||
|
||||
在用户使用你编写的插件时,只需如下 **4** 行 yaml 即可在 Application 中创建包含 3 个 Node 的高可用 Redis 集群!相比于手动安装 Redis Operator 并创建 CR ,甚至逐一手动配置 Redis 集群,插件的方式极大地方便了用户。
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: redis-operator-sample
|
||||
spec:
|
||||
components:
|
||||
# This component is provided by redis-operator addon.
|
||||
# In this example, 2 redis instance and 2 sentinel instance
|
||||
# will be created.
|
||||
- type: redis-failover
|
||||
name: ha-redis
|
||||
properties:
|
||||
# You can increase/decrease this later to add/remove instances.
|
||||
replicas: 3
|
||||
```
|
||||
|
||||
只需 apply 仅仅数行的 yaml 文件,我们就轻松创建了如下图所示的整个复杂的资源。并且由于我们编写了 *资源关联规则 (Resource Topology)* ,用户可以通过 VelaUX 轻松获得刚刚创建的 Redis 集群的资源拓扑状态,了解 Application 底层资源的运行状况,不再受限于 Application Component 级别的可观测性。如图我们能直接观测到整个 Application 的拓扑,直至每个 Redis Pod ,可见图中部分 Pod 仍在准备中:
|
||||
|
||||

|
||||
|
||||
在执行 `vela exec/log/port-forward` 等命令时也可以精确地看到 Application 底层包含的资源(即支撑 Redis 集群的 3 个 Redis Pod 和 3 个 Sentinel Pod)。如果你在使用单集群,乍一看你可能不会觉得 exec 进一个 Pod 是很特殊的功能。但是一旦考虑到多集群,能够在横跨多个集群的资源中进行选择能够极大的节省时间:
|
||||
|
||||

|
||||
|
||||
使用 `vela status` 命令能获取这个 Application 的运行状态,有了资源关联规则后可以更进一步,直接通过 vela 寻找出 Redis Sentinel 的 Endpoint 来访问 Redis 集群:
|
||||
|
||||

|
||||
|
||||
## 结语
|
||||
|
||||
通过本文档,相信你已经了解插件的作用及制作插件的要点。如果你成功制作了属于自己的插件, KubeVela 社区欢迎开发者贡献插件至 [addon catalog](https://github.com/kubevela/catalog) ,这样你的插件还能够被其他 KubeVela 用户发现并使用!
|
|
@ -13,6 +13,8 @@ hide_table_of_contents: false
|
|||
|
||||
有时我们已经有一些 Terraform 云资源,这些资源可能由 Terraform 二进制程序或其他程序创建和管理。 为了获得 [使用 KubeVela 管理云资源的好处](2022-06-27-terraform-integrate-with-vela.md#part-1-glue-terraform-module-as-kubevela-capability) 或者只是在管理云资源的方式上保持一致性,我们可能希望将这些现有的 Terraform 云资源导入 KubeVela 并使用 vela 进行管理。如果我们只是创建一个描述这些云资源的应用,这些云资源将被重新创建并可能导致错误。 为了解决这个问题,我们制作了 [一个简单的 `backup_restore` 工具](https://github.com/kubevela/terraform-controller/tree/master/hack/tool/backup_restore)。 本博客将向你展示如何使用 `backup_restore` 工具将现有的 Terraform 云资源导入 KubeVela。
|
||||
|
||||
<!--truncate-->
|
||||
|
||||
## 步骤1:创建 Terraform 云资源
|
||||
|
||||
由于我们要演示如何将现有的云资源导入 KubeVela,我们需要先创建一个。 如果你已经拥有此类资源,则可以跳过此步骤。
|
||||
|
@ -23,7 +25,6 @@ hide_table_of_contents: false
|
|||
- 获得云服务凭证,在本文中,我们将使用 aws 作为示例。
|
||||
- 学习[如何使用terraform](https://www.terraform.io/language)的基础知识。
|
||||
|
||||
<!--truncate-->
|
||||
|
||||
让我们开始吧!
|
||||
|
||||
|
|
|
@ -1,5 +1,5 @@

---
title: The Canary Rollout of the Helm Chart Application Is Coming!
title: The Helm Chart Application Canary Rollout You Asked For Is Finally Here!
author: 王易可(楚岳)
author_title: KubeVela Team
author_url: https://github.com/kubevela/kubevela

@ -10,8 +10,9 @@ image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources

hide_table_of_contents: false
---

> KubeVela 1.5 was officially released recently. This release brings the community more out-of-the-box application delivery capabilities, including system observability; a new Cloud Shell terminal that brings the Vela CLI into the browser; enhanced canary rollouts; and an optimized multi-environment application delivery workflow, further improving and polishing KubeVela's highly extensible experience as an application delivery platform. In addition, the community has formally started pushing the project toward the CNCF Incubation stage, and has heard practice sharing from several benchmark users in multiple community meetings, which also demonstrates the community's healthy growth. The project has reached milestones in both maturity and adoption, thanks to the contributions of more than 200 community developers.

KubeVela 1.5 was officially released recently. This release brings the community more out-of-the-box application delivery capabilities, including system observability; a new Cloud Shell terminal that brings the Vela CLI into the browser; enhanced canary rollouts; and an optimized multi-environment application delivery workflow, further improving and polishing KubeVela's highly extensible experience as an application delivery platform. In addition, the community has formally started pushing the project toward the CNCF Incubation stage, and has heard practice sharing from several benchmark users in multiple community meetings, which also demonstrates the community's healthy growth. The project has reached milestones in both maturity and adoption, thanks to the contributions of more than 200 community developers.

<!--truncate-->

KubeVela has released five major versions over the past year, and every iteration was a leap forward. 1.1 brought multi-cluster connectivity, 1.2/1.3 brought the extension system and a friendlier developer experience, and 1.4 introduced end-to-end security. Now the 1.5 release takes us one step closer to KubeVela's vision of "making application delivery and management easier". Along the way, we have followed the same design philosophy: build a platform that automatically handles the complexity of heterogeneous underlying infrastructure without sacrificing extensibility, helping application developers move from business development to cloud-native development at low cost. Technically, it distills framework capabilities for connecting infrastructure based on the Open Application Model (OAM), covering the complete path from code to cloud and from application delivery to management. As shown in Figure 1, today's KubeVela already covers the whole chain of application definition, delivery, operations, and management, and all of these capabilities are implemented by plugging ecosystem projects in through OAM's extensibility (i.e., OAM Definitions). **In essence, every Definition turns hands-on experience with a specific capability into a reusable best-practice module, packaged as an addon for sharing within an enterprise or the community.**

@ -0,0 +1,375 @@

---
title: "A Guide to KubeVela Addons: Extend Your Platform with Custom Capabilities"
authors:
- name: 姜洪烨
  title: KubeVela Team
  url: https://github.com/charlie0129
  image_url: https://github.com/charlie0129.png
tags: [ "KubeVela", "addon", "extensibility" ]
description: A hands-on guide to building your own KubeVela addon.
image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources/KubeVela-03.png
hide_table_of_contents: false
---

KubeVela addons make it easy to extend KubeVela's capabilities. As we know, KubeVela is a highly extensible platform with a microkernel design: users can extend its system capabilities through [Definitions](https://kubevela.net/zh/docs/platform-engineers/oam/x-definition), and addons are the core mechanism for packaging and distributing these **custom extensions** together with their **dependencies**. The KubeVela community's addon catalog keeps growing, too: there are now more than 50 addons covering observability, microservices, FinOps, cloud resources, security, and many other scenarios.

This blog walks through the core mechanics of KubeVela addons and teaches you how to write a custom addon. At the end, we will show the end user's experience and how an addon fits into the KubeVela platform to provide a consistent experience.

<!--truncate-->

## Why use KubeVela addons

A typical way to use addons is through the [addon catalog](https://github.com/kubevela/catalog) maintained by the KubeVela team. It contains system extensions carefully written by the KubeVela team and community developers, published as addons so that you can download and install them with one click. For example, installing the FluxCD addon quickly gives your KubeVela Applications the ability to deploy Helm Charts.

Without KubeVela's addon mechanism, if you wanted to integrate a cloud-native capability into your own internal platform, you would probably:

1. Manually install FluxCD or a similar CRD Operator via a Helm Chart or downloaded YAML manifests.
2. Write system-integration code so that your user interface can use the capabilities of CRDs such as FluxCD in a unified way. In the KubeVela system, this means writing OAM Definitions.

In fact, before KubeVela v1.1, we did it in a similar way, which caused the following problems:

1. **Tedious operations**: users had to consult the docs on how to install FluxCD and handle any errors that might occur.
2. **Scattered resources**: users had to download different files, installing FluxCD via Helm as well as downloading system-integration configurations such as Definitions.
3. **Hard to distribute and reuse**: because Definitions had to be downloaded manually, these resources could not be distributed to users in a unified way, and no community ecosystem could form around them.
4. **No multi-cluster support**: KubeVela treats multi-cluster delivery as a first-class citizen, and manually installed system extensions are clearly hard to maintain across clusters.
5. **No version management**: users had to manually manage the versions of Definitions and controllers.

KubeVela addons were created to solve each of these problems.

## How KubeVela addons work

A KubeVela addon consists of two main parts:

- One part is the provider of the capability being installed, usually a CRD Operator/Controller. **The installation process is essentially running an OAM application**: the capabilities used to deliver an addon are exactly the same as those of a normal application.
- The other part is the glue layer between the extended capability and the KubeVela system, namely Definitions and other integration configurations. OAM Definitions expose the components, traits, and workflow steps provided by the addon to users, and also give the CRD Operator a user-friendly abstraction, so that end users only need to provide the necessary parameters according to best practices instead of understanding complex CRD fields.

![kubevela-addon-arch](https://static.kubevela.net/images/1.5/addon-mechanism.jpg)

The mechanism is shown in the figure above. Since KubeVela applications can be delivered to multiple clusters, they can also deploy the addon's CRD Operator into those clusters. Definition files only need to be used by KubeVela on the control plane, so they do not need to be deployed to the managed clusters.

:::tip
Once an addon is installed, a KubeVela application is created that contains all related resources and configurations, all of which set the KubeVela application as their OwnerReference (parent). To uninstall an addon, we only need to delete this application: the garbage-collection mechanism provided by Kubernetes automatically removes all resources marked with the OwnerReference.
:::
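
To make this concrete, here is a rough sketch of what an addon-installed object looks like on the control plane, with an `ownerReferences` entry pointing back at the addon's Application (the object, names, and uid here are illustrative assumptions, not output from a real cluster):

```yaml
# Sketch: a ConfigMap installed by an addon. Deleting the owning
# Application lets Kubernetes garbage-collect this object automatically.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-operator-topology            # illustrative name
  namespace: vela-system
  ownerReferences:
  - apiVersion: core.oam.dev/v1beta1
    kind: Application
    name: addon-redis-operator             # the Application created for the addon
    uid: 9d6f4e5c-0000-0000-0000-000000000000   # set by the API server
    controller: true
```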

Take a Redis addon as an example: it lets users use a Redis-cluster-type Component in their own applications to quickly create Redis clusters. Such an addon includes at least a Redis Operator that provides the ability to create Redis clusters (described by an Application), and a ComponentDefinition that provides the Redis-cluster component type.

So the installation process puts the Redis Operator into a KubeVela application delivered to multiple clusters, while configuration files such as the ComponentDefinition and UI extensions are deployed only to the control-plane cluster, with the application object set as their OwnerReference.

## Creating your own addon

:::tip
To make sure everything below works, please use KubeVela v1.5+.
:::

We will use the Redis addon as an example to walk through the actual process of creating a KubeVela addon from scratch. The complete code of this Redis addon is at [catalog/redis-operator](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator).

:::tip
We will cover the core knowledge involved in addon making as thoroughly as possible, but as an introductory blog we avoid going too deep into details to keep the length manageable. For the complete features and details, refer to the [custom addon documentation](https://kubevela.net/zh/docs/platform-engineers/addon/intro).
:::

**First, think about what the addon should do.** For example, suppose our Redis addon provides a `redis-failover` Component type, so that users only need to define a `redis-failover` Component in an Application to quickly create a Redis cluster.

**Then consider how to achieve that.** To provide the `redis-failover` Component type we need to define a ComponentDefinition; to provide the ability to create Redis clusters, we can use the [Redis Operator](https://github.com/spotahome/redis-operator).

With that, our overall goals are clear:

- Write the addon's application description file (an OAM Application), which will install the Redis Operator (see the full code in [`template.cue`](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue) and the [`resources/`](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/resources) directory of the addon catalog).
- Write a [ComponentDefinition](https://kubevela.net/zh/docs/platform-engineers/components/custom-component) for the `redis-failover` type (see the full code in the [`definitions/` directory](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/definitions)).

Before we start writing, though, we need to understand the directory structure of a KubeVela addon. We will explain what each file does in detail as we go; for now, a rough idea of which files exist is enough.

:::tip
The CLI command `vela addon init` creates an initial scaffold of this directory structure for you.
:::

```shell
redis-operator/
├── definitions
│   └── redis-failover.cue
├── resources
│   ├── crd.yaml
│   ├── redis-operator.cue
│   └── topology.cue
├── metadata.yaml
├── parameter.cue
├── README.md
└── template.cue
```

Let's go through them one by one:

1. `redis-operator/` is the directory name and also the addon name; keep them consistent.
2. `definitions/` holds Definitions, such as TraitDefinitions and ComponentDefinitions.
3. `redis-failover.cue` defines the `redis-failover` component type we are writing, including the parameters users provide and the details of how the component interacts with the underlying resources.
4. `resources/` holds resource files, which are later referenced in `template.cue` to form the KubeVela application that deploys the addon.
5. `crd.yaml` is the Kubernetes CustomResourceDefinition of the Redis Operator. YAML files inside `resources/` are applied to the cluster directly.
6. `redis-operator.cue` is a `webservice`-type Component that installs the Redis Operator.
7. `topology.cue` is optional; it helps KubeVela build the topology of the resources managed by an application.
8. `metadata.yaml` is the addon metadata, including its name, version, maintainers, and so on, which gives the addon catalog its overview information.
9. `parameter.cue` defines the addon's parameters, which users can use for lightweight customization when installing the addon.
10. `README.md` is for end users and contains a usage guide for the addon.
11. `template.cue` defines the complete application deployed by the addon, including an OAM application template and references to other resource objects.

:::tip
We use the CUE language extensively to orchestrate configuration when making addons. If you are not familiar with CUE, spending 10 minutes on the [introductory guide](https://kubevela.net/zh/docs/platform-engineers/cue/basic) will give you a basic understanding.
:::

### parameter.cue

```cue
parameter: {
	//+usage=Redis Operator image.
	image: *"quay.io/spotahome/redis-operator:v1.1.0" | string
	// the rest is omitted
}
```

All parameters defined in `parameter.cue` are user-customizable (similar to Helm Values) and can later be accessed in `template.cue` or in `resources/` via `parameter.<parameter-name>`. In our example, users can customize `image`, so that when we create the Redis Operator (`redis-operator.cue`) later, we can use the user-specified container image via `parameter.image`.

Parameters can not only give users install-time customization, but can also act as conditions for partial installation. For example, the `fluxcd` addon has a parameter called [`onlyHelmComponents`](https://github.com/kubevela/catalog/blob/master/addons/fluxcd/parameter.cue), which lets users deploy only the component capability needed to install Helm Charts while skipping the other controllers. If you are interested in the implementation details, see [this part of the fluxcd addon's configuration](https://github.com/kubevela/catalog/blob/master/addons/fluxcd/template.cue#L25).
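
For illustration, a conditional install of this kind can be sketched in `template.cue` like so (a sketch, not the fluxcd addon's real code; `helmController` and `imageAutomationController` stand in for components assumed to be defined in `resources/`):

```cue
// Sketch: include components conditionally based on a user parameter.
// Assumes parameter.cue declares: onlyHelmComponents: *false | bool
package main

output: {
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	spec: components: [
		// Always install the controller that handles Helm Charts.
		helmController,
		// Extra controllers are skipped when the user asks for Helm only.
		if !parameter.onlyHelmComponents {
			imageAutomationController
		},
	]
}
```

CUE's `if` guards inside list literals make this kind of partial install declarative: the element is simply absent when the condition does not hold.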

When deciding which parameters to expose for install-time customization, we should also follow these best practices to give users a better experience.

:::tip Best practices
- Do not expose a large number of fine-grained parameters in `parameter.cue`; abstracting many details into a few parameters is the better approach.
- Provide default values for parameters (like the `image` parameter in the sample) or mark them as optional (like the `clusters` parameter in the sample), making sure users get a working configuration with the defaults alone.
- Provide usage descriptions for parameters (via comment annotations, as in the sample).
- Keep parameters consistent across addon versions to avoid incompatible upgrades.
:::

### `template.cue` and the `resources/` directory

This is where the application description file, an OAM Application, lives. It describes the actual addon installation process. In our case it mainly contains the Redis Operator, which gives the cluster the ability to manage Redis clusters.

`template.cue` and the `resources/` directory are essentially the same thing: both are parts of the KubeVela application, written as CUE files in the same package.

Why have a `resources/` directory at all, then? Apart from historical reasons, it is mainly for readability: `template.cue` can get very long when the Application contains many resources, so we move resources into `resources/` to keep things readable. By convention, the skeleton of the Application goes into `template.cue`, while its Components, Traits, and other contents go into the `resources/` directory.

#### template.cue

`template.cue` defines the skeleton of the application. Most of it is boilerplate; see the comments in the code block for what each part does.

```cue
// template.cue: the application description file

// The package name must match the package of the CUE files in the
// resources directory, so their contents can be referenced here.
package main

// Most fields in the Application template are boilerplate;
// the only one you need to pay attention to is spec.components.

output: {
	// This is a classic OAM Application.
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	// No metadata is needed.
	spec: {
		components: [
			// Create the Redis Operator.
			redisOperator // defined in resources/redis-operator.cue
		]
		policies: [
			// Policies specify the namespace to install the addon into,
			// whether to install it into managed clusters, and so on.
			// Mostly boilerplate; see the complete code of this example:
			// https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/template.cue
			// Documentation: https://kubevela.net/zh/docs/end-user/policies/references
		]
	}
}

// Define resource topology rules that glue resources together (detailed later).
// Documentation: https://kubevela.net/zh/docs/reference/topology-rule
outputs: topology: resourceTopology // defined in resources/topology.cue
```
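
As a hedged illustration of what the elided policies typically contain (this is not the redis-operator addon's actual configuration; see the linked `template.cue` for that), a topology policy might look like:

```cue
// Sketch: policies an addon application commonly carries.
// `parameter.clusters` is an assumed optional parameter from parameter.cue.
policies: [
	{
		// The topology policy selects the clusters the operator is delivered to.
		type: "topology"
		name: "deploy-operator"
		properties: {
			if parameter.clusters != _|_ {
				clusters: parameter.clusters
			}
			if parameter.clusters == _|_ {
				// Fall back to all managed clusters.
				clusterLabelSelector: {}
			}
		}
	},
]
```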

During addon installation, the system mainly cares about two key fields:

- One is `output`, which defines the application corresponding to the addon; inside it, `spec.components` defines the components to deploy, and in our example it references the `redisOperator` component stored in the `resources/` directory. The Application object in `output` is not a strict Kubernetes object: the contents of its `metadata` (mainly the addon name) are injected automatically during installation.
- The other is `outputs`, which defines configuration beyond the regular application. Any extra Kubernetes objects you want deployed together with the addon can be defined here. Note that the objects in `outputs` must follow the Kubernetes API.

#### Resource files in `resources/`

Here we use a `webservice`-type Component to install the Redis Operator. If you can accept a dependency on FluxCD, you could also install a Helm Chart directly with a `helm`-type Component (since the `helm` component type is mainly provided by the FluxCD addon). However, one principle of addon writing is to minimize external dependencies, so we use KubeVela's built-in `webservice` type instead of `helm`.

```cue
// resources/redis-operator.cue

// The package name matches template.cue, so that the redisOperator
// below can be referenced there.
package main

redisOperator: {
	// This is a Component of the OAM Application; it creates a Redis Operator.
	// https://kubevela.net/zh/docs/end-user/components/references
	name: "redis-operator"
	type: "webservice"
	properties: {
		// The Redis Operator image; parameter.image is the user-customizable
		// parameter defined in parameter.cue.
		image:           parameter.image
		imagePullPolicy: "IfNotPresent"
	}
	traits: [
		// omitted
	]
}
```

You can read the comments in the code block to understand what each field does.

#### Resource-gluing capabilities provided by KubeVela

One feature worth noting is [*resource topology rules (Resource Topology)*](https://kubevela.net/zh/docs/reference/topology-rule). They are not required, but they help KubeVela build the topology of the resources managed by an application. This is how KubeVela glues all kinds of resources into an Application, and it is especially useful when we use Kubernetes Custom Resources (CRs).

```cue
// resources/topology.cue

package main

import "encoding/json"

resourceTopology: {
	apiVersion: "v1"
	kind:       "ConfigMap"
	metadata: {
		name:      "redis-operator-topology"
		namespace: "vela-system"
		labels: {
			"rules.oam.dev/resources":       "true"
			"rules.oam.dev/resource-format": "json"
		}
	}
	data: rules: json.Marshal([{
		parentResourceType: {
			group: "databases.spotahome.com"
			kind:  "RedisFailover"
		}
		// A RedisFailover CR creates the following three kinds of resources.
		childrenResourceType: [
			{
				apiVersion: "apps/v1"
				kind:       "StatefulSet"
			},
			// KubeVela has built-in topology for resources such as Deployment,
			// so there is no need to go further down.
			{
				apiVersion: "apps/v1"
				kind:       "Deployment"
			},
			{
				apiVersion: "v1"
				kind:       "Service"
			},
		]
	}])
}
```

In this example, a `redis-failover`-type Component creates a CR named RedisFailover. But without resource topology rules, if you use a RedisFailover in your Application, KubeVela does not know that there are Deployments under it, even though we know a RedisFailover controls several Redis Deployments. With *resource topology rules* we can *tell* KubeVela what we know about RedisFailover, so that it can build the full topology hierarchy of the resources managed by the application. You then get many useful features from KubeVela; see [Running the addon](#running-the-addon) for the effect.

:::tip
Resource topology brings many useful features; most importantly, it gives end users a unified experience when using extended capabilities:

- The VelaUX resource topology view covers everything from the application down to the underlying Pods, across multiple clusters.
- A unified `vela exec` command can execute commands in the underlying containers associated with any component type, across multiple clusters.
- A unified `vela port-forward` command can forward ports of the underlying containers associated with any component type, across multiple clusters.
- A unified `vela log` command can view the logs of the underlying containers associated with any component type, across multiple clusters.
- A unified `vela status --pod/--endpoint` command can view the status of the underlying containers associated with any component type and obtain accessible endpoints, across multiple clusters.
:::

### The `definitions/` directory

The `definitions/` directory holds KubeVela [Definitions](https://kubevela.io/docs/getting-started/definition), including ComponentDefinitions, TraitDefinitions, and so on. **This is the most important part of an addon, because it determines which capabilities end users get after installing it.** With the component, trait, and workflow-step types defined here, end users can use them in their applications.

Writing Definitions inside an addon is the same as the regular workflow. That is a big topic in itself, so we will not expand on it here; you can learn the details from the corresponding documentation:

- [Custom Components (ComponentDefinition)](https://kubevela.io/docs/platform-engineers/components/custom-component)
- [Custom Traits (TraitDefinition)](https://kubevela.io/docs/platform-engineers/traits/customize-trait)
- [Custom Policies (PolicyDefinition)](https://kubevela.io/docs/platform-engineers/policy/custom-policy)
- [Custom Workflow Steps (WorkflowStepDefinition)](https://kubevela.io/docs/platform-engineers/workflow/workflow)

In this example, we write the Redis component type mainly by following [Custom Components](https://kubevela.net/zh/docs/platform-engineers/components/custom-component) and the [Redis Operator docs](https://github.com/spotahome/redis-operator/blob/master/README.md). We name the component type `redis-failover`; it creates a RedisFailover CR, so that the Redis Operator we just added can create the Redis cluster. See the [complete code](https://github.com/kubevela/catalog/blob/master/experimental/addons/redis-operator/definitions/redis-failover.cue).
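
To make the shape of such a definition concrete, here is a heavily simplified sketch (not the addon's actual `redis-failover.cue`; the RedisFailover spec fields below follow the Redis Operator docs but should be double-checked there):

```cue
// Sketch: a ComponentDefinition that turns a `replicas` parameter
// into a RedisFailover CR (simplified for illustration).
"redis-failover": {
	type: "component"
	attributes: workload: type: "autodetects.core.oam.dev"
}
template: {
	parameter: {
		//+usage=Number of Redis and Sentinel instances.
		replicas: *3 | int
	}
	output: {
		apiVersion: "databases.spotahome.com/v1"
		kind:       "RedisFailover"
		spec: {
			sentinel: replicas: parameter.replicas
			redis: replicas:    parameter.replicas
		}
	}
}
```

The `template.output` is the CR that KubeVela renders and applies for each component instance, which is how the addon-installed operator ends up doing the real work.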

### metadata.yaml

This contains the addon's metadata: its name, version, system requirements, and so on; see the [documentation](https://kubevela.net/zh/docs/platform-engineers/addon/intro#%E6%8F%92%E4%BB%B6%E7%9A%84%E5%9F%BA%E6%9C%AC%E4%BF%A1%E6%81%AF%E6%96%87%E4%BB%B6). Note that what we introduce here is the new format for KubeVela v1.5 and later, so certain incompatible metadata fields must be avoided. The sample below contains all the available metadata.

:::tip
For example, legacy metadata such as `deployTo.runtimeCluster` (install into managed clusters) has a replacement in the new format (the topology policy) and the new form should be used. See [`template.cue`](https://github.com/kubevela/catalog/blob/958a770a9adb3268e56ca4ec2ce99d2763617b15/experimental/addons/redis-operator/template.cue#L28) in the complete code.
:::

```yaml
# Addon name, consistent with the directory name
name: redis-operator
# Addon description
description: Redis Operator creates/configures/manages high availability redis with sentinel automatic failover atop Kubernetes.
# Tags for display
tags:
- redis
# Addon version
version: 0.0.1
# Icon for display
icon: https://xxx.com
# Website of the project this addon packages
url: https://github.com/spotahome/redis-operator
# Other addons it may depend on, e.g. fluxcd
dependencies: []

# System version requirements
system:
  vela: ">=v1.5.0"
  kubernetes: ">=1.19"
```

## Running the addon

At this point we have finished writing the main parts of the addon. Download the [complete code](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator) to fill in the remaining details, and you can try running it.

Once you have the `redis-operator` directory, you can install this local `redis-operator` addon with `vela addon enable redis-operator`. Installing from a local directory like this also makes debugging convenient while you build the addon.

After installation, follow the addon's [README](https://github.com/kubevela/catalog/tree/master/experimental/addons/redis-operator/README.md) to try out our Redis addon!

:::tip
This also shows the importance of an addon's README, which should cover what the addon does and a detailed usage guide, so that users can get started quickly.
:::

When end users use the addon you wrote, the **4** lines of component YAML below are all it takes to create a highly available Redis cluster with 3 nodes in an Application! Compared with manually installing the Redis Operator and creating CRs, let alone configuring each Redis cluster by hand, addons make users' lives dramatically easier.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: redis-operator-sample
spec:
  components:
    # This component is provided by the redis-operator addon.
    # In this example, 3 Redis instances and 3 Sentinel instances
    # will be created.
    - type: redis-failover
      name: ha-redis
      properties:
        # You can increase/decrease this later to add/remove instances.
        replicas: 3
```

By applying a YAML file of only a few lines, we have easily created the entire set of complex resources shown in the figure below. Moreover, since we wrote *resource topology rules (Resource Topology)*, users can easily see the resource topology of the newly created Redis cluster in VelaUX and understand the health of the Application's underlying resources, no longer limited to Component-level observability. As shown, we can observe the topology of the whole Application down to every Redis Pod; note that some Pods in the figure are still being prepared:

![redis-topology-graph](/img/blog-addon/topology-graph.jpg)

When running commands such as `vela exec/log/port-forward`, you can also see precisely the resources underlying the Application (that is, the 3 Redis Pods and 3 Sentinel Pods supporting the Redis cluster).

![redis-exec](/img/blog-addon/exec.png)

:::tip
If you are using a single cluster, being able to `exec` into a Pod may not seem special at first glance. But once multiple clusters are involved, being able to pick among resources spanning several clusters in the same unified way as in a single cluster saves a great deal of time.
:::

With the `vela status` command you can get the running status of this Application. With resource topology rules we can go a step further and find the endpoint of Redis Sentinel directly through vela to access the Redis cluster:

![redis-status-endpoint](/img/blog-addon/status-endpoint.png)

## Conclusion

Through this article, you should now understand what addons are for and the key points of making one. With the addon system, we gain the following advantages:

1. Platform capabilities are packaged into an addon marketplace that is easy to install, convenient to distribute and reuse, and able to grow a community ecosystem.
2. The powerful capabilities of CUE and KubeVela applications are fully reused to flexibly define infrastructure resources and distribute them across multiple clusters.
3. No matter what resource type an extension brings, it plugs into the application system and gives end users a consistent experience.

Finally, if you have successfully built your own addon, the KubeVela community warmly welcomes developers to contribute it to the [addon catalog](https://github.com/kubevela/catalog), so that your addon can be discovered and used by other KubeVela community users!