kubevela 1.7 release blog

Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>

commit 89771e35c6 (parent 5cfcafcfbe)

@@ -16,6 +16,8 @@ The cloud-native landscape is formed by a fast-growing ecosystem of tools with t

Kubernetes offers a comprehensive set of entities that enables any potential application to be deployed into it, regardless of its complexity. This, however, has a significant impact on adoption: Kubernetes is becoming as complex as it is powerful, and that translates into a steep learning curve for newcomers to the ecosystem. This has generated a new trend focused on providing developers with tools that improve their day-to-day activities without losing the capabilities of the underlying system.

<!--truncate-->

## Napptive Application Platform

The NAPPTIVE Playground offers a cloud-native application platform focused on providing a simplified method to operate on Kubernetes clusters and deploy complex applications without needing to work with low-level Kubernetes entities. This is especially important to accommodate non-developer user personas as the computing resources of a company are consolidated into multi-purpose, multi-tenant clusters. Data scientists, business analysts, modeling experts, and many more can benefit from simple solutions that do not require any knowledge of cloud computing infrastructure or orchestration systems such as Kubernetes, and that enable them to run their applications with ease on the existing infrastructure.

@@ -22,6 +22,8 @@ KubeVela is a modern software platform that makes delivering and operating appli

KubeVela cares about the whole lifecycle of applications, including both the Day-1 delivery and the Day-2 operation stages. It can connect with a wide range of Continuous Integration tools, such as Jenkins or GitLab CI, and helps users deliver and operate applications across hybrid environments.



<!--truncate-->

# Why KubeVela?

## Challenges and Difficulties

Nowadays, the rapid growth of cloud-native infrastructure gives users more and more capabilities for deploying applications, such as high availability and security, but it also exposes an increasing amount of complexity directly to application developers.

@@ -20,6 +20,8 @@ KubeVela is infrastructure agnostic and **application-centric**. It allows you t

Open Application Model (OAM) is a set of standard yet higher-level abstractions for modeling cloud-native applications on top of today’s hybrid and multi-cloud environments. You can find more conceptual details here.

<!--truncate-->

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application

@@ -24,6 +24,8 @@ Incubated in the OAM model, KubeVela focuses on helping enterprises build unifie

As mentioned before, OpenYurt supports the access of edge nodes, allowing users to manage edge nodes by operating native Kubernetes. "Edge nodes" represent computing resources closer to users, such as virtual machines or physical servers in a nearby data center. After you add them through OpenYurt, these edge nodes are converted into nodes that can be used in Kubernetes. OpenYurt uses NodePool to describe a group of edge nodes in the same region. Once basic resource management is in place, we have the following core requirements for orchestrating and deploying applications to different NodePools in a cluster.

<!--truncate-->

1. **Unified Configuration:** If you manually modify each resource to be distributed, a lot of manual intervention is required, which is error-prone. We need a unified way to configure parameters, which not only facilitates batch management operations but also integrates with security and auditing to meet enterprise risk-control and compliance requirements.
2. **Differential Deployment:** Most workloads deployed to different NodePools have the same attributes, but there are still configuration differences for individual pools. The key is how to set the parameters related to node selection. For example, `NodeSelector` can instruct the Kubernetes scheduler to schedule workloads to a particular NodePool.
3. **Scalability:** The cloud-native ecosystem is booming. Both workload types and O&M functions keep growing. We need the overall application architecture to be extensible so that we can fully benefit from the dividends of the cloud-native ecosystem and meet business needs flexibly.

@@ -13,7 +13,11 @@ hide_table_of_contents: false

[Serverless Application Engine (SAE)](https://www.alibabacloud.com/product/severless-application-engine) is a Kubernetes-based cloud product that combines the Serverless architecture and the microservice model. As an iterative cloud product, it has encountered many challenges in the process of rapid development. **How can we solve these challenges in the booming cloud-native era and perform reliable and fast architecture upgrades?** The SAE team and the KubeVela community worked closely to address these challenges and came up with a replicable open-source solution, KubeVela Workflow.

This article describes how to use KubeVela Workflow to upgrade the architecture of SAE and interprets multiple practice scenarios.

<!--truncate-->

## Challenges in the Serverless Era

SAE is an application hosting platform for business applications and microservice architectures. It is a Kubernetes-based cloud product that combines the Serverless architecture and the microservice model.



@@ -18,6 +18,8 @@ The CNCF Technical Oversight Committee (TOC) has voted to accept KubeVela as a C



<!--truncate-->

The KubeVela project evolved from [oam-kubernetes-runtime](https://github.com/crossplane/oam-kubernetes-runtime) and was developed with [bootstrap contributions](https://github.com/kubevela/community/blob/main/OWNERS.md#bootstrap-contributors) from more than eight different organizations, including Alibaba Cloud, Microsoft, Upbound, and more. It was publicly announced as an open-source project in November 2020, released as v1.0 in April 2021, and accepted as a CNCF sandbox project in June 2021. The project has more than [260 contributors](https://kubevela.devstats.cncf.io/d/22/prs-authors-table?orgId=1) and committers across multiple continents, from organizations like Didi, JD.com, JiHu GitLab, SHEIN, and more.

“KubeVela pioneered a path for delivering applications across multi-cloud/multi-cluster environments with unified yet extensible abstractions,” said Lei Zhang, CNCF TOC sponsor. “This innovation unlocked the next-generation software delivery experience and filled the ‘last mile’ gap in existing practices, which focus on the ‘deploy’ stage rather than ‘orchestrating’. We are excited to welcome more app-centric tools/platforms in the CNCF community and look forward to watching the adoption of KubeVela grow to a new level in the fast-growing application delivery ecosystem.”

@@ -0,0 +1,422 @@
---
title: "Interpreting KubeVela 1.7: Taking Over Your Existing Workloads"
author: Jianbo Sun
author_title: KubeVela Team
author_url: https://github.com/kubevela/KubeVela
author_image_url: https://avatars.githubusercontent.com/u/2173670
tags: [ KubeVela, release-note, Kubernetes, DevOps, CNCF, CI/CD, Application delivery, Adopt workloads]
description: "This article interprets the release of KubeVela 1.7."
image: https://raw.githubusercontent.com/oam-dev/KubeVela.io/main/docs/resources/KubeVela-03.png
hide_table_of_contents: false
---

KubeVela 1.7 has now been released for some time, and during this period KubeVela was officially promoted to a CNCF incubation project, marking a new milestone. Version 1.7 is itself a turning point: because KubeVela has focused on the design of an extensible system from the beginning, the requirements for core controller functionality have gradually converged, freeing up more resources to focus on user experience, ease of use, and performance. In this article, we will highlight the prominent features of version 1.7, such as workload takeover and performance optimization.

## Taking Over Your Existing Workloads

Taking over existing workloads has always been a highly demanded requirement within the community, with a clear scenario: existing workloads can be naturally migrated to the OAM standard and managed uniformly by KubeVela's application delivery control plane. The workload takeover feature also allows reuse of VelaUX's UI console functions, including a series of operation and maintenance traits, workflow steps, and a rich plugin ecosystem. In version 1.7, we officially released this feature. Before diving into the specific operation details, let's first get a basic understanding of how it works.

### The "read-only" and "take-over" policies

To meet the needs of different usage scenarios, KubeVela provides two modes for unified management. **One is the "read-only" mode, which is suitable for organizations that already have a self-built platform that retains primary control over existing workloads; the new KubeVela-based platform can only observe these applications in a read-only manner. The other is the "take-over" mode, which is suitable for users who want to migrate their workloads directly to the KubeVela system and achieve complete unified management.**

<!--truncate-->

* The "read-only" mode, as the name suggests, does not perform any "write" operations on resources. Workloads managed in read-only mode can be visualized through KubeVela's toolset (such as the CLI and VelaUX), which satisfies the need for unified viewing and observability. At the same time, when an application generated under read-only mode is deleted, the underlying workload resources will not be reclaimed. If the underlying workload is modified by other controllers or by hand, KubeVela can also observe these changes.

* The "take-over" mode means that the underlying workloads will be fully managed by KubeVela, just like any other workload created directly through the KubeVela system. The update, deletion, and the rest of the workload lifecycle will be fully controlled by KubeVela's application system. By default, modifications to workloads made by other systems will no longer take effect and will be reverted by KubeVela's state-keeping control loop, unless you add other management policies (such as apply-once).

The take-over mode is declared using KubeVela's policy system, as shown below:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

In the "read-only" policy, we can define multiple read-only rules. In the example above, the read-only selector hits the "Deployment" resource type, which means that only Deployment resources are read-only: we can still create and modify resources such as "Ingress" and "Service" through traits, but modifying the replica count of the Deployment with the "scaler" trait will not take effect.

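For instance, here is a minimal sketch of that idea, assuming the community `gateway` trait (with `domain` and `http` properties) is installed. Added to the read-only application above, the gateway trait still creates a new Ingress, because only Deployment resources are covered by the read-only rule:

```yaml
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
      traits:
        # creates an Ingress/Service: allowed, because the read-only rule only covers "Deployment"
        - type: gateway
          properties:
            domain: nginx.example.com
            http:
              "/": 80
        # tries to change the Deployment replica count: ignored under the read-only rule
        - type: scaler
          properties:
            replicas: 5
```
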
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: take-over
spec:
  components:
    - name: nginx-take-over
      type: k8s-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

The "take-over" policy also includes a series of selectors to ensure that the resources being taken over are controllable. In the example above, without the "take-over" policy the operation would fail if a Deployment named "nginx" already exists in the system, because the resource already exists. On one hand, the take-over policy ensures that already-existing resources can be brought under management when creating the application; on the other hand, it allows the existing workload configuration to be reused, and only changes such as the replica count from the "scaler" trait are "patched" onto the original configuration as part of the new configuration.

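If you also want KubeVela to stop reverting drift after the takeover (for example, while an HPA or another controller still adjusts the Deployment), the take-over policy can be combined with other management policies such as apply-once, as mentioned earlier. A minimal sketch, assuming the built-in `apply-once` policy:

```yaml
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
    # apply-once: apply the configuration once and tolerate later drift on the taken-over resources
    - type: apply-once
      name: apply-once
      properties:
        enable: true
```
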
### Taking Over Workloads with the Command Line

After learning about the take-over mode, you might wonder whether there is an easy way to take over workloads with a single command. Yes, KubeVela's command line provides such a convenient way to take over workloads such as common Kubernetes resources and Helm releases. Specifically, the vela CLI automatically recognizes the resources in the system and assembles them into an application for takeover. The core principle in designing this feature is that **"taking over resources must not trigger a restart of the underlying workloads"**.

As shown below, by default `vela adopt` manages the resources in "read-only" mode. Simply specify the type, namespace, and name of the native resources you want to take over, and an Application object for the takeover will be generated automatically. The generated application spec is strictly consistent with the actual fields in the cluster.

```shell
$ vela adopt deployment/default/example configmap/default/example
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  labels:
    app.oam.dev/adopt: native
  name: example
  namespace: default
spec:
  components:
  - name: example.Deployment.example
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: example
          namespace: default
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: example
          template:
            metadata:
              labels:
                app: example
            spec:
              containers:
              - image: nginx
                imagePullPolicy: Always
                name: nginx
              restartPolicy: Always
              ...
    type: k8s-objects
  - name: example.config
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - example.Deployment.example
          - example.config
    type: read-only
```

The takeover types supported by default and their corresponding resource kinds are as follows:

- crd: ["CustomResourceDefinition"]
- ns: ["Namespace"]
- workload: ["Deployment", "StatefulSet", "DaemonSet", "CloneSet"]
- service: ["Service", "Ingress", "HTTPRoute"]
- config: ["ConfigMap", "Secret"]
- sa: ["ServiceAccount", "Role", "RoleBinding", "ClusterRole", "ClusterRoleBinding"]
- operator: ["MutatingWebhookConfiguration", "ValidatingWebhookConfiguration", "APIService"]
- storage: ["PersistentVolume", "PersistentVolumeClaim"]

If you want to switch the application to take-over mode and deploy it directly to the cluster, just add a few parameters:

```shell
vela adopt deployment/default/example --mode take-over --apply
```

In addition to native resources, the vela command line also supports taking over workloads created by Helm releases:

```shell
vela adopt mysql --type helm --mode take-over --apply --recycle -n default
```

The above command manages the "mysql" Helm release in the "default" namespace in "take-over" mode. Specifying `--recycle` cleans up the original Helm release metadata after a successful deployment.

Once the workloads have been taken over, the corresponding KubeVela Applications have been generated and the related operations are integrated with the KubeVela system: you can see the adopted applications in the VelaUX console, and you can also view and operate them through the other vela command line functions.

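For example, a couple of ordinary follow-up commands you could run against the adopted application (the application name `example` comes from the adoption output above):

```shell
# list applications in the namespace, including the adopted one
vela ls -n default

# inspect the status of the adopted application and its managed resources
vela status example -n default
```
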
You can also take over all the workloads in a namespace in batches with a single command. Based on KubeVela's resource topology capabilities, the system automatically recognizes the associated resources and assembles them into a complete application. For custom resources such as CRDs, KubeVela also supports custom rules for the association relationships.

```shell
vela adopt --all --apply
```

This command recognizes the resources in the current namespace and their association relationships based on the built-in resource topology rules, and takes over the applications accordingly. Taking a Deployment as an example, the automatically adopted application looks like the following; besides the main Deployment workload, it also takes over its associated resources, including a ConfigMap, a Service, and an Ingress.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: test2
  namespace: default
spec:
  components:
  - name: test2.Deployment.test2
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: test2
          namespace: default
        spec:
          ...
    type: k8s-objects
  - name: test2.Service.test2
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: test2
          namespace: default
        spec:
          ...
    type: k8s-objects
  - name: test2.Ingress.test2
    properties:
      objects:
      - apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: test2
          namespace: default
        spec:
          ...
    type: k8s-objects
  - name: test2.config
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: record-event
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - test2.Deployment.test2
          - test2.Service.test2
          - test2.Ingress.test2
          - test2.config
    type: read-only
```

The demonstration result is shown below:

![adopt-all](https://static.kubevela.net/images/1.7/adopt-all.gif)

If you want to take over custom resources using custom resource topology rules, you can use the following command:

```shell
vela adopt <your-crd> --all --resource-topology-rule=<your-rule-file.cue>
```

### Flexible Definition of Takeover Rules

Given KubeVela's highly extensible design principles, the workloads encountered during adoption and the ways they need to be taken over vary widely. Therefore, we also designed a fully extensible and programmable way to take over workloads. In fact, the one-click takeover capability of the command line is itself a [special case](https://github.com/kubevela/kubevela/blob/master/references/cli/adopt-templates/default.cue) built on KubeVela's extensible takeover rules. The core idea is to define a configuration conversion rule in CUE and then specify that rule when executing the `vela adopt` command, as shown below.

```shell
vela adopt deployment/my-workload --adopt-template=my-adopt-rule.cue
```

This mode is only suitable for advanced users, so we will not go into too much detail here. If you want to learn more, refer to the [official documentation](https://kubevela.net/zh/docs/end-user/policies/resource-adoption) on workload takeover.

## Significant Performance Optimization

Performance optimization is another major highlight of this release. Based on the practical experience of different users in the community, **we have improved the overall controller performance, the capacity of a single application, and the overall application processing throughput by 5 to 10 times** without changing the default resource quotas. This also includes some changes to default configurations, trimming parameters for some niche scenarios that affect performance.

In terms of single-application capacity, a KubeVela application may contain a large number of actual Kubernetes objects, which often causes the ResourceTracker that records the actual resource state behind the application, as well as the ApplicationRevision object that records version information, to exceed the 2 MB size limit of a single Kubernetes object. In version 1.7, we added zstd compression and enabled it by default, which compresses these resources by [nearly 10 times](https://github.com/kubevela/kubevela/pull/5090). This also means that **the resource capacity a single KubeVela Application can support has increased by 10 times**.

In addition, records such as application and component revisions grow proportionally with the number of applications: with a default of 10 recorded application revisions, the number of stored objects grows tenfold with the number of applications. Because of the controller's list-watch mechanism, these extra objects occupy controller memory and cause a significant increase in memory usage. Many users (such as GitOps users) already have their own version management systems, so to avoid wasting memory we changed the default upper limit of application revision records from 10 to 2. Component revisions, a relatively niche feature, are now disabled by default. **This reduces the controller's overall memory consumption to one third of the original.**

Some other parameters were also adjusted, including reducing the number of historical revisions recorded for Definitions from 20 to 2 and increasing the default QPS limit for Kubernetes API interactions from 100 to 200. In future versions, we will continue to optimize controller performance.

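If the new defaults do not fit your setup (for example, you rely on KubeVela's own revision history for rollback), these limits can typically be tuned at install time. A sketch, assuming the Helm chart of recent KubeVela versions exposes `applicationRevisionLimit` and `definitionRevisionLimit` values; verify the value names against your chart version:

```shell
# hypothetical tuning example; check `helm show values kubevela/vela-core` first
helm upgrade --install kubevela kubevela/vela-core -n vela-system \
  --set applicationRevisionLimit=10 \
  --set definitionRevisionLimit=20
```
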
## Usability Improvements

In addition to the core feature updates and performance improvements, this release also improves the usability of many features.

### Client-Side Multi-Environment Resource Rendering

"Dry run" is a popular concept in Kubernetes: resources are evaluated without being applied to the cluster, to check whether the configuration is valid. KubeVela provides this feature as well; besides checking whether resources can run, it also translates the OAM application abstraction into native Kubernetes APIs, so the conversion from application abstraction to actual resources can happen entirely on the client side. The new capability in version 1.7 is the ability to pass different files to dry run, generating different actual resources.

For example, we can write different policy and workflow files for different environments, such as "test-policy.yaml" and "prod-policy.yaml". This way, the same application can be combined with different policies and workflows on the client side to generate different underlying resources:

- Render for the test environment:

```shell
vela dry-run -f app.yaml -f test-policy.yaml -f test-workflow.yaml
```

- Render for the production environment:

```shell
vela dry-run -f app.yaml -f prod-policy.yaml -f prod-workflow.yaml
```

Here is the content of `app.yaml`, which references an external workflow:

```yaml
# app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-vela-app
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        ports:
          - port: 8000
            expose: true
      traits:
        - type: scaler
          properties:
            replicas: 1
  workflow:
    ref: deploy-demo
```

The contents of `prod-policy.yaml` and `prod-workflow.yaml` are as follows:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: env-prod
type: topology
properties:
  clusters: ["local"]
  namespace: "prod"
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: ha
type: override
properties:
  components:
    - type: webservice
      traits:
        - type: scaler
          properties:
            replicas: 2
```

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Workflow
metadata:
  name: deploy-demo
  namespace: default
steps:
  - type: deploy
    name: deploy-prod
    properties:
      policies: ["ha", "env-prod"]
```

The corresponding YAML files for the test environment can be written in the same way by changing the parameters. This feature is particularly useful for users who use KubeVela as a client-side abstraction tool and combine it with tools such as Argo to synchronize resources.

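As an illustration of that, a possible `test-policy.yaml` counterpart (the "test" namespace and single replica here are hypothetical values) could look like this, together with a `test-workflow.yaml` whose `deploy` step references `["ha", "env-test"]`:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: env-test
type: topology
properties:
  clusters: ["local"]
  namespace: "test"
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: ha
type: override
properties:
  components:
    - type: webservice
      traits:
        - type: scaler
          properties:
            replicas: 1
```
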
### Enhanced Application Deletion

In many special scenarios, deleting applications has been a painful experience. In version 1.7, we added some convenient ways to delete applications smoothly in these special cases, as shown in the examples after this list.

- **Deleting certain workloads when a cluster is disconnected:** We provide an interactive way to delete resources, which lets you select the underlying workloads by cluster name, namespace, and resource type, so that resources involved in special scenarios such as a disconnected cluster can be removed.

  ![interactive-delete](https://static.kubevela.net/images/1.7/delete-resource.gif)

- **Retaining the underlying resources when deleting an application:** If you only want to delete the application metadata while keeping the underlying workloads and configuration, you can use the `--orphan` flag when deleting the application.

- **Deleting applications after the controller has been uninstalled:** If you have uninstalled the KubeVela controller but find leftover applications that were not cleaned up, you can use the `--force` flag to delete them.

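A minimal sketch of the last two cases (the application name `first-vela-app` is just an example):

```shell
# delete the application but keep the underlying workloads and configuration
vela delete first-vela-app --orphan

# force-delete a leftover application after the controller has been uninstalled
vela delete first-vela-app --force
```
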
### Custom Output After Addon Installation

For the KubeVela addon system, we added a `NOTES.cue` file that allows addon authors to dynamically print a message when installation completes. For example, the `NOTES.cue` file of the Backstage addon looks like this:

```cue
info: string
if !parameter.pluginOnly {
	info: """
		By default, the backstage app is strictly serving in the domain `127.0.0.1:7007`, check it by:

			vela port-forward addon-backstage -n vela-system

		You can build your own backstage app if you want to use it in other domains.
		"""
}
if parameter.pluginOnly {
	info: "You can use the endpoint of 'backstage-plugin-vela' in your own backstage app by configuring the 'vela.host', refer to example https://github.com/wonderflow/vela-backstage-demo."
}
notes: (info)
```

The addon's output will then vary based on the parameters the user passes during installation.

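The `parameter.pluginOnly` flag referenced above is an addon parameter. A minimal sketch of how such a flag might be declared in the addon's `parameter.cue` (the actual contents of the Backstage addon's parameter file are assumed here, not quoted):

```cue
parameter: {
	// +usage=Only install the Vela plugin pieces instead of the full Backstage app (assumed parameter)
	pluginOnly: *false | bool
}
```
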
### Enhanced Workflow Capabilities

In version 1.7, we enhanced workflows with more fine-grained controls:

- A failed workflow can be restarted from a specific step:

```shell
vela workflow restart <app-name> --step=<step-name>
```

- Workflow step names can be left blank and will be generated automatically by the webhook.
- Parameter passing in workflows now supports overriding existing parameters.

In addition, this version adds [a series of new workflow steps](https://kubevela.net/docs/end-user/workflow/built-in-workflow-defs), a typical one being `build-push-image`, which lets users build a container image and push it to a registry within the workflow. While workflow steps are executing, you can check the logs of a specific step with `vela workflow logs <name> --step <step-name>`. The full list of built-in workflow steps can be found in the official documentation.

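For illustration, here is a rough sketch of a workflow using such a step; the property names (`context`, `dockerfile`, `image`, `credentials`) and values are assumptions, so check the built-in step reference for the exact schema:

```yaml
  workflow:
    steps:
      - name: build
        type: build-push-image
        properties:
          # assumed properties; see the built-in workflow step docs for the real schema
          context: https://github.com/example/app.git#main
          dockerfile: ./Dockerfile
          image: registry.example.com/example/app:v1.0.0
          credentials:
            image:
              name: image-registry-secret
      - name: deploy
        type: deploy
        properties:
          policies: ["env-prod"]
```
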
### Enhanced VelaUX Capabilities

The VelaUX console has also been enhanced in version 1.7, including:

* Richer application workflow orchestration, supporting the full set of workflow capabilities, including sub-steps, input/output, timeouts, conditional statements, and step dependencies. Workflow status viewing is also more comprehensive, with historical workflow records, step details, step logs, and input/output information available for query.
* Application version rollback, allowing users to view the differences between multiple versions of an application and select a specific version to roll back to.
* Multi-tenancy support, with stricter multi-tenant permission limits aligned with the Kubernetes RBAC model.

## What's Next

KubeVela 1.8 is already in full swing and is expected to meet you at the end of March. We will further enhance the following aspects:

- Scalability and stability of the KubeVela core controller: provide a sharding solution for horizontal controller scaling, optimize and benchmark controller performance for a scale of around 10,000 applications in multi-cluster scenarios, and publish a new performance evaluation for the community.
- Out-of-the-box canary release capabilities in VelaUX, integrating with observability addons for an interactive release process. At the same time, VelaUX will form an extensible framework, providing configuration capabilities for customizing the UI and supporting business-specific extensions and integrations.
- Enhanced GitOps workflow capabilities, supporting the complete VelaUX experience for applications synchronized from Git repositories.

If you want to learn more about our plans, become a contributor, or partner with us, you can contact us via the [community](https://github.com/kubevela/community). We look forward to your participation!

@@ -11,11 +11,17 @@ hide_table_of_contents: false

---

## Background

As the Internet of Everything gradually becomes ubiquitous, the computing power of edge devices keeps growing. How to leverage the advantages of cloud computing to satisfy complex and diverse edge application scenarios and extend cloud-native technology to the edge has become a new technical challenge, and "cloud-edge collaboration" is becoming a new technical focus. This article introduces a cloud-edge collaboration solution built around two CNCF open-source projects, KubeVela and OpenYurt, using a real Helm application delivery scenario.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive way. Relying on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt brings edge computing power into the Kubernetes infrastructure for unified management, and provides capabilities such as edge autonomy, efficient O&M channels, unitized management of edge nodes, edge traffic topology, secure containers, edge Serverless/FaaS, and heterogeneous resource support. In a Kubernetes-native way, it builds a unified infrastructure for cloud-edge collaboration.

KubeVela was incubated from the OAM model and focuses on helping enterprises build unified application delivery and management capabilities. It shields business developers from the complexity of the underlying infrastructure, provides flexible extensibility, and offers out-of-the-box features such as microservice container management, cloud resource management, versioning and canary release, scaling, observability, resource dependency orchestration and data passing, multi-cluster delivery, CI integration, and GitOps. It maximizes developers' self-service application management efficiency while meeting the platform's long-term extensibility requirements.

<!--truncate-->

## What Problems Can OpenYurt Combined with KubeVela Solve?

As mentioned above, OpenYurt enables edge nodes to join the cluster, allowing users to manage these edge nodes by operating native Kubernetes. Edge nodes usually represent computing resources closer to users, such as virtual machines or physical servers in a nearby machine room. After being added through OpenYurt, these edge nodes become Nodes that can be used in Kubernetes, and OpenYurt uses NodePools to describe groups of edge nodes in the same region. Once basic resource management is in place, we usually have the following core requirements for orchestrating and deploying applications to different NodePools in a cluster:

1. **Unified configuration:** Manually modifying every resource to be distributed requires a lot of manual intervention, which is error-prone and easy to miss. We need a unified way to configure parameters, which not only facilitates batch management operations but also integrates with security and auditing to satisfy enterprise risk-control and compliance requirements.

@@ -21,8 +21,11 @@ Serverless Application Engine (SAE) is a Kubernetes-based cloud product for business applications and microservice architectures

![SAE product architecture](/img/sae/arch.jpg)

As shown in the architecture diagram above, users can host many different types of business applications on SAE. Under the hood, SAE handles the related business logic and interacts with Kubernetes resources through a Java business layer. At the very bottom, it relies on a highly available, maintenance-free, pay-as-you-go elastic resource pool.

Under this architecture, SAE mainly relies on its Java business layer to provide capabilities to users. While it lets users deploy applications with one click, it also brings quite a few challenges.

<!--truncate-->

With the continuous growth of Serverless, SAE has run into three major challenges:

1. SAE's internal engineers face complex and non-standardized O&M processes in daily development and operations. How can these complex operations be automated to reduce the human cost?

@@ -20,6 +20,8 @@ The CNCF Technical Oversight Committee (TOC) has voted to accept KubeVela

The KubeVela project evolved from the [oam-kubernetes-runtime](https://github.com/crossplane/oam-kubernetes-runtime) project, which was launched in the community by [developers](https://github.com/kubevela/community/blob/main/OWNERS.md#bootstrap-contributors) from eight different organizations, including Alibaba Cloud, Microsoft, and Upbound. It was officially open sourced in November 2020, released v1.0 in April 2021, and joined the CNCF as a sandbox project in June 2021. The project now has more than 260 [contributors](https://kubevela.devstats.cncf.io/d/22/prs-authors-table?orgId=1) from around the world, from organizations such as China Merchants Bank, Didi, JD.com, JiHu GitLab, SHEIN, and more.

<!--truncate-->

"KubeVela pioneered a path for delivering applications across multi-cloud/multi-cluster environments with unified and extensible abstractions," said Lei Zhang, CNCF TOC sponsor. "This innovation unlocked the next-generation software delivery experience and filled the 'last mile' gap in application delivery in the existing community ecosystem, focusing on simpler 'deployment' rather than complex 'orchestration'. We are excited to see more application-centric tools/platforms emerge in the CNCF community, and we look forward to watching KubeVela's adoption grow to a new level in the fast-growing application delivery ecosystem."

KubeVela has been adopted by many companies and runs in production on most public clouds as well as on-premises. Most adopters use KubeVela as their internal "PaaS", as part of their CI/CD pipelines, or as an extensible DevOps kernel for building their own IDP. Public [adopters](https://github.com/kubevela/community/blob/main/ADOPTERS.md) include Alibaba, which uses KubeVela as the core for delivering and managing applications across hybrid environments; ByteDance, which uses KubeVela and Crossplane to provide an advanced gaming PaaS; China Merchants Bank, which uses KubeVela to build a hybrid-cloud application platform unifying the whole process from building to releasing to running; and many more companies across other industries.
