sync commit 8f67454396a2c93442c9d0558465630fdd6e7073 from kubevela-refs/heads/master

This commit is contained in:
kubevela-bot 2021-05-17 14:42:50 +00:00
parent a3956f353a
commit ed4a0ede6b
9 changed files with 137 additions and 114 deletions

View File

@ -15,7 +15,7 @@ vela cap center config <centerName> <centerURL> [flags]
### Examples
```
vela cap center config mycenter https://github.com/oam-dev/catalog/cap-center
vela cap center config mycenter https://github.com/oam-dev/catalog/tree/master/registry
```
### Options
@ -35,4 +35,4 @@ vela cap center config mycenter https://github.com/oam-dev/catalog/cap-center
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021
###### Auto generated by spf13/cobra on 6-May-2021

View File

@ -4,32 +4,15 @@ title: How it Works
In this documentation, we will explain the core idea of KubeVela and clarify some technical terms that are widely used in the project.
## Architecture
The overall architecture of KubeVela is shown as below:
![alt](resources/arch.png)
### Control Plane Cluster
The main part of KubeVela. It is the component that users will interact with and where the KubeVela controllers are running on. Control plane cluster will deploy the application to multiple *runtime clusters*.
### Runtime Clusters
Runtime clusters are where the applications are actually running on. The needed addons in runtime cluster are automatically discovered and installed with leverage of [CRD Lifecycle Management (CLM)](https://github.com/cloudnativeapp/CLM).
## API
On control plane cluster, KubeVela introduces [Open Application Model (OAM)](https://oam.dev) as the higher level API. Hence users of KubeVela only work on application level with a consistent experience, regardless of the complexity of heterogeneous runtime environments.
This API could be explained as below:
On the control plane, KubeVela introduces [Open Application Model (OAM)](https://oam.dev) as the main API to model a full deployment of a modern microservice application. This API could be explained as below:
![alt](resources/concepts.png)
In detail:
- Components - deployable/provisionable entities that compose your application.
- e.g. a Helm chart, a Kubernetes workload, a CUE or Terraform module, or a cloud database instance etc.
- e.g. a Helm chart, a Kubernetes workload, a Terraform module, or a cloud database instance etc.
- Traits - attachable features that will *overlay* given component with operational behaviors.
- e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies etc.
- Application - full description of your application deployment assembled with components and traits.
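To make these terms concrete, below is a minimal sketch of an `Application` that assembles one `webservice` component with a `scaler` trait. The component type, trait type, and property values are illustrative only; the exact capabilities available depend on what your platform team has installed.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: sample-app
spec:
  components:
    - name: frontend          # a deployable unit of the application
      type: webservice        # component type provided by the platform
      properties:
        image: nginx
      traits:
        - type: scaler        # operational behavior overlaid on the component
          properties:
            replicas: 3
```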
@ -45,24 +28,20 @@ To ensure simple yet consistent user experience across hybrid environments. Kube
- **Application Team**
- Choose an environment, assemble the application with components and traits as needed, and deploy it to the target environment.
> Note that either platform team or application team application will only talk to the control plane cluster. KubeVela is designed to hide the details of runtime clusters except for debugging or verifying purpose.
> Note that both the platform team and the application team will only talk to the control plane cluster. KubeVela is designed to hide the details of runtime infrastructures except for debugging or verification purposes.
Below is what this workflow looks like:
![alt](resources/how-it-works.png)
All the capability building blocks can be updated or extended easily at any time since they are fully programmable (currently support CUE, Terraform and Helm).
All the capability building blocks can be updated or extended easily at any time since they are fully programmable via CUE.
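For instance, a trait is just a piece of CUE that patches the component's workload. The sketch below is a simplified illustration modeled on the trait definitions shown later in this documentation; `my-scaler` is a hypothetical name, and the real built-in scaler trait may differ in details.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: my-scaler              # hypothetical trait name, for illustration only
spec:
  appliesToWorkloads:
    - deployments.apps
  extension:
    template: |-
      // CUE template: expose a `replicas` parameter and patch it onto the workload
      patch: spec: replicas: parameter.replicas
      parameter: replicas: *1 | int
```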
## Environment
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "environments". Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
Currently, a KubeVela `environment` only maps to a Kubernetes namespace. In the future, an `environment` will contain multiple clusters.
## What's Next
Here are some recommended next steps:
- Learn how to [deploy an application](end-user/application) and understand how it works.
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!
- Learn KubeVela's [core concepts](./concepts)
- Learn how to [deploy an application](end-user/application) in detail and understand how it works.

View File

@ -2,7 +2,7 @@
title: Application
---
This documentation will walk through how to use KubeVela to design a simple application without any placement rule.
This documentation will walk through how to use KubeVela to design a simple application without any policies or placement rules defined.
> Note: since you didn't declare a placement rule, KubeVela will deploy this application directly to the control plane cluster (i.e. the cluster your `kubectl` is talking to). This is also the case if you are using a local cluster such as KinD or Minikube to try out KubeVela.
@ -75,10 +75,10 @@ Traits are platform provided features that could *overlay* a given component wit
```shell
$ kubectl get trait -n vela-system
NAME        APPLIES-TO            DESCRIPTION
cpuscaler   [webservice worker]   configure k8s HPA with CPU metrics for Deployment
ingress     [webservice worker]   Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
scaler      [webservice worker]   Configures replicas for your service.
sidecar     [webservice worker]   inject a sidecar container into your app
cpuscaler   [webservice worker]   Automatically scale the component based on CPU usage.
ingress     [webservice worker]   Enable public web traffic for the component.
scaler      [webservice worker]   Manually scale the component.
sidecar     [webservice worker]   Inject a sidecar container to the component.
```
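Any of these traits is attached by adding an entry under a component's `traits` field in the `Application` spec. Below is an illustrative fragment of `spec.components`; the `name` and `image` properties of the sidecar trait are assumed from the `fluentd` container that appears in the Verify section later.
```yaml
# Illustrative fragment only: attach the sidecar trait to a component
components:
  - name: frontend
    type: webservice
    properties:
      image: nginx
    traits:
      - type: sidecar
        properties:
          name: sidecar-test   # injected container name (assumed property)
          image: fluentd       # injected container image (assumed property)
```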
Let's check the specification of the `sidecar` trait.
@ -116,7 +116,7 @@ spec:
properties:
image: nginx
traits:
- type: cpuscaler # Assign a HPA to scale the component by CPU usage
- type: cpuscaler # Automatically scale the component by CPU usage after it is deployed
properties:
min: 1
max: 10
@ -218,48 +218,3 @@ website-v1 35m
```
Furthermore, the system will decide how (and whether) to roll out the application based on the attached [rollout plan](scopes/rollout-plan).
### Verify
<details>
On the runtime cluster, you can see a Kubernetes Deployment named `frontend` running, with its port exposed and a `fluentd` container injected.
```shell
$ kubectl get deploy frontend
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   1/1     1            1           97s
```
```shell
$ kubectl get deploy frontend -o yaml
...
      spec:
        containers:
        - image: nginx
          imagePullPolicy: Always
          name: frontend
          ports:
          - containerPort: 80
            protocol: TCP
        - image: fluentd
          imagePullPolicy: Always
          name: sidecar-test
...
```
Another Deployment named `backend` is also running.
```shell
$ kubectl get deploy backend
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
backend   1/1     1            1           111s
```
An HPA was also created by the `cpuscaler` trait.
```shell
$ kubectl get HorizontalPodAutoscaler frontend
NAME       REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
frontend   Deployment/frontend   <unknown>/50%   1         10        1          101m
```
</details>

View File

@ -7,13 +7,13 @@ import TabItem from '@theme/TabItem';
> For upgrading existing KubeVela, please read the [upgrade guide](./advanced-install#upgrade).
## 1. Choose Kubernetes Cluster
## 1. Choose Control Plane Cluster
Requirements:
- Kubernetes cluster >= v1.15.0
- `kubectl` installed and configured
KubeVela is a simple custom controller that can be installed on any Kubernetes cluster including managed offerings or your own clusters. The only requirement is please ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
KubeVela relies on Kubernetes as its control plane. The control plane could be any managed Kubernetes offering or your own cluster. The only requirement is to ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
For local deployment and testing, you could use `minikube` or `kind`.
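If you go the `kind` route, keep the ingress-nginx requirement above in mind: the cluster needs host ports 80/443 mapped so the ingress controller can receive traffic. Below is a configuration sketch along the lines of kind's own ingress guide; the file name is arbitrary, and after the cluster is up you would install ingress-nginx following the guide linked above.
```yaml
# kind-config.yaml — create the cluster with: kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80    # expose HTTP on the host for ingress-nginx
        hostPort: 80
        protocol: TCP
      - containerPort: 443   # expose HTTPS on the host for ingress-nginx
        hostPort: 443
        protocol: TCP
```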

View File

@ -8,19 +8,13 @@ slug: /
## Motivation
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common abstraction layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does introduce extra complexity to application developers, namely understanding the concepts of pods, port exposing, privilege escalation, resource claims, CRD, and so on. We've seen the nontrivial learning curve and the lack of developer-facing abstraction have impacted user experiences, slowed down productivity, led to unexpected errors or misconfigurations in production. People start to question the value of this revolution: "why am I bothered with all these details?".
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does not introduce abstractions to model software deployment on top of hybrid environments. We've seen that the lack of such application-level context has impacted user experiences, slowed down productivity, and led to unexpected errors or misconfigurations in production.
On the other hand, abstracting Kubernetes to serve developers' requirements is a highly opinionated process, and the resultant abstractions would only make sense had the decision makers been the platform team. Unfortunately, the platform team today face the following dilemma:
*There is no tool or framework for them to build user friendly yet highly extensible abstractions for application management*.
Thus, many application platforms today are essentially restricted abstractions with in-house add-on mechanisms despite the extensibility of Kubernetes. This makes extending such platforms for developers' requirements or to wider scenarios almost impossible, not to mention taking the full advantage of the rich Kubernetes ecosystems.
In the end, developers complain those platforms are too rigid and slow in response to feature requests or improvements. The platform team do want to help but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
On the other hand, modeling the deployment of a modern microservice application is a highly opinionated and fragmented process today. Thus, most solutions that aim to solve the above problem (although built with Kubernetes) are essentially restricted systems and barely extensible. As the needs of your application grow, they are almost certain to outgrow the capabilities of such systems. Application teams complain they are too rigid and slow to respond to feature requests or improvements. The platform team does want to help, but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
## What is KubeVela?
For platform team, KubeVela serves as a framework that relieves the pains of building modern application platforms by doing the following:
KubeVela is a modern application platform that aims to make deploying and managing applications across hybrid, multi-cloud environments easier and faster by doing the following:
**Application Centric** - KubeVela introduces a consistent yet application-centric API to capture a full deployment of microservices on top of hybrid environments. No infrastructure-level concerns, simply deploy.
@ -28,39 +22,49 @@ For platform team, KubeVela serves as a framework that relieves the pains of bui
**Runtime Agnostic** - KubeVela is built with Kubernetes as the control plane but is adaptable to any runtime as the data plane. It can deploy (and manage) diverse workload types such as containers, cloud functions, databases, or even EC2 instances across hybrid environments.
With KubeVela, the platform team finally have the tooling supports to design easy-to-use application platform with high confidence and low turn around time.
## Architecture
For end-users (e.g. the application team), this platform will enable them to design and ship applications to hybrid environments with minimal effort; instead of managing a handful of infrastructure details, a simple application definition that can be easily integrated with any CI/CD pipeline is all they need.
The overall architecture of KubeVela is shown below:
![alt](resources/arch.png)
### Control Plane
The control plane is where KubeVela itself lives. As the project's name implies, KubeVela by design leverages Kubernetes as the control plane. This is the key to how KubeVela brings full *automation* and strong *determinism* to application delivery at scale. Users will interact with KubeVela via the application-centric API to model the application deployment, and KubeVela will distribute it to the target *runtime infrastructure* per the policies and rules declared by users.
### Runtime Infrastructures
Runtime infrastructures are where the applications are actually running. KubeVela allows you to model and deploy applications to any Kubernetes based infrastructure (either local, managed offerings, or IoT/Edge/On-Premise ones), or to public cloud platforms.
## Comparisons
### KubeVela vs. Platform-as-a-Service (PaaS)
The typical examples are Heroku and Cloud Foundry. They provide full application management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
The typical examples are Heroku and Cloud Foundry. They provide full application deployment and management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
The biggest difference, though, lies in **flexibility**.
KubeVela enables you to serve end users with programmable building blocks which are fully flexible and coded by yourself. Comparing to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints in the type of supported applications and capabilities, and as application needs grows, you always outgrow the capabilities of the PaaS system - this will never happen in KubeVela platform.
KubeVela enables you to serve end users with programmable building blocks (based on [CUE](https://cuelang.org/)) which are fully flexible and can be extended at any time. Compared to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints on the type of supported applications and capabilities, and as application needs grow, you will always outgrow the capabilities of the PaaS system - this will never happen with the KubeVela platform.
So think of KubeVela as a Heroku that is fully extensible to serve your needs as you grow.
So think of KubeVela as a Heroku that remains fully extensible as your needs grow.
### KubeVela vs. Serverless
Serverless platforms such as AWS Lambda provide an extraordinary user experience and agility for deploying serverless applications. However, those platforms impose even more constraints on extensibility. They are arguably "hard-coded" PaaS.
KubeVela can easily deploy Kubernetes based serverless workloads such as Knative, OpenFaaS by referencing them as new components. Even for AWS Lambda, KubeVela can also deploy such workload leveraging Terraform based component.
KubeVela can easily deploy both Kubernetes-based serverless workloads such as Knative/OpenFaaS and cloud functions such as AWS Lambda. Simply register what you want to deploy as a "component".
### KubeVela vs. Platform agnostic developer tools
The typical example is Hashicorp's Waypoint. Waypoint is a developer facing tool which introduces a consistent workflow (i.e., build, deploy, release) to ship applications on top of different platforms.
KubeVela can be integrated with such tools seamlessly. In this case, developers would use the Waypoint workflow as the UI to deploy and manage applications with KubeVela's abstractions (e.g. applications, components, traits etc).
KubeVela can be integrated with such tools seamlessly. In this case, developers would use the Waypoint workflow as the UI to deploy and manage applications across hybrid environments with KubeVela's abstractions (e.g. applications, components, traits etc).
### KubeVela vs. Helm
Helm is a package manager for Kubernetes that packages, installs, and upgrades a set of YAML files for Kubernetes as a unit.
KubeVela as a modern deployment system can naturally deploys Helm charts. A common example is you could easily use KubeVela to declare and deploy an application which is composed by a WordPress Helm chart and a AWS RDS instance defined by Terraform, or distribute the Helm chart to multiple clusters.
KubeVela, as a modern deployment system, can naturally deploy Helm charts across hybrid environments. For example, you could easily use KubeVela to declare and deploy an application composed of a WordPress Helm chart and an AWS RDS instance defined by Terraform, or distribute the Helm chart to multiple clusters.
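Conceptually, such a composition is still just one `Application` with two components of different types. The sketch below is purely illustrative: the `helm` and `aws-rds` component types, and every property under them, are placeholders for whatever chart-backed and Terraform-backed component definitions your platform has actually registered.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: wordpress-with-rds
spec:
  components:
    - name: wordpress
      type: helm                 # hypothetical Helm-chart-backed component type
      properties:
        repoUrl: https://charts.bitnami.com/bitnami   # illustrative values only
        chart: wordpress
    - name: database
      type: aws-rds              # hypothetical Terraform-backed component type
      properties:
        instanceClass: db.t3.micro                    # illustrative values only
```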
KubeVela also leverages Helm to manage the capability addons in runtime clusters.
@ -68,6 +72,13 @@ KubeVela also leverages Helm to manage the capability addons in runtime clusters
KubeVela is a Kubernetes add-on for building a developer-centric deployment system. It leverages [Open Application Model](https://github.com/oam-dev/spec) and the native Kubernetes extensibility to resolve a hard problem - making shipping applications enjoyable on Kubernetes.
## Getting Started
Now let's [get started](./quick-start) with KubeVela!
## What's Next
Here are some recommended next steps:
- [Get started](./quick-start) with KubeVela.
- Learn KubeVela's [core concepts](./concepts).
- Learn how to [deploy an application](end-user/application) in detail and understand how it works.
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!

View File

@ -97,15 +97,17 @@ By default, patch trait in KubeVela leverages the CUE `merge` operation. It has
### Strategy Patch
The `strategy patch` is useful for patching array lists.
Strategy patch takes effect by adding annotations, and it supports the following two approaches.
> Note that this is not a standard CUE feature; KubeVela enhanced CUE for this case.
With `//+patchKey=<key_name>` annotation, merging logic of two array lists will not follow the CUE behavior. Instead, it will treat the list as object and use a strategy merge approach:
#### 1. With `+patchKey=<key_name>` annotation
This is useful for patching array lists: the merging logic of two array lists will not follow the default CUE behavior. Instead, it will treat the list as an object and use a strategic merge approach:
- if a duplicated key is found, the patch data will be merged with the existing values;
- if no duplication is found, the patch will be appended to the array list.
The example of strategy patch trait will like below:
An example of a strategy patch trait with `patchKey` looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
@ -174,6 +176,76 @@ spec:
So the above trait, which attaches a Service to a given component instance, will patch a corresponding label to the workload first and then render the Service resource based on the template in `outputs`.
#### 2. With `+patchStrategy=retainkeys` annotation
This is similar to the [retainKeys](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy) strategy in the Kubernetes strategic merge patch.
In scenarios where the entire object needs to be replaced, the `retainKeys` strategy is the best choice. An example follows.
Assume the following Deployment is the base resource:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: retainkeys-demo
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 30%
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: retainkeys-demo-ctr
        image: nginx
```
Now, to replace the rolling update strategy with a new strategy, you can write the patch trait like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: recreate
spec:
  appliesToWorkloads:
    - deployments.apps
  extension:
    template: |-
      patch: {
        spec: {
          // +patchStrategy=retainKeys
          strategy: type: "Recreate"
        }
      }
```
Then the base resource becomes as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: retainkeys-demo
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: retainkeys-demo-ctr
        image: nginx
```
## More Use Cases of Patch Trait
The patch trait is in general pretty useful for separating operational concerns from the component definition. Here are some more examples.

View File

@ -10,15 +10,21 @@ Make sure you have finished and verified the installation following [this guide]
## Step 2: Deploy Your First Application
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
```bash script
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
```
```console
application.core.oam.dev/first-vela-app created
```
The above command applies an application to KubeVela and lets it distribute the application to the proper runtime infrastructure.
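To see what is being applied, the manifest is a single `Application` resource of roughly the following shape. This is a paraphrased sketch for orientation only: the file at the URL above is authoritative, and the concrete values here are illustrative and may drift over time.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-vela-app
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: crccheck/hello-world   # tiny demo web server
        port: 8000
      traits:
        - type: ingress               # exposes the service via the ingress controller
          properties:
            domain: testsvc.example.com
            http:
              "/": 8000
```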
Check the status until we see `status` is `running` and services are `healthy`:
```bash
$ kubectl get application first-vela-app -o yaml
```bash script
kubectl get application first-vela-app -o yaml
```
```console
apiVersion: core.oam.dev/v1beta1
kind: Application
...
@ -34,10 +40,12 @@ status:
status: running
```
If your cluster has a working ingress, you can visit the service.
You can now directly visit the application (regardless of where it is running).
```bash script
curl -H "Host:testsvc.example.com" http://<your ip address>/
```
$ curl -H "Host:testsvc.example.com" http://<your ip address>/
```console
<xmp>
Hello World
@ -58,9 +66,6 @@ Hello World
Here are some recommended next steps:
- Learn KubeVela starting from its [core concepts](./concepts)
- Learn KubeVela's [core concepts](./concepts)
- Learn more details about [`Application`](end-user/application) and what it can do for you.
- Learn how to attach [rollout plan](end-user/scopes/rollout-plan) to this application, or [place it to multiple runtime clusters](end-user/scopes/appdeploy).
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!

Binary file not shown.

Before

Width:  |  Height:  |  Size: 266 KiB

After

Width:  |  Height:  |  Size: 429 KiB

View File

@ -40,6 +40,7 @@ module.exports = {
'end-user/traits/annotations-and-labels',
'end-user/traits/sidecar',
'end-user/traits/volumes',
'end-user/traits/service-binding',
'end-user/traits/more',
]
},