Limitations and Known Issues
Here are some known issues when using a Helm chart as an application component. Please note that most of these restrictions will be fixed over time.
Only one main workload in the chart
The chart must have exactly one workload that is regarded as the main workload. In this context, the main workload is the workload that will be tracked by KubeVela controllers, have traits applied to it, and be added into scopes. Only the main workload benefits from KubeVela features such as rollout, revision, and traffic management.
To tell KubeVela which one is the main workload, you must follow these two steps:
1. Declare the main workload's resource definition
The field .spec.definitionRef in WorkloadDefinition is used to record the resource definition of the main workload. The name should be in the format <resource>.<group>.
For example, the Deployment resource should be defined as:
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
...
spec:
  definitionRef:
    name: deployments.apps
    version: v1
The CloneSet workload resource should be defined as:
...
spec:
  definitionRef:
    name: clonesets.apps.kruise.io
    version: v1alpha1
2. Qualified full name of the main workload
The name of the main workload should be templated with a default fully qualified app name. DO NOT assign any value to .Values.fullnameOverride.
Also, Helm highly recommends that new charts are created via the helm create command, so that the template names are automatically defined following this best practice.
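For reference, the scaffold generated by helm create defines this naming helper in templates/_helpers.tpl. The snippet below is a trimmed sketch of that pattern (the chart name mychart is a placeholder; the real scaffold also reuses the release name when it already contains the chart name):
{{/* templates/_helpers.tpl -- default fully qualified app name (trimmed) */}}
{{- define "mychart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
The main workload template then takes its name from this helper:
# templates/deployment.yaml -- the main workload is named by the helper
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}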
Upgrade the application
Rollout strategy
For now, Helm based components cannot benefit from the application-level rollout strategy. Currently, an in-place upgrade performed by modifying the application specification directly is the only way to upgrade Helm based components; no advanced rollout strategy can be assigned to them. Please check this sample.
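For illustration only, an in-place upgrade under this limitation is just an edit-and-reapply of the Application. The sketch below assumes the webapp-chart definition shown later on this page; the application and component names are placeholders and the value keys follow the podinfo chart:
# A hypothetical Application using a Helm based component (type webapp-chart).
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: demo-podinfo
      type: webapp-chart
      settings:
        image:
          tag: "5.1.2"   # edit this value and apply the file again to upgrade in place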
Changing settings will trigger Helm release upgrade
For a Helm based component, .spec.components.settings is the way users override the default values of the chart, so any change applied to settings will trigger a Helm release upgrade.
This process is handled by Helm and Flux2/helm-controller, so you can define remediation strategies in the schematic, following the fluxcd/helmrelease API doc and spec doc, in case a failure happens during this upgrade.
For example:
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: webapp-chart
spec:
  ...
  schematic:
    helm:
      release:
        chart:
          spec:
            chart: "podinfo"
            version: "5.1.4"
        upgrade:
          remediation:
            retries: 3
            remediationStrategy: rollback
      repository:
        url: "http://oam.dev/catalog/"
Note: currently, it's hard to get helpful information from a live Helm release to figure out what happened if an upgrade failed. We will enhance observability to help users track the status of a Helm release at the application level.
Changing traits may make Pods restart
Traits work on a Helm based component in the same way as on a CUE based component, i.e. changes to traits may impact the main workload instance. Hence, the Pods belonging to this workload instance may restart twice during an upgrade: once triggered by the Helm upgrade, and once more caused by the traits.
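As a sketch only (reusing the hypothetical Application above and assuming the built-in scaler trait is installed), a trait attached to a Helm based component looks like this; an edit that changes both settings and the trait's properties can therefore restart the Pods twice:
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: demo-podinfo
      type: webapp-chart      # Helm based component
      settings:
        image:
          tag: "5.1.2"        # changing this triggers a Helm release upgrade (first possible restart)
      traits:
        - name: scaler        # applied to the main workload by KubeVela (second possible restart)
          properties:
            replicas: 4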