mirror of https://github.com/fluxcd/flagger.git
Split the CRD docs into canary target, service, status, analysis

parent 0e81b5f4d2
commit 2837d4407e

@@ -122,7 +122,7 @@ canary analysis and can be used for conformance testing or load testing.

If port discovery is enabled, Flagger scans the deployment spec and extracts the container
ports, excluding the port specified in the canary service and the Envoy sidecar ports.
These ports will be used when generating the ClusterIP services.

For a deployment that exposes two ports:
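
A minimal sketch of such a deployment follows; the image and the `http`/`http-metrics` port names are assumptions for illustration, not taken from the docs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo   # assumed image
          ports:
            - name: http                # matches service.targetPort, so excluded from discovery
              containerPort: 9898
            - name: http-metrics        # extra port picked up by port discovery
              containerPort: 9797
```
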
@@ -7,7 +7,7 @@ to drive the canary analysis and promotion.

### Canary Custom Resource

For a deployment named _podinfo_, a canary can be defined using Flagger's custom resource:

```yaml
apiVersion: flagger.app/v1alpha3
```

@@ -19,16 +19,8 @@ spec:

```yaml
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    name: podinfo
    port: 9898
    portName: http
    targetPort: 9898
    portDiscovery: true
  canaryAnalysis:
    interval: 1m
    threshold: 10
```

@@ -48,7 +40,27 @@ spec:

```yaml
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```

### Canary target

A canary resource can target a Kubernetes Deployment or DaemonSet.

Kubernetes Deployment example:

```yaml
spec:
  progressDeadlineSeconds: 60
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
```

Based on the above configuration, Flagger generates the following Kubernetes objects:

* `deployment/<targetRef.name>-primary`
* `hpa/<autoscalerRef.name>-primary`
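
For the _podinfo_ example above that means `deployment/podinfo-primary` and `hpa/podinfo-primary`. A DaemonSet target is referenced the same way; a sketch, assuming a DaemonSet also named podinfo:

```yaml
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet
    name: podinfo
```
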
@@ -57,9 +69,6 @@ and the target deployment is scaled to zero.

Flagger will detect changes to the target deployment (including secrets and configmaps) and will perform a
canary analysis before promoting the new version as primary.

If the target deployment uses secrets and/or configmaps, Flagger will create a copy of each object using the `-primary`
suffix and will reference these objects in the primary deployment. You can disable the secrets/configmaps tracking
with the `-enable-config-tracking=false` command flag in the Flagger deployment manifest under containers args.

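As a sketch, disabling the tracking in the Flagger deployment manifest would look roughly like this (container name and surrounding fields assumed):

```yaml
containers:
  - name: flagger
    args:
      - -enable-config-tracking=false
```
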
@@ -87,10 +96,37 @@ If you use a different convention you can specify your label with

the `-selector-labels=my-app-label` command flag in the Flagger deployment manifest under containers args
or by setting `--set selectorLabels=my-app-label` when installing Flagger with Helm.

The autoscaler reference is optional; when specified, Flagger will pause the traffic increase while the
target and primary deployments are scaled up or down. HPA can help reduce the resource usage during the canary analysis.

The progress deadline represents the maximum time in seconds for the canary deployment to make progress
before it is rolled back; it defaults to ten minutes.

### Canary service

A canary resource dictates how the target workload is exposed inside the cluster.
The canary target should expose a TCP port that will be used by Flagger to create the ClusterIP Services.

```yaml
spec:
  service:
    name: podinfo
    port: 9898
    portName: http
    targetPort: 9898
    portDiscovery: true
```

The container port from the target workload should match the `service.port` or `service.targetPort`.
The `service.name` is optional and defaults to `spec.targetRef.name`.
The `service.targetPort` can be a container port number or name.
The `service.portName` is optional (defaults to `http`); if your workload uses gRPC, set the port name to `grpc`.

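For instance, a gRPC workload's service block might look like this (a sketch; the port number is an assumption):

```yaml
spec:
  service:
    port: 9999
    portName: grpc
```
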
If port discovery is enabled, Flagger scans the target workload and extracts the container
ports, excluding the port specified in the canary service and the service mesh sidecar ports.
These ports will be used when generating the ClusterIP services.

Based on the canary spec service, Flagger creates the following Kubernetes ClusterIP service:

* `<service.name>.<namespace>.svc.cluster.local`
  selector `app=<name>-primary`
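
For the example above, the generated service would look roughly like this (a sketch inferred from the spec, not the literal object Flagger emits):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    app: podinfo-primary    # routes stable traffic to the primary pods
  ports:
    - name: http
      port: 9898
      targetPort: 9898
```
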
@@ -30,7 +30,11 @@ helm upgrade -i flagger flagger/flagger \

```bash
--set metricsServer=http://prometheus:9090
```

Note that Flagger depends on Istio telemetry and Prometheus; if you're installing Istio with istioctl
then you should be using the [default profile](https://istio.io/docs/setup/additional-setup/config-profiles/).

For Istio multi-cluster shared control plane you can install Flagger
on each remote cluster and set the Istio control plane host cluster kubeconfig:

```bash
helm upgrade -i flagger flagger/flagger \
```

@@ -42,7 +46,9 @@ helm upgrade -i flagger flagger/flagger \

```bash
--set istio.kubeconfig.key=kubeconfig
```

Note that the Istio kubeconfig must be stored in a Kubernetes secret with a data key named `kubeconfig`.
For more details on how to configure Istio multi-cluster credentials
read the [Istio docs](https://istio.io/docs/setup/install/multicluster/shared-vpn/#credentials).

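A sketch of such a secret (the secret name and namespace are assumptions; the value holds the base64-encoded kubeconfig):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-kubeconfig       # assumed name, referenced via istio.kubeconfig.secretName
  namespace: istio-system
data:
  kubeconfig: <base64-encoded kubeconfig>
```
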
Deploy Flagger for Linkerd:

@@ -114,7 +120,8 @@ helm delete flagger

The command removes all the Kubernetes components associated with the chart and deletes the release.

> **Note** that on uninstall the Canary CRD will not be removed. Deleting the CRD will make Kubernetes
> remove all the objects owned by Flagger like Istio virtual services, Kubernetes deployments and ClusterIP services.

If you want to remove all the objects created by Flagger you have to delete the Canary CRD with kubectl:

@@ -169,7 +176,8 @@ kubectl apply -k github.com/weaveworks/flagger//kustomize/istio

This deploys Flagger in the `istio-system` namespace and sets the metrics server URL to Istio's Prometheus instance.

Note that you'll need kubectl 1.14 to run the above command, or you can download
the [kustomize binary](https://github.com/kubernetes-sigs/kustomize/releases) and run:

```bash
kustomize build github.com/weaveworks/flagger//kustomize/istio | kubectl apply -f -
```

@@ -205,9 +213,11 @@ Install Flagger and Prometheus:

```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/kubernetes
```

This deploys Flagger and Prometheus in the `flagger-system` namespace, sets the metrics server URL
to `http://flagger-prometheus.flagger-system:9090` and the mesh provider to `kubernetes`.

The Prometheus instance has a data retention of two hours and is configured to scrape all pods in your cluster
that have the `prometheus.io/scrape: "true"` annotation.

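To opt a workload into scraping, the annotation goes on the pod template; a minimal sketch:

```yaml
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
```
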
To target a different provider you can specify it in the canary custom resource:
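
A sketch of such an override (the `provider` field is taken from Flagger's canary spec; the value shown is just an example):

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: istio    # example value; overrides the global mesh provider
```
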
@@ -265,5 +275,6 @@ Install Flagger with Slack:

```bash
kubectl apply -k .
```

If you want to use MS Teams instead of Slack, replace `-slack-url` with `-msteams-url` and set the webhook address
to `https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK`.
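
A sketch of the corresponding container args in the kustomize patch (flag name from the text above; surrounding fields assumed):

```yaml
args:
  - -msteams-url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```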