fix docs and e2e install.sh

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
Sanskar Jaiswal 2022-02-18 17:48:44 +05:30 committed by Sanskar Jaiswal
parent 438877674a
commit ba4646cddb
3 changed files with 67 additions and 31 deletions


@ -8,10 +8,17 @@ This guide shows you how to use Gateway API and Flagger to automate canary deplo
Flagger requires a Kubernetes cluster **v1.16** or newer and any mesh/ingress that implements `v1alpha2` of the Gateway API. We'll be using Contour for the sake of this tutorial, but you can use any other implementation.
Install Contour with GatewayAPI and create a GatewayClass and a Gateway object:
Install the Gateway API CRDs:
```bash
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour/release-1.20/examples/render/contour-gateway.yaml
kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.1" \
| kubectl apply -f -
```
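As an optional sanity check (assuming `kubectl` can reach the cluster), you can confirm the `v1alpha2` CRDs registered; expect `gatewayclasses`, `gateways` and `httproutes` among them:

```shell
# List the Gateway API CRDs installed by the kustomize overlay above.
kubectl get crd | grep gateway.networking.k8s.io
```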
Install a cluster-wide GatewayClass, a Gateway belonging to that GatewayClass, and the Contour components in the `projectcontour` namespace:
```bash
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour/release-1.20/examples/render/contour.yaml
```
Install Flagger in the `flagger-system` namespace:
@ -42,7 +49,7 @@ Deploy the load testing service to generate traffic during the canary analysis:
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Create metric templates targeting the Prometheus server in the `flagger-system` namespace. The PromQL query below is meant for `Envoy`, but you can [change it to your ingress/mesh provider](https://docs.flagger.app/faq#metrics) accordingly.
Create metric templates targeting the Prometheus server in the `flagger-system` namespace. The PromQL queries below are meant for `Envoy`, but you can [change them to match your ingress/mesh provider](https://docs.flagger.app/faq#metrics) accordingly.
```yaml
apiVersion: flagger.app/v1beta1
@ -68,14 +75,14 @@ spec:
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: request-success-rate
name: error-rate
namespace: flagger-system
spec:
provider:
type: prometheus
address: http://flagger-prometheus:9090
query: |
sum(
100 - sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"{{ namespace }}_{{ target }}-canary_[0-9a-zA-Z-]+",
@ -100,7 +107,7 @@ Save the above resource as metric-templates.yaml and then apply it:
kubectl apply -f metric-templates.yaml
```
Create a canary custom resource \(replace example.com with your own domain\):
Create a canary custom resource \(replace "localproject.contour.io" with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
@ -150,10 +157,10 @@ spec:
# minimum req success rate (non 5xx responses)
# percentage (0-100)
templateRef:
name: request-success-rate
name: error-rate
namespace: flagger-system
thresholdRange:
min: 99
max: 1
interval: 1m
- name: latency
templateRef:
@ -199,6 +206,29 @@ service/podinfo-primary
httproutes.gateway.networking.k8s.io/podinfo
```
## Expose the app outside the cluster
Find the external address of Contour's Envoy load balancer:
```bash
export ADDRESS="$(kubectl -n projectcontour get svc/envoy -ojson \
| jq -r ".status.loadBalancer.ingress[].hostname")"
echo $ADDRESS
```
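The `jq` filter above assumes an AWS-style load balancer that reports a hostname; on GKE/AKS/DOKS the ingress status carries an `ip` field instead. A hedged sketch handling both shapes (the sample JSON is illustrative, not real cluster output):

```shell
# Fall back from .hostname to .ip so the same filter works on clouds
# that expose the load balancer as an IP address.
lb_status='{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.10"}]}}}'
echo "$lb_status" | jq -r \
  '.status.loadBalancer.ingress[0].hostname // .status.loadBalancer.ingress[0].ip'
# → 203.0.113.10
```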
Configure your DNS server with a CNAME record \(AWS\) or A record \(GKE/AKS/DOKS\) and point a domain, e.g. `app.example.com`, to the LB address.
Now you can access the podinfo UI using your domain address.
Note that you should use HTTPS when exposing production workloads on the internet. You can obtain free TLS certs from Let's Encrypt; read this [guide](https://github.com/stefanprodan/eks-contour-ingress) on how to configure cert-manager to secure Contour with TLS certificates.
If you're using a local cluster via kind/k3s, you can port-forward the Envoy LoadBalancer service:
```bash
kubectl port-forward -n projectcontour svc/envoy 8080:80
```
Now you can access the podinfo UI on `localhost:8080`.
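When testing through the port-forward, keep in mind that the HTTPRoute only matches the host configured in the canary (`localproject.contour.io` in this guide), so pass it explicitly; a sketch:

```shell
# Route matching is host-based, so set the Host header to the canary's
# configured domain when hitting the port-forwarded Envoy service.
curl -H "Host: localproject.contour.io" http://localhost:8080
```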
## Automated canary promotion
Trigger a canary deployment by updating the container image:
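A sketch of the trigger, using the same `stefanprodan/podinfo:3.1.1` tag as the e2e script in this commit:

```shell
# Updating the pod template's image kicks off Flagger's canary analysis.
kubectl -n test set image deployment/podinfo \
  podinfod=stefanprodan/podinfo:3.1.1
```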


@ -14,15 +14,17 @@ fi
mkdir -p ${REPO_ROOT}/bin
echo ">>> Installing Contour ${CONTOUR_VER}, Gateway API components ${GATEWAY_API_VER}"
# retry if it fails, creating a gateway object is flaky sometimes
until cd ${REPO_ROOT}/bin && kubectl apply -f \
https://raw.githubusercontent.com/projectcontour/contour/${CONTOUR_VER}/examples/render/contour-gateway.yaml; do
sleep 1
done
echo ">>> Installing Gateway API CRDs"
kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.1" \
| kubectl apply -f -
echo ">>> Installing Contour components, GatewayClass and Gateway"
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour/${CONTOUR_VER}/examples/render/contour-gateway.yaml
kubectl -n projectcontour rollout status deployment/contour
kubectl -n projectcontour get all
kubectl get gatewayclass -oyaml
kubectl -n projectcontour get gateway -oyaml
echo '>>> Installing Kustomize'
cd ${REPO_ROOT}/bin && \


@ -30,14 +30,14 @@ spec:
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: request-success-rate
name: error-rate
namespace: flagger-system
spec:
provider:
type: prometheus
address: http://flagger-prometheus:9090
query: |
sum(
100 - sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"{{ namespace }}_{{ target }}-canary_[0-9a-zA-Z-]+",
@ -71,7 +71,6 @@ spec:
progressDeadlineSeconds: 60
service:
port: 9898
targetPort: 9898
portName: http
hosts:
- localproject.contour.io
@ -84,12 +83,12 @@ spec:
maxWeight: 50
stepWeight: 10
metrics:
- name: request-success-rate
- name: error-rate
templateRef:
name: request-success-rate
name: error-rate
namespace: flagger-system
thresholdRange:
min: 99
max: 1
interval: 1m
- name: latency
templateRef:
@ -133,6 +132,11 @@ fi
echo '✔ Canary service custom metadata test passed'
if ! kubectl -n test get httproute podinfo -oyaml; then
echo "Could not find HTTPRoute podinfo"
exit 1
fi
echo '>>> Triggering canary deployment'
kubectl -n test set image deployment/podinfo podinfod=stefanprodan/podinfo:3.1.1
@ -198,12 +202,12 @@ spec:
threshold: 5
iterations: 5
metrics:
- name: request-success-rate
- name: error-rate
templateRef:
name: request-success-rate
name: error-rate
namespace: flagger-system
thresholdRange:
min: 99
max: 1
interval: 1m
- name: latency
templateRef:
@ -289,12 +293,12 @@ spec:
x-canary:
exact: "insider"
metrics:
- name: request-success-rate
- name: error-rate
templateRef:
name: request-success-rate
name: error-rate
namespace: flagger-system
thresholdRange:
min: 99
max: 1
interval: 1m
- name: latency
templateRef:
@ -347,7 +351,7 @@ spec:
apiVersion: apps/v1
kind: Deployment
name: podinfo
progressDeadlineSeconds: 60
progressDeadlineSeconds: 30
service:
port: 9898
targetPort: 9898
@ -363,13 +367,13 @@ spec:
maxWeight: 50
stepWeight: 10
metrics:
- name: request-success-rate
- name: error-rate
templateRef:
name: request-success-rate
name: error-rate
namespace: flagger-system
thresholdRange:
min: 99
interval: 1m
max: 1
interval: 30s
- name: latency
templateRef:
name: latency