Rework locality LB docs into tasks (#8402)

Removing the old locality docs entirely and replacing with tested tasks.
This commit is contained in:
Nathan Mittler 2020-11-24 09:31:15 -08:00 committed by GitHub
parent 4c544a8009
commit c5e8a9adc5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
18 changed files with 1079 additions and 118 deletions

View File

@ -44,7 +44,7 @@ Below is our list of existing features and their current phases. This informatio
| Gateway: Ingress, Egress for all protocols | Stable
| TLS termination and SNI Support in Gateways | Stable
| SNI (multiple certs) at ingress | Stable
| [Locality load balancing](/docs/ops/configuration/traffic-management/locality-load-balancing/) | Beta
| [Locality load balancing](/docs/tasks/traffic-management/locality-load-balancing/) | Beta
| Enabling custom filters in Envoy | Alpha
| CNI container interface | Alpha
| [Sidecar API](/docs/reference/config/networking/sidecar/) | Beta

View File

@ -1,110 +0,0 @@
---
title: Locality Load Balancing
description: Information on how to enable and understand Locality Load Balancing.
weight: 20
keywords: [locality,load balancing,priority,prioritized]
aliases:
- /help/ops/traffic-management/locality-load-balancing
- /help/ops/locality-load-balancing
- /help/tasks/traffic-management/locality-load-balancing
- /docs/ops/traffic-management/locality-load-balancing
owner: istio/wg-networking-maintainers
test: no
---
A locality defines a geographic location within your mesh using the following triplet:
- Region
- Zone
- Sub-zone
The geographic location typically represents a data center. Istio uses
this information to prioritize load balancing pools to control
the geographic location where requests are sent.
## Configuring locality load balancing
This feature is enabled by default. To disable locality load balancing,
pass the `--set meshConfig.localityLbSetting.enabled=false` flag when installing Istio.
## Requirements
Currently, the service discovery platform populates the locality automatically.
In Kubernetes, a pod's locality is determined via the [well-known labels for region and zone](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
on the node on which it is deployed. If you are using a hosted Kubernetes service, your cloud provider
should configure this for you. If you are running your own Kubernetes cluster you will need
to add these labels to your nodes. The sub-zone concept doesn't exist in Kubernetes.
As a result, Istio introduced the custom node label `topology.istio.io/subzone` to define a sub-zone.
In order for Istio to determine locality, a Service must be associated with the caller.
To determine when instances are unhealthy, the proxies require an [outlier detection](/docs/reference/config/networking/destination-rule/#OutlierDetection)
configuration in a destination rule for each service.
## Locality-prioritized load balancing
_Locality-prioritized load balancing_ is the default behavior for _locality load balancing_.
In this mode, Istio tells Envoy to prioritize traffic to the workload instances most closely matching
the locality of the Envoy sending the request. When all instances are healthy, requests
remain within the same locality. When instances become unhealthy, traffic spills over to
instances in the next prioritized locality. This behavior continues until all localities are
receiving traffic. You can find the exact percentages in the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/priority).
{{< warning >}}
If no outlier detection configurations are defined in destination rules, the proxy can't determine if an instance is healthy, and it
routes traffic globally even if you enabled **locality-prioritized** load balancing.
{{< /warning >}}
A typical prioritization for an Envoy with a locality of `us-west/zone2` is as follows:
- Priority 0: `us-west/zone2`
- Priority 1: `us-west/zone1`, `us-west/zone3`
- Priority 2: `us-east/zone1`, `us-east/zone2`, `eu-west/zone1`
The hierarchy of prioritization matches in the following order:
1. Region
1. Zone
1. Sub-zone
Proxies in the same zone but different regions are not considered local to one another.
### Overriding the locality fail-over
Sometimes, you need to constrain the traffic fail-over to avoid sending traffic to
endpoints across the globe when there are not enough healthy endpoints in the
same region. This behavior is useful when sending fail-over traffic across regions
would not improve service health, or for other reasons such as regulatory controls.
To constrain traffic to a region, configure the `meshConfig.localityLbSetting` option during install. See the
[Locality load balancing reference guide](/docs/reference/config/networking/destination-rule#LocalityLoadBalancerSetting)
for options.
An example configuration:
{{< text yaml >}}
meshConfig:
  localityLbSetting:
    enabled: true
    failover:
      - from: us-east
        to: eu-west
      - from: us-west
        to: us-east
{{< /text >}}
## Locality-weighted load balancing
Locality-weighted load balancing distributes user-defined percentages of traffic to certain localities.
For example, if we want to keep 80% of traffic within our region, and send 20% of traffic out of region:
{{< text yaml >}}
meshConfig:
  localityLbSetting:
    enabled: true
    distribute:
      - from: "us-central1/*"
        to:
          "us-central1/*": 80
          "us-central2/*": 20
{{< /text >}}

View File

@ -367,7 +367,7 @@ the cluster, enabling cross-cluster load balancing for these services.
By default, Istio will load balance requests evenly between endpoints in
each cluster. In large systems that span geographic regions, it may be
desirable to use [locality load balancing](/docs/ops/configuration/traffic-management/locality-load-balancing)
desirable to use [locality load balancing](/docs/tasks/traffic-management/locality-load-balancing)
to prefer that traffic stay in the same zone or region.
In some advanced scenarios, load balancing across clusters may not be desired.

View File

@ -0,0 +1,59 @@
---
title: Locality Load Balancing
description: This series of tasks demonstrates how to configure locality load balancing in Istio.
weight: 65
icon: tasks
keywords: [locality,load balancing,priority,prioritized,kubernetes,multicluster]
simple_list: true
content_above: true
aliases:
- /help/ops/traffic-management/locality-load-balancing
- /help/ops/locality-load-balancing
- /help/tasks/traffic-management/locality-load-balancing
- /docs/ops/traffic-management/locality-load-balancing
- /docs/tasks/traffic-management/locality-load-balancing
owner: istio/wg-networking-maintainers
test: n/a
---
A *locality* defines the geographic location of a
{{< gloss >}}workload instance{{</ gloss >}} within your mesh. The following
triplet defines a locality:
- **Region**: Represents a large geographic area, such as *us-east*. A region
typically contains a number of availability *zones*. In Kubernetes, the label
[`topology.kubernetes.io/region`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
determines a node's region.
- **Zone**: A set of compute resources within a region. By running services in
multiple zones within a region, failover can occur between zones within the
region while maintaining data locality with the end-user. In Kubernetes, the
label [`topology.kubernetes.io/zone`](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
determines a node's zone.
- **Sub-zone**: Allows administrators to further subdivide zones for more
fine-grained control, such as "same rack". The sub-zone concept doesn't exist
in Kubernetes. As a result, Istio introduced the custom node label
[`topology.istio.io/subzone`](https://github.com/istio/api/blob/master/label/label.go#L42)
to define a sub-zone.
{{< tip >}}
If you are using a hosted Kubernetes service, your cloud provider should
configure the region and zone labels for you. If you are running your own
Kubernetes cluster, you will need to add these labels to your nodes.
{{< /tip >}}
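On a self-managed cluster, the labels end up on the `Node` object as shown in this sketch (`my-node` is a placeholder name; the labels are typically applied with `kubectl label node` rather than by editing the object):

```yaml
# Sketch of a Node carrying locality labels; "my-node" is a placeholder.
apiVersion: v1
kind: Node
metadata:
  name: my-node
  labels:
    topology.kubernetes.io/region: region1
    topology.kubernetes.io/zone: zone1
    topology.istio.io/subzone: subzone1
```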
Localities are hierarchical, matching in the following order:
1. Region
1. Zone
1. Sub-zone
That means that a pod running in zone `bar` of region `foo`
is **not** considered to be local to a pod running in zone `bar` of region
`baz`.
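The matching order can be illustrated with a small helper (a hypothetical sketch, not part of Istio) that reports how many leading components of two `region/zone/subzone` triplets agree:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report how deeply two locality triplets
# (region/zone/subzone) match, in hierarchical order.
match_level() {
  IFS='/' read -r r1 z1 s1 <<< "$1"
  IFS='/' read -r r2 z2 s2 <<< "$2"
  local level=0
  [[ "$r1" == "$r2" ]] || { echo "$level"; return; }
  level=1
  [[ "$z1" == "$z2" ]] || { echo "$level"; return; }
  level=2
  [[ "$s1" == "$s2" ]] || { echo "$level"; return; }
  echo 3
}

match_level "foo/bar" "baz/bar"   # regions differ, so nothing matches
match_level "foo/bar" "foo/qux"   # same region, different zone
```

Because the region is compared first, `foo/bar` and `baz/bar` match at no level even though the zone names are identical.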
Istio uses this locality information to control load balancing behavior.
Follow one of the tasks in this series to configure locality load balancing for
your mesh.

View File

@ -0,0 +1,160 @@
---
title: Before you begin
description: Initial steps before configuring locality load balancing.
weight: 1
icon: tasks
keywords: [locality,load balancing,priority,prioritized,kubernetes,multicluster]
test: yes
owner: istio/wg-networking-maintainers
---
Before you begin tasks for locality load balancing, you must first
[install Istio on multiple clusters](/docs/setup/install/multicluster). The
clusters must span three regions, containing four availability zones. The
number of clusters required may vary based on the capabilities offered by
your cloud provider.
{{< tip >}}
For simplicity, we will assume that there is only a single
{{< gloss >}}primary cluster{{< /gloss >}} in the mesh. This simplifies
the process of configuring the control plane, since changes only need to be
applied to one cluster.
{{< /tip >}}
We will deploy several instances of the `HelloWorld` application as follows:
{{< image width="75%"
link="setup.svg"
caption="Setup for locality load balancing tasks"
>}}
## Environment Variables
This guide assumes that all clusters will be accessed through contexts in the
default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
The following environment variables will be used for the various contexts:
Variable | Description
-------- | -----------
`CTX_PRIMARY` | The context used for applying configuration to the primary cluster.
`CTX_R1_Z1` | The context used to interact with pods in `region1.zone1`.
`CTX_R1_Z2` | The context used to interact with pods in `region1.zone2`.
`CTX_R2_Z3` | The context used to interact with pods in `region2.zone3`.
`CTX_R3_Z4` | The context used to interact with pods in `region3.zone4`.
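For example, if each zone maps to its own cluster with contexts named `cluster1` through `cluster4` (hypothetical names; use the contexts listed by `kubectl config get-contexts` for your setup), the variables could be set as follows:

```shell
# Hypothetical context names; substitute your own contexts.
export CTX_PRIMARY="cluster1"
export CTX_R1_Z1="cluster1"
export CTX_R1_Z2="cluster2"
export CTX_R2_Z3="cluster3"
export CTX_R3_Z4="cluster4"
```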
## Create the `sample` namespace
To begin, generate YAML for the `sample` namespace with automatic sidecar
injection enabled:
{{< text bash >}}
$ cat <<EOF > sample.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
  labels:
    istio-injection: enabled
EOF
{{< /text >}}
Add the `sample` namespace to each cluster:
{{< text bash >}}
$ for CTX in "$CTX_PRIMARY" "$CTX_R1_Z1" "$CTX_R1_Z2" "$CTX_R2_Z3" "$CTX_R3_Z4"; \
do \
kubectl --context="$CTX" apply -f sample.yaml; \
done
{{< /text >}}
## Deploy `HelloWorld`
Generate the `HelloWorld` YAML for each locality, using the
locality as the version string:
{{< text bash >}}
$ for LOC in "region1.zone1" "region1.zone2" "region2.zone3" "region3.zone4"; \
do \
./@samples/helloworld/gen-helloworld.sh@ \
--version "$LOC" > "helloworld-${LOC}.yaml"; \
done
{{< /text >}}
Apply the `HelloWorld` YAML to the appropriate cluster for each locality:
{{< text bash >}}
$ kubectl apply --context="${CTX_R1_Z1}" -n sample \
-f helloworld-region1.zone1.yaml
{{< /text >}}
{{< text bash >}}
$ kubectl apply --context="${CTX_R1_Z2}" -n sample \
-f helloworld-region1.zone2.yaml
{{< /text >}}
{{< text bash >}}
$ kubectl apply --context="${CTX_R2_Z3}" -n sample \
-f helloworld-region2.zone3.yaml
{{< /text >}}
{{< text bash >}}
$ kubectl apply --context="${CTX_R3_Z4}" -n sample \
-f helloworld-region3.zone4.yaml
{{< /text >}}
## Deploy `Sleep`
Deploy the `Sleep` application to `region1.zone1`:
{{< text bash >}}
$ kubectl apply --context="${CTX_R1_Z1}" \
-f @samples/sleep/sleep.yaml@ -n sample
{{< /text >}}
## Wait for `HelloWorld` pods
Wait until the `HelloWorld` pods in each zone are `Running`:
{{< text bash >}}
$ kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app="helloworld" \
-l version="region1.zone1"
NAME READY STATUS RESTARTS AGE
helloworld-region1.zone1-86f77cd7b-cpxhv 2/2 Running 0 30s
{{< /text >}}
{{< text bash >}}
$ kubectl get pod --context="${CTX_R1_Z2}" -n sample -l app="helloworld" \
-l version="region1.zone2"
NAME READY STATUS RESTARTS AGE
helloworld-region1.zone2-86f77cd7b-cpxhv 2/2 Running 0 30s
{{< /text >}}
{{< text bash >}}
$ kubectl get pod --context="${CTX_R2_Z3}" -n sample -l app="helloworld" \
-l version="region2.zone3"
NAME READY STATUS RESTARTS AGE
helloworld-region2.zone3-86f77cd7b-cpxhv 2/2 Running 0 30s
{{< /text >}}
{{< text bash >}}
$ kubectl get pod --context="${CTX_R3_Z4}" -n sample -l app="helloworld" \
-l version="region3.zone4"
NAME READY STATUS RESTARTS AGE
helloworld-region3.zone4-86f77cd7b-cpxhv 2/2 Running 0 30s
{{< /text >}}
**Congratulations!** You successfully configured the system and are now ready
to begin the locality load balancing tasks!
## Next steps
You can now configure one of the following load balancing options:
- [Locality failover](/docs/tasks/traffic-management/locality-load-balancing/failover)
- [Locality weighted distribution](/docs/tasks/traffic-management/locality-load-balancing/distribute)
{{< warning >}}
Only one of the load balancing options should be configured, as they are
mutually exclusive. Attempting to configure both may lead to unexpected
behavior.
{{< /warning >}}

File diff suppressed because one or more lines are too long


View File

@ -0,0 +1,112 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/tasks/traffic-management/locality-load-balancing/before-you-begin/index.md
####################################################################################################
snip_create_the_sample_namespace_1() {
cat <<EOF > sample.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
  labels:
    istio-injection: enabled
EOF
}
snip_create_the_sample_namespace_2() {
for CTX in "$CTX_PRIMARY" "$CTX_R1_Z1" "$CTX_R1_Z2" "$CTX_R2_Z3" "$CTX_R3_Z4"; \
do \
kubectl --context="$CTX" apply -f sample.yaml; \
done
}
snip_deploy_helloworld_1() {
for LOC in "region1.zone1" "region1.zone2" "region2.zone3" "region3.zone4"; \
do \
./samples/helloworld/gen-helloworld.sh \
--version "$LOC" > "helloworld-${LOC}.yaml"; \
done
}
snip_deploy_helloworld_2() {
kubectl apply --context="${CTX_R1_Z1}" -n sample \
-f helloworld-region1.zone1.yaml
}
snip_deploy_helloworld_3() {
kubectl apply --context="${CTX_R1_Z2}" -n sample \
-f helloworld-region1.zone2.yaml
}
snip_deploy_helloworld_4() {
kubectl apply --context="${CTX_R2_Z3}" -n sample \
-f helloworld-region2.zone3.yaml
}
snip_deploy_helloworld_5() {
kubectl apply --context="${CTX_R3_Z4}" -n sample \
-f helloworld-region3.zone4.yaml
}
snip_deploy_sleep_1() {
kubectl apply --context="${CTX_R1_Z1}" \
-f samples/sleep/sleep.yaml -n sample
}
snip_wait_for_helloworld_pods_1() {
kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app="helloworld" \
-l version="region1.zone1"
}
! read -r -d '' snip_wait_for_helloworld_pods_1_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-region1.zone1-86f77cd7b-cpxhv 2/2 Running 0 30s
ENDSNIP
snip_wait_for_helloworld_pods_2() {
kubectl get pod --context="${CTX_R1_Z2}" -n sample -l app="helloworld" \
-l version="region1.zone2"
}
! read -r -d '' snip_wait_for_helloworld_pods_2_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-region1.zone2-86f77cd7b-cpxhv 2/2 Running 0 30s
ENDSNIP
snip_wait_for_helloworld_pods_3() {
kubectl get pod --context="${CTX_R2_Z3}" -n sample -l app="helloworld" \
-l version="region2.zone3"
}
! read -r -d '' snip_wait_for_helloworld_pods_3_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-region2.zone3-86f77cd7b-cpxhv 2/2 Running 0 30s
ENDSNIP
snip_wait_for_helloworld_pods_4() {
kubectl get pod --context="${CTX_R3_Z4}" -n sample -l app="helloworld" \
-l version="region3.zone4"
}
! read -r -d '' snip_wait_for_helloworld_pods_4_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-region3.zone4-86f77cd7b-cpxhv 2/2 Running 0 30s
ENDSNIP

View File

@ -0,0 +1,135 @@
#!/usr/bin/env bash
# shellcheck disable=SC2034
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Initialize KUBE_CONTEXTS
_set_kube_vars
# Include the before you begin tasks.
source content/en/docs/tasks/traffic-management/locality-load-balancing/before-you-begin/snips.sh
set -e
set -u
set -o pipefail
function set_env_vars
{
# All use the same cluster.
export CTX_PRIMARY="${KUBE_CONTEXTS[0]}"
export CTX_R1_Z1="${KUBE_CONTEXTS[0]}"
export CTX_R1_Z2="${KUBE_CONTEXTS[0]}"
export CTX_R2_Z3="${KUBE_CONTEXTS[0]}"
export CTX_R3_Z4="${KUBE_CONTEXTS[0]}"
}
function deploy_services
{
echo "Creating the sample namespace"
snip_create_the_sample_namespace_1
snip_create_the_sample_namespace_2
echo "Generating HelloWorld YAML"
snip_deploy_helloworld_1
echo "Adding istio-locality label to YAML"
for LOC in "region1.zone1" "region1.zone2" "region2.zone3" "region3.zone4";
do
add_locality_label "helloworld-${LOC}.yaml" "$LOC"
done
echo "Deploying HelloWorld"
snip_deploy_helloworld_2
snip_deploy_helloworld_3
snip_deploy_helloworld_4
snip_deploy_helloworld_5
echo "Deploying Sleep"
# Make a copy of sleep.yaml.
cp "samples/sleep/sleep.yaml" "samples/sleep/sleep.yaml.original"
# Add the locality label to sleep.yaml
add_locality_label "samples/sleep/sleep.yaml" "region1.zone1"
# Deploy sleep
snip_deploy_sleep_1
# Restore the original file.
mv -f "samples/sleep/sleep.yaml.original" "samples/sleep/sleep.yaml"
echo "Waiting for HelloWorld pods"
_verify_like snip_wait_for_helloworld_pods_1 "$snip_wait_for_helloworld_pods_1_out"
_verify_like snip_wait_for_helloworld_pods_2 "$snip_wait_for_helloworld_pods_2_out"
_verify_like snip_wait_for_helloworld_pods_3 "$snip_wait_for_helloworld_pods_3_out"
_verify_like snip_wait_for_helloworld_pods_4 "$snip_wait_for_helloworld_pods_4_out"
}
function add_locality_label
{
local file="$1"
local locality="$2"
local nl=$'\n'
local output=""
local in_deployment=false
while IFS= read -r line
do
# We only want to add the locality label to deployments, so track when
# we're inside a deployment.
if [[ "$line" =~ ^kind:[[:space:]]([a-zA-Z]+)$ ]]; then
if [[ "${BASH_REMATCH[1]}" == "Deployment" ]]; then
in_deployment=true
else
in_deployment=false
fi
fi
# When we find an app label in the deployment, add the locality label
# right after.
if [[ "$in_deployment" == "true" && $line =~ ([[:space:]]+)app:[[:space:]](.*) ]]; then
output+="${line}${nl}"
output+="${BASH_REMATCH[1]}istio-locality: ${locality}${nl}"
else
output+="${line}${nl}"
fi
done < "$file"
# Overwrite the original file.
echo "$output" > "$file"
}
function verify_traffic
{
local func=$1
local expected=$2
# Require that we match the locality multiple times in a row.
VERIFY_CONSECUTIVE=10
# Verify that all traffic now goes to region1.zone2
_verify_like "$func" "$expected"
unset VERIFY_CONSECUTIVE
}
function cleanup
{
rm -f sample.yaml helloworld-region*.zone*.yaml
# Delete the sample namespaces in each cluster
echo "Deleting sample namespace in all clusters"
for CTX in "$CTX_PRIMARY" "$CTX_R1_Z1" "$CTX_R1_Z2" "$CTX_R2_Z3" "$CTX_R3_Z4"; do
kubectl delete ns sample --context="$CTX" --ignore-not-found=true
done
# Everything should be removed once cleanup completes. Use a small
# timeout for comparing cluster snapshots before/after the test.
export VERIFY_TIMEOUT=20
}

View File

@ -0,0 +1,78 @@
---
title: Locality weighted distribution
description: This guide demonstrates how to configure locality distribution.
weight: 20
icon: tasks
keywords: [locality,load balancing,kubernetes,multicluster]
test: yes
owner: istio/wg-networking-maintainers
---
Follow this guide to configure the distribution of traffic across localities.
Before proceeding, be sure to complete the steps under
[before you begin](/docs/tasks/traffic-management/locality-load-balancing/before-you-begin).
In this task, we will use the `Sleep` pod in `region1.zone1` as the source of
requests to the `HelloWorld` service. We will configure Istio with the following
distribution across localities:
Region | Zone | % of traffic
------ | ---- | ------------
`region1` | `zone1` | 70
`region1` | `zone2` | 20
`region2` | `zone3` | 0
`region3` | `zone4` | 10
## Configure Weighted Distribution
Apply a `DestinationRule` that configures the following:
- [Outlier detection](/docs/reference/config/networking/destination-rule/#OutlierDetection)
for the `HelloWorld` service. This is required in order for distribution to
function properly. In particular, it configures the sidecar proxies to know
when endpoints for a service are unhealthy.
- [Weighted Distribution](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/locality_weight.html?highlight=weight)
for the `HelloWorld` service as described in the table above.
{{< text bash >}}
$ kubectl --context="${CTX_PRIMARY}" apply -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: region1/zone1/*
          to:
            "region1/zone1/*": 70
            "region1/zone2/*": 20
            "region3/zone4/*": 10
    outlierDetection:
      consecutive5xxErrors: 100
      interval: 1s
      baseEjectionTime: 10m
EOF
{{< /text >}}
## Verify the distribution
Call the `HelloWorld` service from the `Sleep` pod:
{{< text bash >}}
$ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
{{< /text >}}
Repeat this a number of times and verify that the proportion of replies
from each locality matches the expected percentages in the table at the top of
this guide.
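To make the tally easier, the replies can be piped through a small counting helper (a sketch, not part of the task; it assumes each reply has the `Hello version: <locality>, instance: ...` shape shown above):

```shell
# Count replies per locality. Reads "Hello version: ..." lines on
# stdin; in practice, pipe in the output of repeated curl calls.
tally_replies() {
  sed -n 's/^Hello version: \([^,]*\),.*/\1/p' | sort | uniq -c | sort -rn
}

# Canned example input (real runs would use live replies):
printf '%s\n' \
  "Hello version: region1.zone1, instance: helloworld-region1.zone1-a" \
  "Hello version: region1.zone1, instance: helloworld-region1.zone1-b" \
  "Hello version: region1.zone2, instance: helloworld-region1.zone2-c" \
  | tally_replies
```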
**Congratulations!** You successfully configured locality distribution!

View File

@ -0,0 +1,53 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/tasks/traffic-management/locality-load-balancing/distribute/index.md
####################################################################################################
snip_configure_weighted_distribution_1() {
kubectl --context="${CTX_PRIMARY}" apply -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: region1/zone1/*
          to:
            "region1/zone1/*": 70
            "region1/zone2/*": 20
            "region3/zone4/*": 10
    outlierDetection:
      consecutive5xxErrors: 100
      interval: 1s
      baseEjectionTime: 10m
EOF
}
snip_verify_the_distribution_1() {
kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
}

View File

@ -0,0 +1,103 @@
#!/usr/bin/env bash
# shellcheck disable=SC2034,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# @setup profile=default
source content/en/docs/tasks/traffic-management/locality-load-balancing/common.sh
set -e
set -u
set -o pipefail
function configureDistribution
{
echo "Applying configuration for locality distribution"
snip_configure_weighted_distribution_1
# Wait a bit for the change to propagate.
sleep 5
}
function verifyDistribution
{
echo "Verifying the distribution"
# Gather the totals that reach each zone.
local z1=0
local z2=0
local z3=0
local z4=0
for i in {1..50}; do
# Send traffic to HelloWorld and get the reply.
out="$(snip_verify_the_distribution_1)"
echo "$out"
# See which zone replied.
if [[ "$out" == *"region1.zone1"* ]]; then
z1=$(( z1 + 1 ))
elif [[ "$out" == *"region1.zone2"* ]]; then
z2=$(( z2 + 1 ))
elif [[ "$out" == *"region2.zone3"* ]]; then
z3=$(( z3 + 1 ))
elif [[ "$out" == *"region3.zone4"* ]]; then
z4=$(( z4 + 1 ))
else
echo "Unexpected response from HelloWorld: $out"
exit 1
fi
done
# Scale the numbers so that they total 100.
z1=$(( z1 * 2 ))
z2=$(( z2 * 2 ))
z3=$(( z3 * 2 ))
z4=$(( z4 * 2 ))
echo "Actual locality distribution:"
echo "region1.zone1: ${z1}"
echo "region1.zone2: ${z2}"
echo "region2.zone3: ${z3}"
echo "region3.zone4: ${z4}"
if ((z1 < 60 || z1 > 80)); then
echo "Invalid locality distribution to region1.zone1: $z1. Expected: 70"
exit 1
elif ((z2 < 10 || z2 > 30)); then
echo "Invalid locality distribution to region1.zone2: $z2. Expected: 20"
exit 1
elif ((z3 > 0)); then
echo "Invalid locality distribution to region2.zone3: $z3. Expected: 0"
exit 1
elif ((z4 < 5 || z4 > 20)); then
echo "Invalid locality distribution to region3.zone4: $z4. Expected: 10"
exit 1
fi
}
set_env_vars
deploy_services
configureDistribution
verifyDistribution
# @cleanup
set +e # ignore cleanup errors
source content/en/docs/tasks/traffic-management/locality-load-balancing/common.sh
set_env_vars
cleanup

View File

@ -0,0 +1,177 @@
---
title: Locality failover
description: This task demonstrates how to configure your mesh for locality failover.
weight: 10
icon: tasks
keywords: [locality,load balancing,priority,prioritized,kubernetes,multicluster]
test: yes
owner: istio/wg-networking-maintainers
---
Follow this guide to configure your mesh for locality failover.
Before proceeding, be sure to complete the steps under
[before you begin](/docs/tasks/traffic-management/locality-load-balancing/before-you-begin).
In this task, we will use the `Sleep` pod in `region1.zone1` as the source of
requests to the `HelloWorld` service. We will then trigger failures that will
cause failover between localities in the following sequence:
{{< image width="75%"
link="sequence.svg"
caption="Locality failover sequence"
>}}
Internally, [Envoy priorities](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/priority.html)
are used to control failover. These priorities will be assigned as follows for
traffic originating from the `Sleep` pod (in `region1.zone1`):
Priority | Locality | Details
-------- | -------- | -------
0 | `region1.zone1` | Region, zone, and sub-zone all match.
1 | None | Since we're not using sub-zones, there are no matches for a different sub-zone.
2 | `region1.zone2` | Different zone within the same region.
3 | `region2.zone3` | No match, however failover is defined for `region1`->`region2`.
4 | `region3.zone4` | No match and no failover defined for `region1`->`region3`.
## Configure locality failover
Apply a `DestinationRule` that configures the following:
- [Outlier detection](/docs/reference/config/networking/destination-rule/#OutlierDetection)
for the `HelloWorld` service. This is required in order for failover to
function properly. In particular, it configures the sidecar proxies to know
when endpoints for a service are unhealthy, eventually triggering a failover
to the next locality.
- [Failover](/docs/reference/config/networking/destination-rule/#LocalityLoadBalancerSetting-Failover)
policy between regions. This ensures that failover beyond a region boundary
will behave predictably.
- [Connection Pool](/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-http)
policy that forces each HTTP request to use a new connection. This task utilizes
Envoy's [drain](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/draining)
function to force a failover to the next locality. Once drained, Envoy will reject
new connection requests. Since each request uses a new connection, this results in failover
immediately following a drain. **This configuration is used for demonstration purposes only.**
{{< text bash >}}
$ kubectl --context="${CTX_PRIMARY}" apply -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
    loadBalancer:
      simple: ROUND_ROBIN
      localityLbSetting:
        enabled: true
        failover:
        - from: region1
          to: region2
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 10m
EOF
{{< /text >}}
## Verify traffic stays in `region1.zone1`
Call the `HelloWorld` service from the `Sleep` pod:
{{< text bash >}}
$ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv
{{< /text >}}
Verify that the `version` in the response is `region1.zone1`.
Repeat this several times and verify that the response is always the same.
## Failover to `region1.zone2`
Next, we trigger a failover to `region1.zone2`. To do this, we drain the
Envoy sidecar proxy for `HelloWorld` in `region1.zone1`:
{{< text bash >}}
$ kubectl --context="${CTX_R1_Z1}" exec \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app=helloworld \
-l version=region1.zone1 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
{{< /text >}}
Call the `HelloWorld` service from the `Sleep` pod:
{{< text bash >}}
$ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
Hello version: region1.zone2, instance: helloworld-region1.zone2-86f77cd7b-cpxhv
{{< /text >}}
The first call will fail, which triggers the failover. Repeat the command
several more times and verify that the `version` in the response is always
`region1.zone2`.
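The repeat-and-verify step can be sketched as a small shell loop. The `verify_locality` and `fake_call` names below are hypothetical; the stub stands in for the real curl through the `Sleep` pod:

```shell
# Sketch of a repeat-and-check loop: call a command n times and require
# every response to report the expected locality.
verify_locality() {
  expected="$1"; n="$2"
  shift 2
  for _ in $(seq "$n"); do
    resp="$("$@")" || return 1
    case "$resp" in
      *"version: ${expected},"*) ;;
      *) echo "unexpected response: $resp" >&2; return 1 ;;
    esac
  done
  echo "all ${n} responses came from ${expected}"
}

# Stub in place of the real curl through the Sleep pod:
fake_call() { printf 'Hello version: region1.zone2, instance: helloworld-region1.zone2-86f77cd7b-cpxhv\n'; }
verify_locality region1.zone2 5 fake_call
# -> all 5 responses came from region1.zone2
```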
## Failover to `region2.zone3`
Now trigger a failover to `region2.zone3`. As before, we drain the
Envoy sidecar proxy for `HelloWorld` in `region1.zone2`, causing calls to it to fail:
{{< text bash >}}
$ kubectl --context="${CTX_R1_Z2}" exec \
"$(kubectl get pod --context="${CTX_R1_Z2}" -n sample -l app=helloworld \
-l version=region1.zone2 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
{{< /text >}}
Call the `HelloWorld` service from the `Sleep` pod:
{{< text bash >}}
$ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
Hello version: region2.zone3, instance: helloworld-region2.zone3-86f77cd7b-cpxhv
{{< /text >}}
The first call will fail, which triggers the failover. Repeat the command
several more times and verify that the `version` in the response is always
`region2.zone3`.
## Failover to `region3.zone4`
Now trigger a failover to `region3.zone4`. As before, we drain the
Envoy sidecar proxy for `HelloWorld` in `region2.zone3`, causing calls to it to fail:
{{< text bash >}}
$ kubectl --context="${CTX_R2_Z3}" exec \
"$(kubectl get pod --context="${CTX_R2_Z3}" -n sample -l app=helloworld \
-l version=region2.zone3 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
{{< /text >}}
Call the `HelloWorld` service from the `Sleep` pod:
{{< text bash >}}
$ kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
Hello version: region3.zone4, instance: helloworld-region3.zone4-86f77cd7b-cpxhv
{{< /text >}}
The first call will fail, which triggers the failover. Repeat the command
several more times and verify that the `version` in the response is always
`region3.zone4`.
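The failover order exercised in this task (same zone first, then another zone in the same region, then the configured failover region) can be summarized in a small sketch. The `serving_locality` helper is hypothetical; it simply encodes the fixed priority order observed above:

```shell
# The locality priority order exercised in this task, highest first.
priority_order="region1.zone1 region1.zone2 region2.zone3 region3.zone4"

serving_locality() {
  # Arguments: the set of drained (unhealthy) localities.
  # Prints the highest-priority locality that is still healthy.
  for loc in $priority_order; do
    drained=false
    for d in "$@"; do
      [ "$loc" = "$d" ] && drained=true
    done
    if [ "$drained" = false ]; then
      echo "$loc"
      return 0
    fi
  done
  return 1  # nothing healthy
}

serving_locality region1.zone1 region1.zone2
# -> region2.zone3
```

This mirrors what Envoy's priority-based load balancing does once outlier detection ejects the drained endpoints.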
**Congratulations!** You successfully configured locality failover!


@@ -0,0 +1,112 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/tasks/traffic-management/locality-load-balancing/failover/index.md
####################################################################################################
snip_configure_locality_failover_1() {
kubectl --context="${CTX_PRIMARY}" apply -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: helloworld
spec:
host: helloworld.sample.svc.cluster.local
trafficPolicy:
connectionPool:
http:
maxRequestsPerConnection: 1
loadBalancer:
simple: ROUND_ROBIN
localityLbSetting:
enabled: true
failover:
- from: region1
to: region2
outlierDetection:
consecutive5xxErrors: 1
interval: 1s
baseEjectionTime: 10m
EOF
}
snip_verify_traffic_stays_in_region1zone1_1() {
kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
}
! read -r -d '' snip_verify_traffic_stays_in_region1zone1_1_out <<\ENDSNIP
Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv
ENDSNIP
snip_failover_to_region1zone2_1() {
kubectl --context="${CTX_R1_Z1}" exec \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l app=helloworld \
-l version=region1.zone1 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
}
snip_failover_to_region1zone2_2() {
kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
}
! read -r -d '' snip_failover_to_region1zone2_2_out <<\ENDSNIP
Hello version: region1.zone2, instance: helloworld-region1.zone2-86f77cd7b-cpxhv
ENDSNIP
snip_failover_to_region2zone3_1() {
kubectl --context="${CTX_R1_Z2}" exec \
"$(kubectl get pod --context="${CTX_R1_Z2}" -n sample -l app=helloworld \
-l version=region1.zone2 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
}
snip_failover_to_region2zone3_2() {
kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
}
! read -r -d '' snip_failover_to_region2zone3_2_out <<\ENDSNIP
Hello version: region2.zone3, instance: helloworld-region2.zone3-86f77cd7b-cpxhv
ENDSNIP
snip_failover_to_region3zone4_1() {
kubectl --context="${CTX_R2_Z3}" exec \
"$(kubectl get pod --context="${CTX_R2_Z3}" -n sample -l app=helloworld \
-l version=region2.zone3 -o jsonpath='{.items[0].metadata.name}')" \
-n sample -c istio-proxy -- curl -sL -X POST 127.0.0.1:15000/drain_listeners
}
snip_failover_to_region3zone4_2() {
kubectl exec --context="${CTX_R1_Z1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_R1_Z1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sL helloworld.sample:5000/hello
}
! read -r -d '' snip_failover_to_region3zone4_2_out <<\ENDSNIP
Hello version: region3.zone4, instance: helloworld-region3.zone4-86f77cd7b-cpxhv
ENDSNIP


@@ -0,0 +1,79 @@
#!/usr/bin/env bash
# shellcheck disable=SC2034,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# @setup profile=default
source content/en/docs/tasks/traffic-management/locality-load-balancing/common.sh
set -e
set -u
set -o pipefail
function verify_traffic_region1_zone1
{
echo "Verifying all traffic stays in region1.zone1"
snip_configure_locality_failover_1
verify_traffic snip_verify_traffic_stays_in_region1zone1_1 "$snip_verify_traffic_stays_in_region1zone1_1_out"
}
function failover_to_region1_zone2
{
echo "Triggering failover to region1.zone2"
# Terminate the Envoy on the region1.zone1 pod.
snip_failover_to_region1zone2_1
# Verify that all traffic now goes to region1.zone2
verify_traffic snip_failover_to_region1zone2_2 "$snip_failover_to_region1zone2_2_out"
}
function failover_to_region2_zone3
{
echo "Triggering failover to region2.zone3"
# Terminate the Envoy on the region1.zone2 pod.
snip_failover_to_region2zone3_1
# Verify that all traffic now goes to region2.zone3
verify_traffic snip_failover_to_region2zone3_2 "$snip_failover_to_region2zone3_2_out"
}
function failover_to_region3_zone4
{
echo "Triggering failover to region3.zone4"
# Terminate the Envoy on the region2.zone3 pod.
snip_failover_to_region3zone4_1
# Verify that all traffic now goes to region3.zone4
verify_traffic snip_failover_to_region3zone4_2 "$snip_failover_to_region3zone4_2_out"
}
set_env_vars
deploy_services
verify_traffic_region1_zone1
failover_to_region1_zone2
failover_to_region2_zone3
failover_to_region3_zone4
# @cleanup
set +e # ignore cleanup errors
source content/en/docs/tasks/traffic-management/locality-load-balancing/common.sh
set_env_vars
cleanup
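The `verify_traffic` helper used by the script above comes from `common.sh`, which is not shown in this diff. A minimal sketch of what such a helper might look like — the retry count, delay, and `snip_stub` stand-in are all assumptions, not the real implementation:

```shell
# Hypothetical sketch of verify_traffic: run the snippet function until its
# output matches the expected text, tolerating the first failed call that
# triggers failover.
verify_traffic() {
  snip="$1"; expected="$2"
  attempts=0
  while [ "$attempts" -lt 5 ]; do
    attempts=$((attempts + 1))
    out="$("$snip" 2>/dev/null)"
    if [ "$out" = "$expected" ]; then
      echo "verified after ${attempts} attempt(s)"
      return 0
    fi
    sleep 1
  done
  echo "expected '$expected' but got '$out'" >&2
  return 1
}

# Example with a stub snippet function:
snip_stub() { echo "Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv"; }
verify_traffic snip_stub "Hello version: region1.zone1, instance: helloworld-region1.zone1-86f77cd7b-cpxhv"
# -> verified after 1 attempt(s)
```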


@@ -13,7 +13,7 @@ aliases:
## Traffic management
- **Improved** [locality based routing](/docs/ops/configuration/traffic-management/locality-load-balancing/) in multicluster environments.
- **Improved** [locality based routing](/docs/tasks/traffic-management/locality-load-balancing/) in multicluster environments.
- **Improved** outbound traffic policy in [`ALLOW_ANY` mode](https://archive.istio.io/v1.2/docs/reference/config/installation-options/#global-options). Traffic for unknown HTTP/HTTPS hosts on an existing port will be [forwarded as is](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services). Unknown traffic will be logged in Envoy access logs.
- **Added** support for setting HTTP idle timeouts to upstream services.
- **Improved** Sidecar support for [NONE mode](/docs/reference/config/networking/sidecar/#CaptureMode) (without iptables) .


@@ -238,16 +238,17 @@ func newClusterSnapshot(client kube.Client, contextName string) (ClusterSnapshot
return nil
})
}
if err := wg.Wait(); err != nil {
return nilVal, err
}
sort.Strings(clusterSN.Namespaces)
sort.Slice(clusterSN.NamespaceSnapshots, func(i, j int) bool {
return strings.Compare(clusterSN.NamespaceSnapshots[i].Namespace,
clusterSN.NamespaceSnapshots[j].Namespace) < 0
})
if err := wg.Wait(); err != nil {
return nilVal, err
}
return clusterSN, nil
}


@@ -99,7 +99,7 @@ func (s SnapshotValidator) run(ctx framework.TestContext) {
if actual != expected {
// Retriable error.
return nil, true, fmt.Errorf("snapshots are different: \n%v", diffText)
return nil, false, fmt.Errorf("snapshots are different: \n%v", diffText)
}
return nil, true, nil
}, snapshotRetryTimeout, snapshotRetryDelay); err != nil {