---
id: pod-network-latency
title: Pod Network Latency Experiment Details
sidebar_label: Pod Network Latency
original_id: pod-network-latency
---
Experiment Metadata
| Type | Description | Tested K8s Platform |
|---|---|---|
| Generic | Inject Network Latency Into Application Pod | GKE, Packet (Kubeadm), EKS, Minikube > v1.6.0, AKS |
Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `pod-network-latency` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here. (Example verification commands for both checks are shown below.)
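For reference, the two checks above can be run as follows; the `litmus` and `default` namespaces are assumptions matching the typical setup used elsewhere on this page:

```bash
# Verify that the Litmus Chaos Operator pod is running in the operator namespace
kubectl get pods -n litmus

# Verify that the pod-network-latency ChaosExperiment CR is available
# in the desired (application) namespace
kubectl get chaosexperiments -n default
```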
Entry Criteria
- Application pods are healthy before chaos injection
Exit Criteria
- Application pods are healthy post chaos injection
Details
- The application pod should be healthy once chaos is stopped. Service requests should be served despite chaos.
- Causes flaky access to the application replica by injecting network delay using pumba.
- Injects latency on the specified container by starting a traffic control (tc) process with netem rules to add egress delays (an illustrative rule is shown after this list).
- Latency is injected via the pumba library with the `pumba netem delay` command, passing the relevant network interface, latency, chaos duration and a regex filter for the container name.
- Can be used to test the application's resilience to lossy/flaky networks.
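For illustration only, the kind of egress delay such a netem rule introduces amounts to roughly the following, assuming `eth0` as the interface and the default 60000 ms latency. The actual commands are run by the chaos library inside the target container's network namespace, not by the user:

```bash
# Add a 60000ms egress delay on eth0 (what netem-based latency injection boils down to)
tc qdisc add dev eth0 root netem delay 60000ms

# Remove the rule once the chaos duration elapses
tc qdisc del dev eth0 root
```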
Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
Sample RBAC Manifest

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-network-latency-sa
  namespace: default
  labels:
    name: pod-network-latency-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: pod-network-latency-sa
  namespace: default
  labels:
    name: pod-network-latency-sa
rules:
  - apiGroups: ["", "litmuschaos.io", "batch"]
    resources:
      [
        "pods",
        "jobs",
        "pods/log",
        "events",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs:
      ["create", "list", "get", "patch", "update", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pod-network-latency-sa
  namespace: default
  labels:
    name: pod-network-latency-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-network-latency-sa
subjects:
  - kind: ServiceAccount
    name: pod-network-latency-sa
    namespace: default
```
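If the manifest above is saved locally, it can be applied as shown below; the file name `rbac.yaml` is just an illustrative assumption:

```bash
# Create the ServiceAccount, Role and RoleBinding in the application namespace
kubectl apply -f rbac.yaml
```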
Prepare ChaosEngine
- Provide the application info in `spec.appinfo` (the application label can be discovered as shown below)
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to ChaosEngine Concepts
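To fill in `spec.appinfo`, the target application's labels can be inspected first; the `default` namespace below is an assumption matching the sample manifests on this page:

```bash
# List application pods with their labels to pick the value for appinfo.applabel
kubectl get pods -n default --show-labels
```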
Supported Experiment Tunables
| Variables | Description | Type | Notes |
|---|---|---|---|
| NETWORK_INTERFACE | Name of ethernet interface considered for shaping traffic | Mandatory | |
| TARGET_CONTAINER | Name of container which is subjected to network latency | Mandatory | |
| NETWORK_LATENCY | The latency/delay in milliseconds | Optional | Default (60000ms) |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Default (60s) |
| TARGET_POD | Name of the application pod subjected to pod network latency chaos | Optional | If not provided it will select from the app label provided |
| TARGET_IPs | Destination ips for network chaos | Optional | if not provided, it will induce network chaos for all ips/destinations |
| TARGET_HOSTS | Destination hosts for network chaos | Optional | if not provided, it will induce network chaos for all ips/destinations or TARGET_IPs if already defined |
| PODS_AFFECTED_PERC | The Percentage of total pods to target | Optional | Defaults to 0% (corresponds to 1 replica) |
| CONTAINER_RUNTIME | container runtime interface for the cluster | Optional | Defaults to docker, supported values: docker, containerd, crio |
| SOCKET_PATH | Path of the containerd/crio socket file | Optional | Defaults to `/run/containerd/containerd.sock` |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to litmus, only litmus supported |
| TC_IMAGE | Image used for traffic control in linux | Optional | default value is `gaiadocker/iproute2` |
| LIB_IMAGE | Image used to run the netem command | Optional | Defaults to `litmuschaos/go-runner:latest` |
| RAMP_TIME | Period to wait before and after injection of chaos in sec | Optional | |
| INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix in the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
Sample ChaosEngine Manifest
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-network-chaos
  namespace: default
spec:
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  # It can be true/false
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  # ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  monitoring: false
  appinfo:
    appns: "default"
    # FYI, to see the app label, apply kubectl get pods --show-labels
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-latency-sa
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            # Container name where chaos has to be injected
            - name: TARGET_CONTAINER
              value: "nginx"
            # Network interface inside target container
            - name: NETWORK_INTERFACE
              value: "eth0"
            - name: LIB_IMAGE
              value: "litmuschaos/go-runner:latest"
            - name: NETWORK_LATENCY
              value: "60000"
            - name: TOTAL_CHAOS_DURATION
              value: "60" # in seconds
            # Provide the name of the container runtime
            # It supports docker, containerd, crio
            # Defaults to docker
            - name: CONTAINER_RUNTIME
              value: "docker"
            # Provide the socket file path
            # Applicable only for containerd and crio runtime
            - name: SOCKET_PATH
              value: "/run/containerd/containerd.sock"
```
Create the ChaosEngine Resource
- Create the ChaosEngine resource from the manifest prepared in the previous step to trigger the chaos: `kubectl apply -f chaosengine.yml` (a few commands to follow its progress are shown below)
- If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
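Once applied, the experiment's progress can be followed on the cluster; the resource name and namespace below are taken from the sample manifest above:

```bash
# Check the overall engine status and any recorded chaos events
kubectl describe chaosengine nginx-network-chaos -n default

# Watch the chaos runner/experiment pods come up in the application namespace
kubectl get pods -n default -w
```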
Watch Chaos progress
- Observe the injected network latency by pinging the affected pod from one of the cluster nodes: `ping <pod_ip_address>`
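The pod IP needed for the ping can be looked up first; the namespace and label selector below are assumptions taken from the sample manifest:

```bash
# Find the IP address of the target application pod
kubectl get pods -n default -l app=nginx -o wide

# Ping it from a cluster node; the round-trip time should rise while chaos is active
ping <pod_ip_address>
```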
Check Chaos Experiment Result
- Check whether the application is resilient to the pod network latency, once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`: `kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
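With the sample manifest above (engine `nginx-network-chaos`, experiment `pod-network-latency`, namespace `default`), that command becomes:

```bash
# Inspect the verdict recorded for this chaos run
kubectl describe chaosresult nginx-network-chaos-pod-network-latency -n default
```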
Application Pod Network Latency Demo
- A sample recording of this experiment execution is provided here.