Chore [Docs]: Http Chaos Latency experiment docs (#3619)

* Added docs for http chaos latency

Signed-off-by: avaakash <as86414@gmail.com>

* Fixed node label in common tunables; Added http-latency to sidebar

Signed-off-by: avaakash <as86414@gmail.com>

* Added http-chaos experiment image; Fixed issues in doc

Signed-off-by: avaakash <as86414@gmail.com>

* Removed target_host; Improved experiment description

Signed-off-by: avaakash <as86414@gmail.com>

* Made latency env as mandatory

Signed-off-by: avaakash <as86414@gmail.com>

* Added entry in contents.md

Signed-off-by: avaakash <as86414@gmail.com>

* Update mkdocs/docs/experiments/categories/pods/pod-http-latency.md

Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>

* Added network_interface; Removed target_host

Signed-off-by: avaakash <as86414@gmail.com>

* embedmd run

Signed-off-by: avaakash <as86414@gmail.com>

* Changed TARGET_PORT->TARGET_SERVICE_PORT, LISTEN_PORT->PROXY_PORT

Signed-off-by: avaakash <as86414@gmail.com>

* Update mkdocs/docs/experiments/categories/pods/pod-http-latency.md

Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
Co-authored-by: Udit Gaurav <35391335+uditgaurav@users.noreply.github.com>
Akash Shrivastava 2022-06-14 17:20:04 +05:30 committed by GitHub
parent 17d866cf53
commit 774264db1b
13 changed files with 505 additions and 9 deletions

View File

@@ -96,6 +96,11 @@ Chaos actions that apply to generic Kubernetes resources are classified into thi
<td>Injects Network loss into Application Pod</td>
<td><a href="/litmus/experiments/categories/pods/pod-network-loss">pod-network-loss</a></td>
</tr>
<tr>
<td>Pod HTTP Latency</td>
<td>Injects HTTP latency into Application Pod</td>
<td><a href="/litmus/experiments/categories/pods/pod-http-latency">pod-http-latency</a></td>
</tr>
</table>
#### Node Chaos

View File

@@ -193,7 +193,6 @@ Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/gcp-disk-loss.yaml yaml)
```yaml
## details of the gcp disk
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@@ -228,7 +227,6 @@ Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/chaos-interval.yaml yaml)
```yaml
# defines delay between each successive iteration of the chaos
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@@ -256,5 +254,4 @@ spec:
- name: GCP_PROJECT_ID
value: 'my-project-4513'
```

View File

@@ -203,7 +203,6 @@ Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/gcp-instance.yaml yaml)
```yaml
## details of the gcp instance
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@@ -238,7 +237,6 @@ Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/managed-instance-group.yaml yaml)
```yaml
## scale up and down to maintain the available instance counts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@@ -276,7 +274,6 @@ Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/chaos-interval.yaml yaml)
```yaml
# defines delay between each successive iteration of the chaos
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@@ -304,5 +301,4 @@ spec:
- name: GCP_PROJECT_ID
value: 'my-project-4513'
```

View File

@@ -180,7 +180,7 @@ It defines the target application pod selection from a specific node. It is help
Use the following example to tune this:
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/common/default-app-health-check.yaml yaml)
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/common/node-label-filter.yaml yaml)
```yaml
## node label to filter target pods
apiVersion: litmuschaos.io/v1alpha1

View File

@@ -0,0 +1,376 @@
## Introduction
- It injects HTTP response latency into the service whose port is provided as `TARGET_SERVICE_PORT`, by starting a proxy server and redirecting the traffic through it.
- It can be used to test the application's resilience to slow or delayed HTTP responses.
!!! tip "Scenario: Add latency to the HTTP request"
![Pod HTTP Latency](../../images/pod-http.png)
## Uses
??? info "View the uses of the experiment"
coming soon
## Prerequisites
??? info "Verify the prerequisites"
- Ensure that Kubernetes Version > 1.17
- Ensure that the Litmus Chaos Operator is running by executing <code>kubectl get pods</code> in the operator namespace (typically, <code>litmus</code>). If not, install from <a href="https://v1-docs.litmuschaos.io/docs/getstarted/#install-litmus">here</a>
- Ensure that the <code>pod-http-latency</code> experiment resource is available in the cluster by executing <code>kubectl get chaosexperiments</code> in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-http-latency/experiment.yaml">here</a>
## Default Validations
??? info "View the default validations"
The application pods should be in running state before and after chaos injection.
## Minimal RBAC configuration example (optional)
!!! tip "NOTE"
If you are using this experiment as part of a litmus workflow scheduled, constructed & executed from chaos-center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
??? note "View the Minimal RBAC permissions"
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-http-latency/rbac.yaml yaml)
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-http-latency-sa
namespace: default
labels:
name: pod-http-latency-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-http-latency-sa
namespace: default
labels:
name: pod-http-latency-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container (via pods/exec)
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if parent is any of {deployment, statefulset, daemonset})
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod (if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-http-latency-sa
namespace: default
labels:
name: pod-http-latency-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-http-latency-sa
subjects:
- kind: ServiceAccount
name: pod-http-latency-sa
namespace: default
```
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
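For reference, here is a minimal sketch of a ChaosEngine that references this service account and sets only the mandatory tunables of the experiment (illustrative values, assuming an nginx deployment in the `default` namespace):
```yaml
## minimal engine: inject 2000ms of http latency on port 80 of the target service
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-http-latency-sa
  experiments:
    - name: pod-http-latency
      spec:
        components:
          env:
            # port of the service to target
            - name: TARGET_SERVICE_PORT
              value: "80"
            # latency (in ms) added to the http responses
            - name: LATENCY
              value: "2000"
```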
## Experiment tunables
??? info "check the experiment tunables"
<h2>Mandatory Fields</h2>
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Notes </th>
</tr>
<tr>
<td> TARGET_SERVICE_PORT </td>
<td> Port of the service to target</td>
<td>Defaults to port 80 </td>
</tr>
<tr>
<td> LATENCY </td>
<td> Latency value in ms to be added to requests</td>
<td> Defaults to 2000 </td>
</tr>
</table>
<h2>Optional Fields</h2>
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Notes </th>
</tr>
<tr>
<td> PROXY_PORT </td>
<td> Port where the proxy will be listening for requests</td>
<td> Defaults to 20000 </td>
</tr>
<tr>
<td> NETWORK_INTERFACE </td>
<td> Network interface to be used for the proxy</td>
<td> Defaults to `eth0` </td>
</tr>
<tr>
<td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td>
<td> Defaults to docker, supported values: docker, containerd and crio for litmus and only docker for pumba LIB </td>
</tr>
<tr>
<td> SOCKET_PATH </td>
<td> Path of the containerd/crio/docker socket file </td>
<td> Defaults to `/var/run/docker.sock` </td>
</tr>
<tr>
<td> TOTAL_CHAOS_DURATION </td>
<td> The duration of chaos injection (seconds) </td>
<td> Default (60s) </td>
</tr>
<tr>
<td> TARGET_PODS </td>
<td> Comma-separated list of application pod names subjected to pod http latency chaos</td>
<td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr>
<tr>
<td> LIB_IMAGE </td>
<td> Image used by the helper pod to inject the http chaos </td>
<td> Defaults to `litmuschaos/go-runner:latest` </td>
</tr>
<tr>
<td> RAMP_TIME </td>
<td> Period to wait before and after injection of chaos in sec </td>
<td> </td>
</tr>
<tr>
<td> SEQUENCE </td>
<td> It defines sequence of chaos execution for multiple target pods </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr>
</table>
## Experiment Examples
### Common and Pod specific tunables
Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) and [pod-specific tunables](common-tunables-for-pod-experiments.md) to tune the common tunables for all experiments and the pod-specific tunables.
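For instance, the common `TOTAL_CHAOS_DURATION` and `PODS_AFFECTED_PERC` tunables can be combined with the mandatory fields of this experiment as in the sketch below (illustrative values only; refer to the linked pages for the authoritative examples):
```yaml
## run the latency chaos for 120s against 50 percent of the matching pods
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-http-latency-sa
  experiments:
    - name: pod-http-latency
      spec:
        components:
          env:
            # total duration of the chaos injection (seconds)
            - name: TOTAL_CHAOS_DURATION
              value: "120"
            # percentage of total pods to target
            - name: PODS_AFFECTED_PERC
              value: "50"
            # mandatory fields for this experiment
            - name: TARGET_SERVICE_PORT
              value: "80"
            - name: LATENCY
              value: "2000"
```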
### Target Port
It defines the port of the service that is being targeted. It can be tuned via `TARGET_SERVICE_PORT` ENV.
Use the following example to tune this:
[embedmd]:# (pod-http-latency/target-port.yaml yaml)
```yaml
## provide the target port of the service
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"
```
### Listen Port
It defines the listen port for the proxy server. It can be tuned via `PROXY_PORT` ENV.
Use the following example to tune this:
[embedmd]:# (pod-http-latency/listen-port.yaml yaml)
```yaml
## provide the listen port for proxy
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the listen port for proxy
- name: PROXY_PORT
value: '8080'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"
```
### Latency
It defines the latency value to be added to the http request. It can be tuned via `LATENCY` ENV.
Use the following example to tune this:
[embedmd]:# (pod-http-latency/latency.yaml yaml)
```yaml
## provide the latency value
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the latency value
- name: LATENCY
value: '2000'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"
```
### Network Interface
It defines the network interface to be used for the proxy. It can be tuned via `NETWORK_INTERFACE` ENV.
Use the following example to tune this:
[embedmd]:# (pod-http-latency/network-interface.yaml yaml)
```yaml
## provide the network interface for proxy
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the network interface for proxy
- name: NETWORK_INTERFACE
value: "eth0"
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: '80'
```
### Container Runtime Socket Path
It defines the `CONTAINER_RUNTIME` and `SOCKET_PATH` ENVs to set the container runtime and the socket file path.
- `CONTAINER_RUNTIME`: It supports `docker`, `containerd`, and `crio` runtimes. The default value is `docker`.
- `SOCKET_PATH`: It contains the path of the docker socket file by default (`/var/run/docker.sock`). For other runtimes, provide the appropriate socket path.
Use the following example to tune this:
[embedmd]:# (pod-http-latency/container-runtime-and-socket-path.yaml yaml)
```yaml
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# runtime for the container
# supports docker, containerd, crio
- name: CONTAINER_RUNTIME
value: 'docker'
# path of the socket file
- name: SOCKET_PATH
value: '/var/run/docker.sock'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"
```
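Finally, a combined sketch that puts the experiment-specific tunables described above into a single engine (illustrative values; adjust the runtime and socket path to match your cluster):
```yaml
## combined example: latency, proxy port, network interface, runtime and socket path
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-http-latency-sa
  experiments:
    - name: pod-http-latency
      spec:
        components:
          env:
            # latency (in ms) added to the http responses
            - name: LATENCY
              value: "2000"
            # port on which the proxy listens for requests
            - name: PROXY_PORT
              value: "20000"
            # network interface used by the proxy
            - name: NETWORK_INTERFACE
              value: "eth0"
            # container runtime and its socket file path
            - name: CONTAINER_RUNTIME
              value: "docker"
            - name: SOCKET_PATH
              value: "/var/run/docker.sock"
            # port of the service to target
            - name: TARGET_SERVICE_PORT
              value: "80"
```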

View File

@@ -0,0 +1,28 @@
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# runtime for the container
# supports docker, containerd, crio
- name: CONTAINER_RUNTIME
value: 'docker'
# path of the socket file
- name: SOCKET_PATH
value: '/var/run/docker.sock'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"

View File

@@ -0,0 +1,24 @@
## provide the latency value
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the latency value
- name: LATENCY
value: '2000'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"

View File

@@ -0,0 +1,24 @@
## provide the listen port for proxy
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the listen port for proxy
- name: PROXY_PORT
value: '8080'
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"

View File

@@ -0,0 +1,24 @@
## provide the network interface for proxy
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the network interface for proxy
- name: NETWORK_INTERFACE
value: "eth0"
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: '80'

View File

@@ -0,0 +1,21 @@
## provide the target port of the service
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: engine-nginx
spec:
engineState: "active"
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-http-latency-sa
experiments:
- name: pod-http-latency
spec:
components:
env:
# provide the target port of the service
- name: TARGET_SERVICE_PORT
value: "80"

View File

@@ -16,7 +16,7 @@ spec:
spec:
components:
# secret name for the experiment image, if using private registry
imagePullSecrets:
experimentImagePullSecrets:
- name: regcred

Binary file not shown.


View File

@@ -99,6 +99,7 @@ nav:
- Pod Network Latency: experiments/categories/pods/pod-network-latency.md
- Pod Network Loss: experiments/categories/pods/pod-network-loss.md
- Pod Network Partition: experiments/categories/pods/pod-network-partition.md
- Pod HTTP Latency: experiments/categories/pods/pod-http-latency.md
- Node Chaos:
- Docker Service Kill: experiments/categories/nodes/docker-service-kill.md
- Kubelet Service Kill: experiments/categories/nodes/kubelet-service-kill.md