Migrated all versions. (#3)

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>
This commit is contained in:
VEDANT SHROTRIA 2020-12-21 17:03:46 +05:30 committed by GitHub
parent e67110a933
commit cb04a13e48
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
608 changed files with 40404 additions and 28608 deletions

View File

@ -1,80 +0,0 @@
---
id: version-1.0.0-chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
------
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new chaos charts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
Litmus chaos hub is a place where the chaos engineering community members publish their chaos experiments. A set of related chaos experiments are bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. Following chaos experiments are supported under Generic Chaos Chart
| Experiment name | Description | User guide link |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| Container Kill | Kill one container in the application pod | [container-kill](container-kill.md)|
| Pod Delete | Fail the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Experiment to inject network latency to the POD | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Experiment to inject network loss to the POD | [pod-network-loss](pod-network-loss.md) |
| CPU Hog | Exhaust CPU resources on the Kubernetes Node | [cpu-hog](cpu-hog.md) |
| Disk Fill | Fillup Ephemeral Storage of a Resource | [disk-fill](disk-fill.md) |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md)|
| Node Drain| Drain the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consume CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Network Corruption | Inject Network Packet Corruption Into Application Pod |[pod-network-corruption](pod-network-corruption.md) |
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the chaos induced found a weakness in a given application. The application specific chaos experiments are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of the applications. These negative can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that the users of the application can use them in staging/pre-production/production environments to check the resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (Production). Hence, running the same chaos tests in the user's environment will help determine the weaknesses of the deployment and fixing such weaknesses leads to increased resilience.
Following Application Chaos experiments are available on ChaosHub
| Application | Description | Chaos Experiments |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br>[openebs-target-container-failure](openebs-target-container-failure.md)<br>[openebs-target-network-delay](openebs-target-network-delay.md)<br>[openebs-target-network-loss](openebs-target-network-loss.md) |
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md)|
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Management of platform resources vary significantly from each other, Chaos Charts may be maintained separately for each platform (For example, AWS, GCP, Azure, etc)
Following Platform Chaos experiments are available on ChaosHub
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |

View File

@ -1,42 +0,0 @@
---
id: version-1.0.0-plugins
title: Using other chaos libraries as plugins
sidebar_label: Plugins
original_id: plugins
---
------
Litmus provides a way to use any chaos library or a tool to inject chaos. The chaos tool to be compatible with Litmus should satisfy the following requirements:
- Should be available as a Docker Image
- Should take configuration through a `config-map`
The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib
Litmus project has integration into the following chaos-libraries.
| Chaos Library | Logo | Experiments covered |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| <a href="https://github.com/litmuschaos/litmus-ansible" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50"> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog` |
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"> | PowerfulSeal provides chaos capabilities for `pod-kill` |
| | | |
Usage of plugins is a configuration parameter inside the chaos experiment.
> Add an example snippet here.
<br>
<br>
<hr>
<br>
<br>

View File

@ -1,80 +0,0 @@
---
id: version-1.1.0-chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
------
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new chaos charts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
Litmus chaos hub is a place where the chaos engineering community members publish their chaos experiments. A set of related chaos experiments are bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. Following chaos experiments are supported under Generic Chaos Chart
| Experiment name | Description | User guide link |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| Container Kill | Kill one container in the application pod | [container-kill](container-kill.md)|
| Pod Delete | Fail the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Experiment to inject network latency to the POD | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Experiment to inject network loss to the POD | [pod-network-loss](pod-network-loss.md) |
| Node CPU Hog | Exhaust CPU resources on the Kubernetes Node | [node-cpu-hog](node-cpu-hog.md) |
| Disk Fill | Fillup Ephemeral Storage of a Resource | [disk-fill](disk-fill.md) |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md)|
| Node Drain| Drain the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consume CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Network Corruption | Inject Network Packet Corruption Into Application Pod |[pod-network-corruption](pod-network-corruption.md) |
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the chaos induced found a weakness in a given application. The application specific chaos experiments are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of the applications. These negative can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that the users of the application can use them in staging/pre-production/production environments to check the resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (Production). Hence, running the same chaos tests in the user's environment will help determine the weaknesses of the deployment and fixing such weaknesses leads to increased resilience.
Following Application Chaos experiments are available on ChaosHub
| Application | Description | Chaos Experiments |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br>[openebs-target-container-failure](openebs-target-container-failure.md)<br>[openebs-target-network-delay](openebs-target-network-delay.md)<br>[openebs-target-network-loss](openebs-target-network-loss.md) |
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md)|
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Management of platform resources vary significantly from each other, Chaos Charts may be maintained separately for each platform (For example, AWS, GCP, Azure, etc)
Following Platform Chaos experiments are available on ChaosHub
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |

View File

@ -1,131 +0,0 @@
---
id: version-1.2.0-admin-mode
title: Administrator Mode
sidebar_label: Administrator Mode
original_id: admin-mode
---
------
### What is Adminstator Mode?
Admin mode is one of the ways the chaos orchestration is set up in Litmus, wherein all chaos resources (i.e., install time resources like the operator, chaosexperiment CRs, chaosServiceAccount/rbac and runtime resources like chaosengine, chaos-runner, experiment jobs & chaosresults) are setup in a single admin namespace (typically, litmus). In other words, centralized administration of chaos.
This feature is aimed at making the SRE/Cluster Admins life easier by doing away with setting up chaos pre-requisites on a per namespace basis (which may be more relevant in an autonomous/self-service cluster sharing model in dev environments).
This mode typically needs a "wider" & "stronger" ClusterRole, albeit one that is still just a superset of the individual experiment permissions. In this mode, the applications in their respective namespaces are subjected to chaos while the chaos job runs elsewhere, i.e., admin namespace.
### How to use Adminstator Mode?
In order to use Admin Mode, you just have to create a ServiceAccount in the *admin* or so called *chaos* namespace (`litmus` itself can be used), which is tied to a ClusterRole that has the permissions to perform operations on Kubernetes resources involved in the selected experiments across namespaces.
Provide this ServiceAccount in ChaosEngine's .spec.chaosServiceAccount.
### Example
#### Prepare RBAC Manifest
Here is an RBAC definition, which in essence is a superset of individual experiments RBAC that has the permissions to run all chaos experiments across different namespaces.
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-admin-rbac.yaml)
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-admin
namespace: litmus
labels:
name: litmus-admin
---
# Source: openebs/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: litmus-admin
labels:
name: litmus-admin
rules:
- apiGroups: ["","apps","batch","extensions","litmuschaos.io","openebs.io","storage.k8s.io"]
resources: ["chaosengines","chaosexperiments","chaosresults","cstorpools","cstorvolumereplicas","configmaps","secrets","pods","pods/exec","pods/log","pods/eviction","jobs","replicasets","deployments","daemonsets","statefulsets","persistentvolumeclaims","persistentvolumes","storageclasses","services","events"]
verbs: ["create","delete","get","list","patch","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: litmus-admin
labels:
name: litmus-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-admin
subjects:
- kind: ServiceAccount
name: litmus-admin
namespace: litmus
```
#### Prepare ChaosEngine
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus #Chaos Resources Namespace
spec:
appinfo:
appns: 'default' #Application Namespace
applabel: 'app=nginx'
appkind: 'deployment'
# It can be true/false
annotationCheck: 'true'
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
chaosServiceAccount: litmus-admin
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
experiments:
- name: pod-delete
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '10'
# pod failures without '--force' & default terminationGracePeriodSeconds
- name: FORCE
value: 'false'
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos Engine
- Describe Chaos Engine for chaos steps.
`kubectl describe chaosengine nginx-chaos -n litmus`
### Watch Chaos progress
- View pod terminations & recovery by setting up a watch on the pods in the application namespace
`watch -n 1 kubectl get pods -n default`
### Check Chaos Experiment Result
- Check whether the application is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-pod-delete -n litmus`

View File

@ -1,42 +0,0 @@
---
id: version-1.6.0-plugins
title: Using other chaos libraries as plugins
sidebar_label: Plugins
original_id: plugins
---
------
Litmus provides a way to use any chaos library or a tool to inject chaos. The chaos tool to be compatible with Litmus should satisfy the following requirements:
- Should be available as a Docker Image
- Should take configuration through a `config-map`
The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib
Litmus project has integration into the following chaos-libraries.
| Chaos Library | Logo | Experiments covered |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| <a href="https://github.com/litmuschaos/litmus" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50"> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog` |
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"> | PowerfulSeal provides chaos capabilities for `pod-kill` |
| | | |
Usage of plugins is a configuration parameter inside the chaos experiment.
> Add an example snippet here.
<br>
<br>
<hr>
<br>
<br>

View File

@ -1,92 +0,0 @@
---
id: version-1.8.0-chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
------
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new chaos charts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
Litmus chaos hub is a place where the chaos engineering community members publish their chaos experiments. A set of related chaos experiments are bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. Following chaos experiments are supported under Generic Chaos Chart
| Experiment name | Description | User guide link |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| Container Kill | Kills the container in the application pod | [container-kill](container-kill.md)|
| Pod Delete | Deletes the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Injects network latency into the pod | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Injects network loss into the pod | [pod-network-loss](pod-network-loss.md) |
| Node CPU Hog | Exhaust CPU resources on the Kubernetes Node | [node-cpu-hog](node-cpu-hog.md) |
| Node Memory Hog | Exhaust Memory resources on the Kubernetes Node | [node-memory-hog](node-memory-hog.md) |
| Disk Fill | Fillup Ephemeral Storage of a Resource | [disk-fill](disk-fill.md) |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md)|
| Node Drain| Drains the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consumes CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Memory Hog | Consumes Memory resources on the application container | [pod-memory-hog](pod-memory-hog.md) |
| Pod Network Corruption | Injects Network Packet Corruption into Application Pod |[pod-network-corruption](pod-network-corruption.md) |
| Kubelet Service Kill | Kills the kubelet service on the application node |[kubelet-service-kill](kubelet-service-kill.md) |
| Docker Service Kill | Kills the docker service on the application node |[docker-service-kill](docker-service-kill.md) |
| Node Taint| Taints the node where application pod is scheduled | [node-taint](node-taint.md) |
| Pod Autoscaler| Scales the application replicas and test the node autoscaling on cluster | [pod-autoscaler](pod-autoscaler.md) |
| Pod Network Duplication | Injects Network Packet Duplication into Application Pod |[pod-network-duplication](pod-network-duplication.md) |
| Pod IO Stress | Injects IO stress resources on the application container | [pod-io-stress](pod-io-stress.md) |
| Node IO stress| Injects IO stress resources on the application node |[node-io-stress](node-io-stress.md) |
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the chaos induced found a weakness in a given application. The application specific chaos experiments are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of the applications. These negative can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that the users of the application can use them in staging/pre-production/production environments to check the resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (Production). Hence, running the same chaos tests in the user's environment will help determine the weaknesses of the deployment and fixing such weaknesses leads to increased resilience.
Following Application Chaos experiments are available on ChaosHub
| Application | Description | Chaos Experiments |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br>[openebs-target-container-failure](openebs-target-container-failure.md)<br>[openebs-target-network-delay](openebs-target-network-delay.md)<br>[openebs-target-network-loss](openebs-target-network-loss.md) <br>[openebs-control-plane-chaos](openebs-control-plane-chaos.md) <br>[openebs-nfs-provisioner-kill](openebs-nfs-provisioner-kill.md) <br>[openebs-target-network-loss](openebs-target-network-loss.md) <br>[openebs-pool-disk-loss](openebs-pool-disk-loss.md) <br>[openebs-pool-network-loss](openebs-pool-network-loss.md) <br>[openebs-pool-network-delay](openebs-pool-network-delay.md)|
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md)|
| Cassandra | Cassandra is an opensource distributed database | [cassandra-pod-delete](cassandra-pod-delete.md)|
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Management of platform resources vary significantly from each other, Chaos Charts may be maintained separately for each platform (For example, AWS, GCP, Azure, etc)
Following Platform Chaos experiments are available on ChaosHub
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |

View File

@ -1,42 +0,0 @@
---
id: version-1.8.0-plugins
title: Using other chaos libraries as plugins
sidebar_label: Plugins
original_id: plugins
---
------
Litmus provides a way to use any chaos library or a tool to inject chaos. The chaos tool to be compatible with Litmus should satisfy the following requirements:
- Should be available as a Docker Image
- Should take configuration through a `config-map`
The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib
Litmus project has integration into the following chaos-libraries.
| Chaos Library | Logo | Experiments covered |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| <a href="https://github.com/litmuschaos/litmus" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50"> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog`, `network-chaos`, `disk-chaos`, `memory-hog`|
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"> | PowerfulSeal provides chaos capabilities for `pod-kill` |
| | | |
Usage of plugins is a configuration parameter inside the chaos experiment.
> Add an example snippet here.
<br>
<br>
<hr>
<br>
<br>

View File

@ -1,92 +0,0 @@
---
id: version-1.9.0-chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
------
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new ChaosCharts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
Litmus chaos hub is a place where the Chaos Engineering community members publish their chaos experiments. A set of related chaos experiments are bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. Following chaos experiments are supported under Generic Chaos Chart
| Experiment name | Description | User guide link |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| Container Kill | Kills the container in the application pod | [container-kill](container-kill.md)|
| Pod Delete | Deletes the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Injects network latency into the pod | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Injects network loss into the pod | [pod-network-loss](pod-network-loss.md) |
| Node CPU Hog | Exhaust CPU resources on the Kubernetes Node | [node-cpu-hog](node-cpu-hog.md) |
| Node Memory Hog | Exhaust Memory resources on the Kubernetes Node | [node-memory-hog](node-memory-hog.md) |
| Disk Fill | Fillup Ephemeral Storage of a Resource | [disk-fill](disk-fill.md) |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md)|
| Node Drain| Drains the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consumes CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Memory Hog | Consumes Memory resources on the application container | [pod-memory-hog](pod-memory-hog.md) |
| Pod Network Corruption | Injects Network Packet Corruption into Application Pod |[pod-network-corruption](pod-network-corruption.md) |
| Kubelet Service Kill | Kills the kubelet service on the application node |[kubelet-service-kill](kubelet-service-kill.md) |
| Docker Service Kill | Kills the docker service on the application node |[docker-service-kill](docker-service-kill.md) |
| Node Taint| Taints the node where application pod is scheduled | [node-taint](node-taint.md) |
| Pod Autoscaler| Scales the application replicas and test the node autoscaling on cluster | [pod-autoscaler](pod-autoscaler.md) |
| Pod Network Duplication | Injects Network Packet Duplication into Application Pod |[pod-network-duplication](pod-network-duplication.md) |
| Pod IO Stress | Injects IO stress resources on the application container | [pod-io-stress](pod-io-stress.md) |
| Node IO stress| Injects IO stress resources on the application node |[node-io-stress](node-io-stress.md) |
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the chaos induced found a weakness in a given application. The application specific chaos experiments are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of the applications. These negative can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that the users of the application can use them in staging/pre-production/production environments to check the resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (Production). Hence, running the same chaos tests in the user's environment will help determine the weaknesses of the deployment and fixing such weaknesses leads to increased resilience.
Following Application Chaos experiments are available on ChaosHub
| Application | Description | Chaos Experiments |
| ----------- | ----------------------------------------- | --------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br>[openebs-target-container-failure](openebs-target-container-failure.md)<br>[openebs-target-network-delay](openebs-target-network-delay.md)<br>[openebs-target-network-loss](openebs-target-network-loss.md) <br>[openebs-control-plane-chaos](openebs-control-plane-chaos.md) <br>[openebs-nfs-provisioner-kill](openebs-nfs-provisioner-kill.md) <br>[openebs-target-network-loss](openebs-target-network-loss.md) <br>[openebs-pool-disk-loss](openebs-pool-disk-loss.md) <br>[openebs-pool-network-loss](openebs-pool-network-loss.md) <br>[openebs-pool-network-delay](openebs-pool-network-delay.md)|
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md)|
| Cassandra | Cassandra is an open-source distributed database | [cassandra-pod-delete](cassandra-pod-delete.md)|
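As a sketch of how one of these application experiments is consumed, a ChaosEngine CR selects the target application by label and references the experiment by name. The namespace, labels, and service-account name below are hypothetical; refer to the linked user guide of each experiment for the exact fields it requires.

```yaml
# Hypothetical ChaosEngine running the kafka-broker-pod-failure experiment.
# Namespace, appinfo labels, and service account are illustrative only.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kafka-chaos
  namespace: kafka             # namespace where the target application runs
spec:
  appinfo:
    appns: kafka
    applabel: "app=cp-kafka"   # label selector for the application under test
    appkind: statefulset
  chaosServiceAccount: kafka-sa
  experiments:
    - name: kafka-broker-pod-failure
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "60"
```

Once applied, the Chaos-Operator picks up this CR and runs the experiment against pods matching the label selector.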
### Platform Chaos
Chaos experiments that inject chaos into the platform resources underlying Kubernetes are classified into this category. Because the management of platform resources varies significantly across providers, Chaos Charts may be maintained separately for each platform (for example, AWS, GCP, Azure).
The following Platform Chaos experiments are available on ChaosHub:
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |


@ -1,59 +0,0 @@
{
"version-1.0.0-docs": {
"Getting Started": [
"version-1.0.0-getstarted",
"version-1.0.0-chaoshub",
"version-1.0.0-plugins",
"version-1.0.0-architecture",
"version-1.0.0-resources",
"version-1.0.0-community",
"version-1.0.0-devguide"
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.0.0-pod-delete",
"version-1.0.0-container-kill",
"version-1.0.0-pod-network-latency",
"version-1.0.0-pod-network-loss",
"version-1.0.0-pod-network-corruption",
"version-1.0.0-pod-cpu-hog",
"version-1.0.0-disk-fill",
"version-1.0.0-disk-loss",
"version-1.0.0-cpu-hog",
"version-1.0.0-node-drain"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.0.0-openebs-target-container-failure",
"version-1.0.0-openebs-target-network-delay",
"version-1.0.0-openebs-target-network-loss",
"version-1.0.0-openebs-target-pod-failure",
"version-1.0.0-openebs-pool-pod-failure",
"version-1.0.0-openebs-pool-container-failure",
"version-1.0.0-openebs-target-network-delay"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.0.0-kafka-broker-pod-failure",
"version-1.0.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.0.0-coredns-pod-delete"
]
}
]
}
}


@ -1,66 +0,0 @@
{
"version-1.1.0-docs": {
"Getting Started": [
"version-1.1.0-getstarted",
"version-1.1.0-chaoshub",
"version-1.1.0-plugins",
"version-1.1.0-architecture",
"version-1.1.0-resources",
"version-1.1.0-community",
"version-1.1.0-devguide"
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.1.0-pod-delete",
"version-1.1.0-container-kill",
"version-1.1.0-pod-network-latency",
"version-1.1.0-pod-network-loss",
"version-1.1.0-pod-network-corruption",
"version-1.1.0-pod-cpu-hog",
"version-1.1.0-disk-fill",
"version-1.1.0-disk-loss",
"version-1.1.0-node-cpu-hog",
"version-1.1.0-node-drain"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.1.0-openebs-target-container-failure",
"version-1.1.0-openebs-target-network-delay",
"version-1.1.0-openebs-target-network-loss",
"version-1.1.0-openebs-target-pod-failure",
"version-1.1.0-openebs-pool-pod-failure",
"version-1.1.0-openebs-pool-container-failure",
"version-1.1.0-openebs-pool-network-delay",
"version-1.1.0-openebs-pool-network-loss",
"version-1.1.0-openebs-control-plane-chaos",
"version-1.1.0-cStor-pool-chaos"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.1.0-kafka-broker-pod-failure",
"version-1.1.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.1.0-coredns-pod-delete"
]
}
],
"Litmus FAQs": [
"version-1.1.0-faq-general",
"version-1.1.0-faq-troubleshooting"
]
}
}


@ -1,74 +0,0 @@
{
"version-1.2.0-docs": {
"Getting Started": [
"version-1.2.0-getstarted",
"version-1.2.0-chaoshub",
"version-1.2.0-plugins",
"version-1.2.0-architecture",
"version-1.2.0-resources",
"version-1.2.0-community",
"version-1.2.0-devguide"
],
"Concepts": [
"version-1.2.0-chaosengine"
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.2.0-pod-delete",
"version-1.2.0-container-kill",
"version-1.2.0-pod-network-latency",
"version-1.2.0-pod-network-loss",
"version-1.2.0-pod-network-corruption",
"version-1.2.0-pod-cpu-hog",
"version-1.2.0-disk-fill",
"version-1.2.0-disk-loss",
"version-1.2.0-node-cpu-hog",
"version-1.2.0-node-memory-hog",
"version-1.2.0-node-drain"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.2.0-openebs-target-container-failure",
"version-1.2.0-openebs-target-network-delay",
"version-1.2.0-openebs-target-network-loss",
"version-1.2.0-openebs-target-pod-failure",
"version-1.2.0-openebs-pool-pod-failure",
"version-1.2.0-openebs-pool-container-failure",
"version-1.2.0-openebs-pool-network-delay",
"version-1.2.0-openebs-pool-network-loss",
"version-1.2.0-openebs-control-plane-chaos",
"version-1.2.0-cStor-pool-chaos",
"version-1.2.0-openebs-pool-disk-loss"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.2.0-kafka-broker-pod-failure",
"version-1.2.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.2.0-coredns-pod-delete"
]
}
],
"Litmus FAQs": [
"version-1.2.0-faq-general",
"version-1.2.0-faq-troubleshooting"
],
"Advanced": [
"version-1.2.0-admin-mode"
]
}
}


@ -1,83 +0,0 @@
{
"version-1.3.0-docs": {
"Getting Started": [
"version-1.3.0-getstarted",
"version-1.3.0-chaoshub",
"version-1.3.0-plugins",
"version-1.3.0-architecture",
"version-1.3.0-resources",
"version-1.3.0-community",
"version-1.3.0-devguide"
],
"Concepts": [
"version-1.3.0-chaosengine"
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.3.0-pod-delete",
"version-1.3.0-container-kill",
"version-1.3.0-pod-network-latency",
"version-1.3.0-pod-network-loss",
"version-1.3.0-pod-network-corruption",
"version-1.3.0-pod-cpu-hog",
"version-1.3.0-pod-memory-hog",
"version-1.3.0-disk-fill",
"version-1.3.0-disk-loss",
"version-1.3.0-node-cpu-hog",
"version-1.3.0-node-memory-hog",
"version-1.3.0-node-drain"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.3.0-openebs-target-container-failure",
"version-1.3.0-openebs-target-network-delay",
"version-1.3.0-openebs-target-network-loss",
"version-1.3.0-openebs-target-pod-failure",
"version-1.3.0-openebs-pool-pod-failure",
"version-1.3.0-openebs-pool-container-failure",
"version-1.3.0-openebs-pool-network-delay",
"version-1.3.0-openebs-pool-network-loss",
"version-1.3.0-openebs-control-plane-chaos",
"version-1.3.0-cStor-pool-chaos",
"version-1.3.0-openebs-pool-disk-loss",
"version-1.3.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.3.0-kafka-broker-pod-failure",
"version-1.3.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.3.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.3.0-cassandra-pod-delete"
]
}
],
"Litmus FAQs": [
"version-1.3.0-faq-general",
"version-1.3.0-faq-troubleshooting"
],
"Advanced": [
"version-1.3.0-admin-mode"
]
}
}


@ -1,96 +0,0 @@
{
"version-1.4.0-docs": {
"Getting Started": [
"version-1.4.0-getstarted",
"version-1.4.0-chaoshub",
"version-1.4.0-plugins",
"version-1.4.0-architecture",
"version-1.4.0-resources",
"version-1.4.0-community",
"version-1.4.0-devguide"
],
"Concepts": [
"version-1.4.0-chaosengine",
"version-1.4.0-chaosschedule"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.4.0-openshift-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.4.0-pod-delete",
"version-1.4.0-container-kill",
"version-1.4.0-pod-network-latency",
"version-1.4.0-pod-network-loss",
"version-1.4.0-pod-network-corruption",
"version-1.4.0-pod-cpu-hog",
"version-1.4.0-pod-memory-hog",
"version-1.4.0-disk-fill",
"version-1.4.0-disk-loss",
"version-1.4.0-node-cpu-hog",
"version-1.4.0-node-memory-hog",
"version-1.4.0-node-drain"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.4.0-openebs-target-container-failure",
"version-1.4.0-openebs-target-network-delay",
"version-1.4.0-openebs-target-network-loss",
"version-1.4.0-openebs-target-pod-failure",
"version-1.4.0-openebs-pool-pod-failure",
"version-1.4.0-openebs-pool-container-failure",
"version-1.4.0-openebs-pool-network-delay",
"version-1.4.0-openebs-pool-network-loss",
"version-1.4.0-openebs-control-plane-chaos",
"version-1.4.0-cStor-pool-chaos",
"version-1.4.0-openebs-pool-disk-loss",
"version-1.4.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.4.0-kafka-broker-pod-failure",
"version-1.4.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.4.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.4.0-cassandra-pod-delete"
]
}
],
"Scheduler (alpha)": [
"version-1.4.0-scheduling"
],
"Litmus FAQs": [
"version-1.4.0-faq-general",
"version-1.4.0-faq-troubleshooting"
],
"Advanced": [
"version-1.4.0-admin-mode"
]
}
}


@ -1,97 +0,0 @@
{
"version-1.5.0-docs": {
"Getting Started": [
"version-1.5.0-getstarted",
"version-1.5.0-chaoshub",
"version-1.5.0-plugins",
"version-1.5.0-architecture",
"version-1.5.0-resources",
"version-1.5.0-community",
"version-1.5.0-devguide"
],
"Concepts": [
"version-1.5.0-chaosengine",
"version-1.5.0-chaosschedule"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.5.0-openshift-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.5.0-pod-delete",
"version-1.5.0-container-kill",
"version-1.5.0-pod-network-latency",
"version-1.5.0-pod-network-loss",
"version-1.5.0-pod-network-corruption",
"version-1.5.0-pod-cpu-hog",
"version-1.5.0-pod-memory-hog",
"version-1.5.0-disk-fill",
"version-1.5.0-disk-loss",
"version-1.5.0-node-cpu-hog",
"version-1.5.0-node-memory-hog",
"version-1.5.0-node-drain",
"version-1.5.0-kubelet-service-kill"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.5.0-openebs-target-container-failure",
"version-1.5.0-openebs-target-network-delay",
"version-1.5.0-openebs-target-network-loss",
"version-1.5.0-openebs-target-pod-failure",
"version-1.5.0-openebs-pool-pod-failure",
"version-1.5.0-openebs-pool-container-failure",
"version-1.5.0-openebs-pool-network-delay",
"version-1.5.0-openebs-pool-network-loss",
"version-1.5.0-openebs-control-plane-chaos",
"version-1.5.0-cStor-pool-chaos",
"version-1.5.0-openebs-pool-disk-loss",
"version-1.5.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.5.0-kafka-broker-pod-failure",
"version-1.5.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.5.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.5.0-cassandra-pod-delete"
]
}
],
"Scheduler (alpha)": [
"version-1.5.0-scheduling"
],
"Litmus FAQs": [
"version-1.5.0-faq-general",
"version-1.5.0-faq-troubleshooting"
],
"Advanced": [
"version-1.5.0-admin-mode"
]
}
}


@ -1,113 +0,0 @@
{
"version-1.6.0-docs": {
"Getting Started": [
"version-1.6.0-getstarted",
"version-1.6.0-chaoshub",
"version-1.6.0-plugins",
"version-1.6.0-architecture",
"version-1.6.0-resources",
"version-1.6.0-community",
"version-1.6.0-devguide"
],
"Litmus Demo": [
"version-1.6.0-litmus-demo"
],
"Concepts": [
"version-1.6.0-chaosengine",
"version-1.6.0-chaosschedule"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.6.0-openshift-litmus"
]
},
{
"type": "subcategory",
"label": "Rancher",
"ids": [
"version-1.6.0-rancher-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.6.0-pod-delete",
"version-1.6.0-container-kill",
"version-1.6.0-pod-network-latency",
"version-1.6.0-pod-network-loss",
"version-1.6.0-pod-network-corruption",
"version-1.6.0-pod-cpu-hog",
"version-1.6.0-pod-memory-hog",
"version-1.6.0-disk-fill",
"version-1.6.0-disk-loss",
"version-1.6.0-node-cpu-hog",
"version-1.6.0-node-memory-hog",
"version-1.6.0-node-drain",
"version-1.6.0-kubelet-service-kill",
"version-1.6.0-pod-network-duplication",
"version-1.6.0-node-taint",
"version-1.6.0-docker-service-kill"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.6.0-openebs-target-container-failure",
"version-1.6.0-openebs-target-network-delay",
"version-1.6.0-openebs-target-network-loss",
"version-1.6.0-openebs-target-pod-failure",
"version-1.6.0-openebs-pool-pod-failure",
"version-1.6.0-openebs-pool-container-failure",
"version-1.6.0-openebs-pool-network-delay",
"version-1.6.0-openebs-pool-network-loss",
"version-1.6.0-openebs-control-plane-chaos",
"version-1.6.0-cStor-pool-chaos",
"version-1.6.0-openebs-pool-disk-loss",
"version-1.6.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.6.0-kafka-broker-pod-failure",
"version-1.6.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.6.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.6.0-cassandra-pod-delete"
]
}
],
"Scheduler": [
"version-1.6.0-scheduling"
],
"Chaos Workflow": [
"version-1.6.0-chaos-workflows"
],
"Litmus FAQs": [
"version-1.6.0-faq-general",
"version-1.6.0-faq-troubleshooting"
],
"Advanced": [
"version-1.6.0-admin-mode"
]
}
}


@ -1,126 +0,0 @@
{
"version-1.7.0-docs": {
"Getting Started": [
"version-1.7.0-getstarted",
"version-1.7.0-chaoshub",
"version-1.7.0-plugins",
"version-1.7.0-architecture",
"version-1.7.0-resources",
"version-1.7.0-community",
"version-1.7.0-devguide"
],
"Litmus Demo": [
"version-1.7.0-litmus-demo"
],
"Concepts": [
"version-1.7.0-chaosengine",
"version-1.7.0-chaosschedule",
"version-1.7.0-litmus-probe"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.7.0-openshift-litmus"
]
},
{
"type": "subcategory",
"label": "Rancher",
"ids": [
"version-1.7.0-rancher-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.7.0-pod-delete",
"version-1.7.0-container-kill",
"version-1.7.0-pod-network-latency",
"version-1.7.0-pod-network-loss",
"version-1.7.0-pod-network-corruption",
"version-1.7.0-pod-cpu-hog",
"version-1.7.0-pod-memory-hog",
"version-1.7.0-disk-fill",
"version-1.7.0-disk-loss",
"version-1.7.0-node-cpu-hog",
"version-1.7.0-node-memory-hog",
"version-1.7.0-node-drain",
"version-1.7.0-kubelet-service-kill",
"version-1.7.0-pod-network-duplication",
"version-1.7.0-node-taint",
"version-1.7.0-docker-service-kill",
"version-1.7.0-pod-autoscaler"
]
},
{
"type": "subcategory",
"label": "Kubernetes",
"ids": [
"version-1.7.0-Kubernetes-Chaostoolkit-Application",
"version-1.7.0-Kubernetes-Chaostoolkit-Service",
"version-1.7.0-Kubernetes-Chaostoolkit-Cluster-Kiam",
"version-1.7.0-Kubernetes-Chaostoolkit-AWS"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.7.0-openebs-target-container-failure",
"version-1.7.0-openebs-target-network-delay",
"version-1.7.0-openebs-target-network-loss",
"version-1.7.0-openebs-target-pod-failure",
"version-1.7.0-openebs-pool-pod-failure",
"version-1.7.0-openebs-pool-container-failure",
"version-1.7.0-openebs-pool-network-delay",
"version-1.7.0-openebs-pool-network-loss",
"version-1.7.0-openebs-control-plane-chaos",
"version-1.7.0-cStor-pool-chaos",
"version-1.7.0-openebs-pool-disk-loss",
"version-1.7.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.7.0-kafka-broker-pod-failure",
"version-1.7.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.7.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.7.0-cassandra-pod-delete"
]
}
],
"Scheduler": [
"version-1.7.0-scheduling"
],
"Chaos Workflow": [
"version-1.7.0-chaos-workflows"
],
"Litmus FAQs": [
"version-1.7.0-faq-general",
"version-1.7.0-faq-troubleshooting"
],
"Advanced": [
"version-1.7.0-admin-mode",
"version-1.7.0-namespaced-mode"
]
}
}


@ -1,128 +0,0 @@
{
"version-1.8.0-docs": {
"Getting Started": [
"version-1.8.0-getstarted",
"version-1.8.0-chaoshub",
"version-1.8.0-plugins",
"version-1.8.0-architecture",
"version-1.8.0-resources",
"version-1.8.0-community",
"version-1.8.0-devguide"
],
"Litmus Demo": [
"version-1.8.0-litmus-demo"
],
"Concepts": [
"version-1.8.0-chaosengine",
"version-1.8.0-chaosschedule",
"version-1.8.0-litmus-probe"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.8.0-openshift-litmus"
]
},
{
"type": "subcategory",
"label": "Rancher",
"ids": [
"version-1.8.0-rancher-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.8.0-pod-delete",
"version-1.8.0-container-kill",
"version-1.8.0-pod-network-latency",
"version-1.8.0-pod-network-loss",
"version-1.8.0-pod-network-corruption",
"version-1.8.0-pod-cpu-hog",
"version-1.8.0-pod-memory-hog",
"version-1.8.0-disk-fill",
"version-1.8.0-disk-loss",
"version-1.8.0-node-cpu-hog",
"version-1.8.0-node-memory-hog",
"version-1.8.0-node-drain",
"version-1.8.0-kubelet-service-kill",
"version-1.8.0-pod-network-duplication",
"version-1.8.0-node-taint",
"version-1.8.0-docker-service-kill",
"version-1.8.0-pod-autoscaler",
"version-1.8.0-Kubernetes-Chaostoolkit-Application",
"version-1.8.0-Kubernetes-Chaostoolkit-Service",
"version-1.8.0-Kubernetes-Chaostoolkit-Cluster-Kiam",
"version-1.8.0-pod-io-stress",
"version-1.8.0-node-io-stress"
]
},
{
"type": "subcategory",
"label": "Kube-AWS",
"ids": [
"version-1.8.0-Kubernetes-Chaostoolkit-AWS"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.8.0-openebs-target-container-failure",
"version-1.8.0-openebs-target-network-delay",
"version-1.8.0-openebs-target-network-loss",
"version-1.8.0-openebs-target-pod-failure",
"version-1.8.0-openebs-pool-pod-failure",
"version-1.8.0-openebs-pool-container-failure",
"version-1.8.0-openebs-pool-network-delay",
"version-1.8.0-openebs-pool-network-loss",
"version-1.8.0-openebs-control-plane-chaos",
"version-1.8.0-cStor-pool-chaos",
"version-1.8.0-openebs-pool-disk-loss",
"version-1.8.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.8.0-kafka-broker-pod-failure",
"version-1.8.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.8.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.8.0-cassandra-pod-delete"
]
}
],
"Scheduler": [
"version-1.8.0-scheduling"
],
"Chaos Workflow": [
"version-1.8.0-chaos-workflows"
],
"Litmus FAQs": [
"version-1.8.0-faq-general",
"version-1.8.0-faq-troubleshooting"
],
"Advanced": [
"version-1.8.0-admin-mode",
"version-1.8.0-namespaced-mode"
]
}
}


@ -1,130 +0,0 @@
{
"version-1.9.0-docs": {
"Getting Started": [
"version-1.9.0-getstarted",
"version-1.9.0-chaoshub",
"version-1.9.0-plugins",
"version-1.9.0-architecture",
"version-1.9.0-resources",
"version-1.9.0-community",
"version-1.9.0-devguide"
],
"Litmus Demo": [
"version-1.9.0-litmus-demo"
],
"Concepts": [
"version-1.9.0-chaosengine",
"version-1.9.0-chaosexperiment",
"version-1.9.0-chaosschedule",
"version-1.9.0-chaosresult",
"version-1.9.0-litmus-probe"
],
"Platforms": [
{
"type": "subcategory",
"label": "OpenShift",
"ids": [
"version-1.9.0-openshift-litmus"
]
},
{
"type": "subcategory",
"label": "Rancher",
"ids": [
"version-1.9.0-rancher-litmus"
]
}
],
"Experiments": [
{
"type": "subcategory",
"label": "Generic",
"ids": [
"version-1.9.0-pod-delete",
"version-1.9.0-container-kill",
"version-1.9.0-pod-network-latency",
"version-1.9.0-pod-network-loss",
"version-1.9.0-pod-network-corruption",
"version-1.9.0-pod-cpu-hog",
"version-1.9.0-pod-memory-hog",
"version-1.9.0-disk-fill",
"version-1.9.0-disk-loss",
"version-1.9.0-node-cpu-hog",
"version-1.9.0-node-memory-hog",
"version-1.9.0-node-drain",
"version-1.9.0-kubelet-service-kill",
"version-1.9.0-pod-network-duplication",
"version-1.9.0-node-taint",
"version-1.9.0-docker-service-kill",
"version-1.9.0-pod-autoscaler",
"version-1.9.0-Kubernetes-Chaostoolkit-Application",
"version-1.9.0-Kubernetes-Chaostoolkit-Service",
"version-1.9.0-Kubernetes-Chaostoolkit-Cluster-Kiam",
"version-1.9.0-pod-io-stress",
"version-1.9.0-node-io-stress"
]
},
{
"type": "subcategory",
"label": "Kube-AWS",
"ids": [
"version-1.9.0-Kubernetes-Chaostoolkit-AWS"
]
},
{
"type": "subcategory",
"label": "OpenEBS",
"ids": [
"version-1.9.0-openebs-target-container-failure",
"version-1.9.0-openebs-target-network-delay",
"version-1.9.0-openebs-target-network-loss",
"version-1.9.0-openebs-target-pod-failure",
"version-1.9.0-openebs-pool-pod-failure",
"version-1.9.0-openebs-pool-container-failure",
"version-1.9.0-openebs-pool-network-delay",
"version-1.9.0-openebs-pool-network-loss",
"version-1.9.0-openebs-control-plane-chaos",
"version-1.9.0-cStor-pool-chaos",
"version-1.9.0-openebs-pool-disk-loss",
"version-1.9.0-openebs-nfs-provisioner-kill"
]
},
{
"type": "subcategory",
"label": "Kafka",
"ids": [
"version-1.9.0-kafka-broker-pod-failure",
"version-1.9.0-kafka-broker-disk-failure"
]
},
{
"type": "subcategory",
"label": "CoreDns",
"ids": [
"version-1.9.0-coredns-pod-delete"
]
},
{
"type": "subcategory",
"label": "Cassandra",
"ids": [
"version-1.9.0-cassandra-pod-delete"
]
}
],
"Scheduler": [
"version-1.9.0-scheduling"
],
"Chaos Workflow": [
"version-1.9.0-chaos-workflows"
],
"Litmus FAQs": [
"version-1.9.0-faq-general",
"version-1.9.0-faq-troubleshooting"
],
"Advanced": [
"version-1.9.0-admin-mode",
"version-1.9.0-namespaced-mode"
]
}
}


@ -1,6 +1,6 @@
---
id: faq-troubleshooting
title: Troubleshooting Litmus
title: "Troubleshooting Litmus"
sidebar_label: Troubleshooting
---


@ -1,7 +1,6 @@
---
id: getstarted
title: Getting Started with Litmus
slug: "/"
sidebar_label: Introduction
---


@ -42,7 +42,7 @@ module.exports = {
tagline: "A website for testing",
url: "https://docs.litmuschaos.io",
baseUrl: "/",
onBrokenLinks: "ignore",
onBrokenLinks: "throw",
favicon: "img/favicon.ico",
organizationName: "litmuschaos",
projectName: "litmus",
@ -72,11 +72,11 @@ module.exports = {
},
...versions.slice(1).map((version) => ({
label: version,
to: `docs/${version}/`,
to: `docs/${version}/getstarted`,
})),
{
label: "Master/Unreleased",
to: "docs/next/",
to: "docs/next/getstarted",
},
],
},
@ -121,9 +121,9 @@ module.exports = {
"@docusaurus/preset-classic",
{
docs: {
routeBasePath: "docs",
sidebarPath: require.resolve("./sidebars.js"),
editUrl: "https://github.com/litmuschaos/litmus-docs-beta/edit/staging/",
editUrl:
"https://github.com/litmuschaos/litmus-docs-beta/edit/staging/",
showLastUpdateTime: true,
},
theme: {


@ -0,0 +1,7 @@
import React from "react";
import { Redirect } from "@docusaurus/router";
function Docs() {
return <Redirect to="/docs/getstarted" />;
}
export default Docs;


@ -1,21 +1,21 @@
---
id: version-1.0.0-architecture
id: architecture
title: Litmus Architecture
sidebar_label: Architecture
sidebar_label: Architecture
original_id: architecture
---
<hr>
<hr />
<img src="/docs/assets/architecture.png" width="800">
<img src={require('./assets/architecture.png').default} width="800" />
**Chaos-Operator**
Chaos-Operator watches for the ChaosEngine CR and executes the Chaos-Experiments mentioned in the CR. Chaos-Operator is namespace scoped. By default, it runs in `litmus` namespace. Once the experiment is completed, chaos-operator invokes chaos-exporter to export chaos metrics to a Prometheus database.
Chaos-Operator watches for the ChaosEngine CR and executes the Chaos-Experiments mentioned in the CR. Chaos-Operator is namespace scoped. By default, it runs in `litmus` namespace. Once the experiment is completed, chaos-operator invokes chaos-exporter to export chaos metrics to a Prometheus database.
**Chaos-CRDs**
During installation, the following three CRDs are installed on the Kubernetes cluster.
During installation, the following three CRDs are installed on the Kubernetes cluster.
`chaosengines.litmuschaos.io`
@ -23,32 +23,24 @@ During installation, the following three CRDs are installed on the Kubernetes cl
`chaosresults.litmuschaos.io`
**Chaos-Experiments**
Chaos Experiment is a CR and are available as YAML files on <a href=" https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>. For more details visit Chaos Hub [documentation](chaoshub.md).
Chaos Experiment is a CR and are available as YAML files on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>. For more details visit Chaos Hub [documentation](chaoshub.md).
**Chaos-Engine**
ChaosEngine CR links application to experiments. User has to create ChaosEngine YAML by specifying the application label and experiments and create the CR. The CR is watched by Chaos-Operator and chaos-experiments are executed on a given application.
ChaosEngine CR links application to experiments. User has to create ChaosEngine YAML by specifying the application label and experiments and create the CR. The CR is watched by Chaos-Operator and chaos-experiments are executed on a given application.
**Chaos-Exporter**
Optionally metrics can be exported to a Prometheus database. Chaos-Exporter implements the Prometheus metrics endpoint.
Optionally metrics can be exported to a Prometheus database. Chaos-Exporter implements the Prometheus metrics endpoint.
<br />
<br />
<br>
<hr />
<br>
<br />
<hr>
<br>
<br>
<br />



@ -0,0 +1,429 @@
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1349.33333328" height="862.4" font-family="Consolas, Menlo, 'Bitstream Vera Sans Mono', monospace, 'Powerline Symbols'" font-size="14px">
<style>
<!-- asciinema theme -->
.default-text-fill {fill: #cccccc}
.default-bg-fill {fill: #121314}
.c-0 {fill: #000000}
.c-1 {fill: #dd3c69}
.c-2 {fill: #4ebf22}
.c-3 {fill: #ddaf3c}
.c-4 {fill: #26b0d7}
.c-5 {fill: #b954e1}
.c-6 {fill: #54e1b9}
.c-7 {fill: #d9d9d9}
.c-8 {fill: #4d4d4d}
.c-9 {fill: #dd3c69}
.c-10 {fill: #4ebf22}
.c-11 {fill: #ddaf3c}
.c-12 {fill: #26b0d7}
.c-13 {fill: #b954e1}
.c-14 {fill: #54e1b9}
.c-15 {fill: #ffffff}
.c-8, .c-9, .c-10, .c-11, .c-12, .c-13, .c-14, .c-15 {font-weight: bold}
<!-- 256 colors -->
.c-16 {fill: #000000}
.c-17 {fill: #00005f}
.c-18 {fill: #000087}
.c-19 {fill: #0000af}
.c-20 {fill: #0000d7}
.c-21 {fill: #0000ff}
.c-22 {fill: #005f00}
.c-23 {fill: #005f5f}
.c-24 {fill: #005f87}
.c-25 {fill: #005faf}
.c-26 {fill: #005fd7}
.c-27 {fill: #005fff}
.c-28 {fill: #008700}
.c-29 {fill: #00875f}
.c-30 {fill: #008787}
.c-31 {fill: #0087af}
.c-32 {fill: #0087d7}
.c-33 {fill: #0087ff}
.c-34 {fill: #00af00}
.c-35 {fill: #00af5f}
.c-36 {fill: #00af87}
.c-37 {fill: #00afaf}
.c-38 {fill: #00afd7}
.c-39 {fill: #00afff}
.c-40 {fill: #00d700}
.c-41 {fill: #00d75f}
.c-42 {fill: #00d787}
.c-43 {fill: #00d7af}
.c-44 {fill: #00d7d7}
.c-45 {fill: #00d7ff}
.c-46 {fill: #00ff00}
.c-47 {fill: #00ff5f}
.c-48 {fill: #00ff87}
.c-49 {fill: #00ffaf}
.c-50 {fill: #00ffd7}
.c-51 {fill: #00ffff}
.c-52 {fill: #5f0000}
.c-53 {fill: #5f005f}
.c-54 {fill: #5f0087}
.c-55 {fill: #5f00af}
.c-56 {fill: #5f00d7}
.c-57 {fill: #5f00ff}
.c-58 {fill: #5f5f00}
.c-59 {fill: #5f5f5f}
.c-60 {fill: #5f5f87}
.c-61 {fill: #5f5faf}
.c-62 {fill: #5f5fd7}
.c-63 {fill: #5f5fff}
.c-64 {fill: #5f8700}
.c-65 {fill: #5f875f}
.c-66 {fill: #5f8787}
.c-67 {fill: #5f87af}
.c-68 {fill: #5f87d7}
.c-69 {fill: #5f87ff}
.c-70 {fill: #5faf00}
.c-71 {fill: #5faf5f}
.c-72 {fill: #5faf87}
.c-73 {fill: #5fafaf}
.c-74 {fill: #5fafd7}
.c-75 {fill: #5fafff}
.c-76 {fill: #5fd700}
.c-77 {fill: #5fd75f}
.c-78 {fill: #5fd787}
.c-79 {fill: #5fd7af}
.c-80 {fill: #5fd7d7}
.c-81 {fill: #5fd7ff}
.c-82 {fill: #5fff00}
.c-83 {fill: #5fff5f}
.c-84 {fill: #5fff87}
.c-85 {fill: #5fffaf}
.c-86 {fill: #5fffd7}
.c-87 {fill: #5fffff}
.c-88 {fill: #870000}
.c-89 {fill: #87005f}
.c-90 {fill: #870087}
.c-91 {fill: #8700af}
.c-92 {fill: #8700d7}
.c-93 {fill: #8700ff}
.c-94 {fill: #875f00}
.c-95 {fill: #875f5f}
.c-96 {fill: #875f87}
.c-97 {fill: #875faf}
.c-98 {fill: #875fd7}
.c-99 {fill: #875fff}
.c-100 {fill: #878700}
.c-101 {fill: #87875f}
.c-102 {fill: #878787}
.c-103 {fill: #8787af}
.c-104 {fill: #8787d7}
.c-105 {fill: #8787ff}
.c-106 {fill: #87af00}
.c-107 {fill: #87af5f}
.c-108 {fill: #87af87}
.c-109 {fill: #87afaf}
.c-110 {fill: #87afd7}
.c-111 {fill: #87afff}
.c-112 {fill: #87d700}
.c-113 {fill: #87d75f}
.c-114 {fill: #87d787}
.c-115 {fill: #87d7af}
.c-116 {fill: #87d7d7}
.c-117 {fill: #87d7ff}
.c-118 {fill: #87ff00}
.c-119 {fill: #87ff5f}
.c-120 {fill: #87ff87}
.c-121 {fill: #87ffaf}
.c-122 {fill: #87ffd7}
.c-123 {fill: #87ffff}
.c-124 {fill: #af0000}
.c-125 {fill: #af005f}
.c-126 {fill: #af0087}
.c-127 {fill: #af00af}
.c-128 {fill: #af00d7}
.c-129 {fill: #af00ff}
.c-130 {fill: #af5f00}
.c-131 {fill: #af5f5f}
.c-132 {fill: #af5f87}
.c-133 {fill: #af5faf}
.c-134 {fill: #af5fd7}
.c-135 {fill: #af5fff}
.c-136 {fill: #af8700}
.c-137 {fill: #af875f}
.c-138 {fill: #af8787}
.c-139 {fill: #af87af}
.c-140 {fill: #af87d7}
.c-141 {fill: #af87ff}
.c-142 {fill: #afaf00}
.c-143 {fill: #afaf5f}
.c-144 {fill: #afaf87}
.c-145 {fill: #afafaf}
.c-146 {fill: #afafd7}
.c-147 {fill: #afafff}
.c-148 {fill: #afd700}
.c-149 {fill: #afd75f}
.c-150 {fill: #afd787}
.c-151 {fill: #afd7af}
.c-152 {fill: #afd7d7}
.c-153 {fill: #afd7ff}
.c-154 {fill: #afff00}
.c-155 {fill: #afff5f}
.c-156 {fill: #afff87}
.c-157 {fill: #afffaf}
.c-158 {fill: #afffd7}
.c-159 {fill: #afffff}
.c-160 {fill: #d70000}
.c-161 {fill: #d7005f}
.c-162 {fill: #d70087}
.c-163 {fill: #d700af}
.c-164 {fill: #d700d7}
.c-165 {fill: #d700ff}
.c-166 {fill: #d75f00}
.c-167 {fill: #d75f5f}
.c-168 {fill: #d75f87}
.c-169 {fill: #d75faf}
.c-170 {fill: #d75fd7}
.c-171 {fill: #d75fff}
.c-172 {fill: #d78700}
.c-173 {fill: #d7875f}
.c-174 {fill: #d78787}
.c-175 {fill: #d787af}
.c-176 {fill: #d787d7}
.c-177 {fill: #d787ff}
.c-178 {fill: #d7af00}
.c-179 {fill: #d7af5f}
.c-180 {fill: #d7af87}
.c-181 {fill: #d7afaf}
.c-182 {fill: #d7afd7}
.c-183 {fill: #d7afff}
.c-184 {fill: #d7d700}
.c-185 {fill: #d7d75f}
.c-186 {fill: #d7d787}
.c-187 {fill: #d7d7af}
.c-188 {fill: #d7d7d7}
.c-189 {fill: #d7d7ff}
.c-190 {fill: #d7ff00}
.c-191 {fill: #d7ff5f}
.c-192 {fill: #d7ff87}
.c-193 {fill: #d7ffaf}
.c-194 {fill: #d7ffd7}
.c-195 {fill: #d7ffff}
.c-196 {fill: #ff0000}
.c-197 {fill: #ff005f}
.c-198 {fill: #ff0087}
.c-199 {fill: #ff00af}
.c-200 {fill: #ff00d7}
.c-201 {fill: #ff00ff}
.c-202 {fill: #ff5f00}
.c-203 {fill: #ff5f5f}
.c-204 {fill: #ff5f87}
.c-205 {fill: #ff5faf}
.c-206 {fill: #ff5fd7}
.c-207 {fill: #ff5fff}
.c-208 {fill: #ff8700}
.c-209 {fill: #ff875f}
.c-210 {fill: #ff8787}
.c-211 {fill: #ff87af}
.c-212 {fill: #ff87d7}
.c-213 {fill: #ff87ff}
.c-214 {fill: #ffaf00}
.c-215 {fill: #ffaf5f}
.c-216 {fill: #ffaf87}
.c-217 {fill: #ffafaf}
.c-218 {fill: #ffafd7}
.c-219 {fill: #ffafff}
.c-220 {fill: #ffd700}
.c-221 {fill: #ffd75f}
.c-222 {fill: #ffd787}
.c-223 {fill: #ffd7af}
.c-224 {fill: #ffd7d7}
.c-225 {fill: #ffd7ff}
.c-226 {fill: #ffff00}
.c-227 {fill: #ffff5f}
.c-228 {fill: #ffff87}
.c-229 {fill: #ffffaf}
.c-230 {fill: #ffffd7}
.c-231 {fill: #ffffff}
.c-232 {fill: #080808}
.c-233 {fill: #121212}
.c-234 {fill: #1c1c1c}
.c-235 {fill: #262626}
.c-236 {fill: #303030}
.c-237 {fill: #3a3a3a}
.c-238 {fill: #444444}
.c-239 {fill: #4e4e4e}
.c-240 {fill: #585858}
.c-241 {fill: #626262}
.c-242 {fill: #6c6c6c}
.c-243 {fill: #767676}
.c-244 {fill: #808080}
.c-245 {fill: #8a8a8a}
.c-246 {fill: #949494}
.c-247 {fill: #9e9e9e}
.c-248 {fill: #a8a8a8}
.c-249 {fill: #b2b2b2}
.c-250 {fill: #bcbcbc}
.c-251 {fill: #c6c6c6}
.c-252 {fill: #d0d0d0}
.c-253 {fill: #dadada}
.c-254 {fill: #e4e4e4}
.c-255 {fill: #eeeeee}
.br { font-weight: bold }
.it { font-style: italic }
.un { text-decoration: underline }
</style>
<rect width="100%" height="100%" class="default-bg-fill" />
<svg x="0.625%" y="1.136%" class="default-text-fill">
<g style="shape-rendering: optimizeSpeed">
<rect x="5.625%" y="6.818%" width="0.625%" height="19.7" class="c-7" />
<rect x="0.000%" y="95.455%" width="98.750%" height="19.7" class="c-2" />
</g>
<text class="default-text-fill">
<tspan y="0.000%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="5.625%">#</tspan><tspan x="6.875%">B</tspan><tspan x="7.500%">u</tspan><tspan x="8.125%">i</tspan><tspan x="8.750%">l</tspan><tspan x="9.375%">d</tspan><tspan x="10.625%">a</tspan><tspan x="11.250%">n</tspan><tspan x="11.875%">d</tspan><tspan x="13.125%">a</tspan><tspan x="13.750%">p</tspan><tspan x="14.375%">p</tspan><tspan x="15.000%">l</tspan><tspan x="15.625%">y</tspan><tspan x="16.875%">C</tspan><tspan x="17.500%">h</tspan><tspan x="18.125%">a</tspan><tspan x="18.750%">o</tspan><tspan x="19.375%">s</tspan><tspan x="20.000%">E</tspan><tspan x="20.625%">n</tspan><tspan x="21.250%">g</tspan><tspan x="21.875%">i</tspan><tspan x="22.500%">n</tspan><tspan x="23.125%">e</tspan><tspan x="24.375%">C</tspan><tspan x="25.000%">R</tspan><tspan x="26.250%">t</tspan><tspan x="26.875%">o</tspan><tspan x="28.125%">u</tspan><tspan x="28.750%">n</tspan><tspan x="29.375%">l</tspan><tspan x="30.000%">e</tspan><tspan x="30.625%">a</tspan><tspan x="31.250%">s</tspan><tspan x="31.875%">h</tspan><tspan x="33.125%">C</tspan><tspan x="33.750%">h</tspan><tspan x="34.375%">a</tspan><tspan x="35.000%">o</tspan><tspan x="35.625%">s</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">E</tspan><tspan x="50.625%">v</tspan><tspan x="51.250%">e</tspan><tspan x="51.875%">r</tspan><tspan x="52.500%">y</tspan><tspan x="53.750%">1</tspan><tspan x="54.375%">.</tspan><tspan x="55.000%">0</tspan><tspan x="55.625%">s</tspan><tspan x="56.250%">:</tspan><tspan x="57.500%">k</tspan><tspan x="58.125%">u</tspan><tspan x="58.750%">b</tspan><tspan x="59.375%">e</tspan><tspan x="60.000%">c</tspan><tspan x="60.625%">t</tspan><tspan x="61.250%">l</tspan><tspan x="62.500%">g</tspan><tspan x="63.125%">e</tspan><tspan x="63.750%">t</tspan><tspan 
x="65.000%">p</tspan><tspan x="65.625%">o</tspan><tspan x="83.750%">F</tspan><tspan x="84.375%">r</tspan><tspan x="85.000%">i</tspan><tspan x="86.250%">O</tspan><tspan x="86.875%">c</tspan><tspan x="87.500%">t</tspan><tspan x="89.375%">4</tspan><tspan x="90.625%">1</tspan><tspan x="91.250%">9</tspan><tspan x="91.875%">:</tspan><tspan x="92.500%">3</tspan><tspan x="93.125%">2</tspan><tspan x="93.750%">:</tspan><tspan x="94.375%">3</tspan><tspan x="95.000%">5</tspan><tspan x="96.250%">2</tspan><tspan x="96.875%">0</tspan><tspan x="97.500%">1</tspan><tspan x="98.125%">9</tspan>
</tspan>
<tspan y="2.273%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="4.545%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="5.625%">v</tspan><tspan x="6.250%">i</tspan><tspan x="7.500%">c</tspan><tspan x="8.125%">h</tspan><tspan x="8.750%">a</tspan><tspan x="9.375%">o</tspan><tspan x="10.000%">s</tspan><tspan x="10.625%">e</tspan><tspan x="11.250%">n</tspan><tspan x="11.875%">g</tspan><tspan x="12.500%">i</tspan><tspan x="13.125%">n</tspan><tspan x="13.750%">e</tspan><tspan x="14.375%">.</tspan><tspan x="15.000%">y</tspan><tspan x="15.625%">a</tspan><tspan x="16.250%">m</tspan><tspan x="16.875%">l</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">N</tspan><tspan x="50.625%">A</tspan><tspan x="51.250%">M</tspan><tspan x="51.875%">E</tspan><tspan x="69.375%">R</tspan><tspan x="70.000%">E</tspan><tspan x="70.625%">A</tspan><tspan x="71.250%">D</tspan><tspan x="71.875%">Y</tspan><tspan x="74.375%">S</tspan><tspan x="75.000%">T</tspan><tspan x="75.625%">A</tspan><tspan x="76.250%">T</tspan><tspan x="76.875%">U</tspan><tspan x="77.500%">S</tspan><tspan x="80.625%">R</tspan><tspan x="81.250%">E</tspan><tspan x="81.875%">S</tspan><tspan x="82.500%">T</tspan><tspan x="83.125%">A</tspan><tspan x="83.750%">R</tspan><tspan x="84.375%">T</tspan><tspan x="85.000%">S</tspan><tspan x="87.500%">A</tspan><tspan x="88.125%">G</tspan><tspan x="88.750%">E</tspan>
</tspan>
<tspan y="6.818%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">h</tspan><tspan x="50.625%">e</tspan><tspan x="51.250%">l</tspan><tspan x="51.875%">l</tspan><tspan x="52.500%">o</tspan><tspan x="53.125%">-</tspan><tspan x="53.750%">d</tspan><tspan x="54.375%">e</tspan><tspan x="55.000%">p</tspan><tspan x="55.625%">l</tspan><tspan x="56.250%">o</tspan><tspan x="56.875%">y</tspan><tspan x="57.500%">-</tspan><tspan x="58.125%">d</tspan><tspan x="58.750%">d</tspan><tspan x="59.375%">5</tspan><tspan x="60.000%">9</tspan><tspan x="60.625%">b</tspan><tspan x="61.250%">8</tspan><tspan x="61.875%">9</tspan><tspan x="62.500%">5</tspan><tspan x="63.125%">6</tspan><tspan x="63.750%">-</tspan><tspan x="64.375%">h</tspan><tspan x="65.000%">x</tspan><tspan x="65.625%">c</tspan><tspan x="66.250%">j</tspan><tspan x="66.875%">v</tspan><tspan x="69.375%">1</tspan><tspan x="70.000%">/</tspan><tspan x="70.625%">1</tspan><tspan x="74.375%">R</tspan><tspan x="75.000%">u</tspan><tspan x="75.625%">n</tspan><tspan x="76.250%">n</tspan><tspan x="76.875%">i</tspan><tspan x="77.500%">n</tspan><tspan x="78.125%">g</tspan><tspan x="80.625%">0</tspan><tspan x="87.500%">1</tspan><tspan x="88.125%">9</tspan><tspan x="88.750%">m</tspan>
</tspan>
<tspan y="9.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="11.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="13.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="15.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="18.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="20.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="22.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="25.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="27.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="29.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="31.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="34.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="36.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="38.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="40.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="43.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="45.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="47.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%"></tspan><tspan x="50.625%"></tspan><tspan x="51.250%"></tspan><tspan x="51.875%"></tspan><tspan x="52.500%"></tspan><tspan x="53.125%"></tspan><tspan x="53.750%"></tspan><tspan x="54.375%"></tspan><tspan x="55.000%"></tspan><tspan x="55.625%"></tspan><tspan x="56.250%"></tspan><tspan x="56.875%"></tspan><tspan x="57.500%"></tspan><tspan x="58.125%"></tspan><tspan x="58.750%"></tspan><tspan x="59.375%"></tspan><tspan x="60.000%"></tspan><tspan x="60.625%"></tspan><tspan x="61.250%"></tspan><tspan x="61.875%"></tspan><tspan x="62.500%"></tspan><tspan x="63.125%"></tspan><tspan x="63.750%"></tspan><tspan x="64.375%"></tspan><tspan x="65.000%"></tspan><tspan x="65.625%"></tspan><tspan x="66.250%"></tspan><tspan x="66.875%"></tspan><tspan x="67.500%"></tspan><tspan x="68.125%"></tspan><tspan x="68.750%"></tspan><tspan x="69.375%"></tspan><tspan x="70.000%"></tspan><tspan x="70.625%"></tspan><tspan x="71.250%"></tspan><tspan x="71.875%"></tspan><tspan x="72.500%"></tspan><tspan x="73.125%"></tspan><tspan x="73.750%"></tspan><tspan x="74.375%"></tspan><tspan x="75.000%"></tspan><tspan x="75.625%"></tspan><tspan x="76.250%"></tspan><tspan x="76.875%"></tspan><tspan x="77.500%"></tspan><tspan x="78.125%"></tspan><tspan x="78.750%"></tspan><tspan x="79.375%"></tspan><tspan x="80.000%"></tspan><tspan x="80.625%"></tspan><tspan x="81.250%"></tspan><tspan x="81.875%"></tspan><tspan x="82.500%"></tspan><tspan x="83.125%"></tspan><tspan x="83.750%"></tspan><tspan x="84.375%"></tspan><tspan x="85.000%"></tspan><tspan x="85.625%"></tspan><tspan x="86.250%"></tspan><tspan x="86.875%"></tspan><tspan x="87.500%"></tspan><tspan x="88.125%"></tspan><tspan x="88.750%"></tspan><tspan x="89.375%"></tspan><tspan x="90.000%"></tspan><tspan x="90.625%"></tspan><tspan x="91.250%"></tspan><tspan x="91.875%"></tspan><tspan x="92.500%"></tspan><tspan x="93.125%"></tspan><tspan x="93.750%"></tspan><tspan x="94.375%"></tspan><tspan 
x="95.000%"></tspan><tspan x="95.625%"></tspan><tspan x="96.250%"></tspan><tspan x="96.875%"></tspan><tspan x="97.500%"></tspan><tspan x="98.125%"></tspan>
</tspan>
<tspan y="50.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="52.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="54.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="56.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="59.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="61.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="63.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="65.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="68.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="70.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="72.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="75.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="77.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="79.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="81.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="84.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="86.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="88.636%">
[SVG terminal-recording asset: demo output "HelloWorld is online … HTTP/2 200 OK" in a tmux session on "rahul-ThinkPad-E490", 05-Oct-19]

Image added: 41 KiB.
Binary file not shown. Image added: 11 KiB.
Binary file not shown. Image added: 354 KiB.
Binary file not shown. Image added: 105 KiB.

@ -0,0 +1,71 @@
---
id: chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
---
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new chaos charts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
The Litmus ChaosHub is a place where chaos engineering community members publish their chaos experiments. A set of related chaos experiments is bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories:
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
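Each chart's experiments can be installed on a cluster straight from the hub. For example, the Generic chart's experiment CRs can be applied with the same command used in the Getting Started guide:

```
kubectl apply -f https://hub.litmuschaos.io/api/chaos?file=charts/generic/experiments.yaml
```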
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. The following chaos experiments are supported under the Generic Chaos Chart:
| Experiment name | Description | User guide link |
| ---------------------- | ----------------------------------------------------- | --------------------------------------------------- |
| Container Kill | Kill one container in the application pod | [container-kill](container-kill.md) |
| Pod Delete | Fail the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Experiment to inject network latency to the POD | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Experiment to inject network loss to the POD | [pod-network-loss](pod-network-loss.md) |
| CPU Hog | Exhaust CPU resources on the Kubernetes Node | [cpu-hog](cpu-hog.md) |
| Disk Fill              | Fill up the ephemeral storage of a resource           | [disk-fill](disk-fill.md)                           |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md) |
| Node Drain | Drain the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consume CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Network Corruption | Inject Network Packet Corruption Into Application Pod | [pod-network-corruption](pod-network-corruption.md) |
### Application Chaos
While chaos experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude whether the induced chaos found a weakness in a given application. The application-specific chaos experiments are built with checks on _pre-conditions_ and expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the actual outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
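To check the verdict after a run, describe the ChaosResult resource for the engine/experiment combination, as shown in the experiment user guides (the names below are placeholders):

```
kubectl describe chaosresult <chaos-engine-name>-<chaos-experiment-name> -n <chaos-namespace>
```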
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of their applications. These negative tests can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that users of the application can run them in staging/pre-production/production environments to check its resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (production). Hence, running the same chaos tests in the user's environment helps determine the weaknesses of the deployment, and fixing such weaknesses leads to increased resilience.
The following Application Chaos experiments are available on ChaosHub:
| Application | Description | Chaos Experiments |
| ----------- | ------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br/>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br/>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br/>[openebs-target-container-failure](openebs-target-container-failure.md)<br/>[openebs-target-network-delay](openebs-target-network-delay.md)<br/>[openebs-target-network-loss](openebs-target-network-loss.md) |
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br/>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br/> |
| CoreDNS     | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md) |
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Since the management of platform resources varies significantly across providers, Chaos Charts may be maintained separately for each platform (for example, AWS, GCP, Azure).
The following Platform Chaos experiments are available on ChaosHub:
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |


@ -0,0 +1,24 @@
---
id: community
title: Join Litmus Community
sidebar_label: Community
original_id: community
---
---
The Litmus community is a subset of the larger Kubernetes community. Have a question? Want to stay in touch with the happenings of Chaos Engineering on Kubernetes? Join the `#litmus` channel on Kubernetes Slack.
<br/><br/>
<a href="https://kubernetes.slack.com/messages/CNXNB0ZTN" target="_blank"><img src={require("./assets/join-community.png").default} width="400"/></a>
<br/>
<br/>
<hr/>
<br/>
<br/>


@ -1,5 +1,5 @@
---
id: version-1.0.0-container-kill
id: container-kill
title: Container Kill Experiment Details
sidebar_label: Container Kill
original_id: container-kill


@ -1,5 +1,5 @@
---
id: version-1.0.0-coredns-pod-delete
id: coredns-pod-delete
title: CoreDNS Pod Delete Experiment Details
sidebar_label: CoreDNS Pod Delete
original_id: coredns-pod-delete


@ -1,5 +1,5 @@
---
id: version-1.0.0-cpu-hog
id: cpu-hog
title: CPU Hog Experiment Details
sidebar_label: CPU Hog
original_id: cpu-hog


@ -1,11 +1,11 @@
---
id: version-1.0.0-devguide
id: devguide
title: Developer Guide for Chaos Charts
sidebar_label: Developer Guide
original_id: devguide
---
------
---
This page serves as a guide to developing either a new Chaos Chart or a new experiment in an existing Chaos Chart, which are published at <a href="https://hub.litmuschaos.io" target="_blank">ChaosHub</a>.
@ -18,34 +18,33 @@ Below are some key points to remember before understanding how to write a new ch
> Website rendering code repository: https://github.com/litmuschaos/charthub.litmuschaos.io
The experiments & chaos libraries are typically written in Ansible, though this is not mandatory. Ensure that
the experiments can be executed in a container & can read/update the litmuschaos custom resources. For example,
if you are writing an experiment in Go, use this [clientset](https://github.com/litmuschaos/chaos-operator/tree/master/pkg/client)
<hr>
<hr/>
## Glossary
### Chaos Chart
A group of Chaos Experiments put together in a YAML file. Each group or chart has a metadata manifest called `ChartServiceVersion`
that holds data such as `ChartVersion`, `Contributors`, `Description`, `links`, etc. This metadata is rendered on the ChartHub.
A chaos chart also consists of a `package` manifest that is an index of available experiments in the chart.
Here is an example of the [ChartServiceVersion](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.chartserviceversion.yaml) & [package](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.package.yaml) manifests of the generic chaos chart.
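As a rough illustration of the package manifest's role as an index, a minimal sketch could look like the following (field names are inferred and purely illustrative; refer to the linked `generic.package.yaml` for the authoritative format):

```yaml
# Illustrative only -- an index of the experiments bundled in a chart
packageName: generic
experiments:
  - name: pod-delete
    CSV: pod-delete.chartserviceversion.yaml
    DESC: "pod-delete"
```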
### Chaos Experiment
ChaosExperiment is a CRD that specifies the nature of a Chaos Experiment. The YAML file that constitutes a Chaos Experiment CR
is stored under a Chaos Chart of ChaosHub and typically consists of low-level chaos parameters specific to that experiment, set
to their default values.
Here is an example chaos experiment CR for a [pod-delete](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/pod-delete/experiment.yaml) experiment
### Litmus Book
Litmus book is an `ansible` playbook that encompasses the logic of pre-checks, chaos-injection, post-checks, and result-updates.
Typically, these are accompanied by a Kubernetes job that can execute the respective playbook.
Here is an example of the litmus book for the [pod-delete](https://github.com/litmuschaos/litmus-ansible/tree/master/experiments/generic/pod_delete) experiment.
@ -53,17 +52,16 @@ Here is an example of the litmus book for the [pod-delete](https://github.com/li
The `ansible` business logic inside Litmus books can make use of readily available chaos functions. The chaos functions are available as `task-files` which are wrapped in one of the chaos libraries. See [plugins](plugins.md) for more details.
<hr>
<hr/>
## Developing a Chaos Experiment
A detailed how-to guide on developing chaos experiments is available [here](https://github.com/litmuschaos/litmus-ansible/tree/master/contribute/developer_guide)
<br>
<br/>
<hr>
<hr/>
<br>
<br>
<br/>
<br/>


@ -1,10 +1,11 @@
---
id: version-1.0.0-disk-fill
id: disk-fill
title: Disk Fill Experiment Details
sidebar_label: Disk Fill
original_id: disk-fill
---
------
---
## Experiment Metadata
@ -27,9 +28,9 @@ original_id: disk-fill
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/disk-fill)
- Cluster must run docker container runtime
- Appropriate Ephemeral Storage Requests and Limits should be set for the application before running the experiment.
An example specification is shown below:
```
apiVersion: v1
kind: Pod
@ -54,28 +55,28 @@ spec:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
```
## Entry-Criteria
- Application pods are healthy before chaos injection.
## Exit-Criteria
- Application pods are healthy post chaos injection.
## Details
- Causes Disk Stress by filling up the ephemeral storage of the pod (in /var/lib/docker/containers/{{container_id}}) on any given node.
- Causes the application pod to get evicted if the capacity filled exceeds the pod's ephemeral storage limit.
- Tests the Ephemeral Storage Limits, to ensure those parameters are sufficient.
- Tests the application's resiliency to disk stress/replica evictions.
## Integrations
- Disk Fill can be effected using the chaos library `litmus`, which makes use of `dd` to create a file of
specified capacity on the node.
- The desired chaoslib can be selected by setting it as the value of the env variable `LIB`
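In practice, this selection is an ordinary env override in the ChaosEngine's experiment spec; a sketch, following the engine format shown later on this page:

```yaml
experiments:
  - name: disk-fill
    spec:
      components:
        # select the chaoslib to use; `litmus` is the one listed above
        - name: LIB
          value: "litmus"
```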
## Steps to Execute the Chaos Experiment
@ -106,9 +107,18 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["","apps","litmuschaos.io","batch"]
resources: ["pods","jobs","pods/exec","daemonsets","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "apps", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"pods/exec",
"daemonsets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -121,10 +131,9 @@ roleRef:
kind: ClusterRole
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
@ -184,12 +193,12 @@ metadata:
namespace: default
spec:
# It can be app/infra
chaosType: 'infra'
#ex. values: ns1:name=percona,ns2:run=nginx
chaosType: "infra"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: default
applabel: 'app=nginx'
applabel: "app=nginx"
appkind: deployment
chaosServiceAccount: nginx-sa
monitoring: false
@ -203,7 +212,7 @@ spec:
- name: disk-fill
spec:
components:
# specify the fill percentage according to the disk pressure required
- name: FILL_PERCENTAGE
value: "80"
- name: TARGET_CONTAINER
@ -218,11 +227,11 @@ spec:
### Watch Chaos progress
- View the status of the pods as they are subjected to disk stress.
`watch -n 1 kubectl get pods -n <application-namespace>`
- Monitor the capacity filled up on the host filesystem
`watch -n 1 du -kh /var/lib/docker/containers/<container-id>`


@ -1,10 +1,12 @@
---
id: version-1.0.0-disk-loss
id: disk-loss
title: Disk Loss Experiment Details
sidebar_label: Disk Loss
original_id: disk-loss
---
------
---
## Experiment Metadata
<table>
@ -21,10 +23,11 @@ original_id: disk-loss
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/charts/generic/experiments/disk-loss" target="_blank">here</a>
- Create a Kubernetes secret containing the gcloud/aws access configuration (key) in the namespace of `CHAOS_NAMESPACE`.
- There should be administrative access to the platform on which the cluster is hosted, as recovery of the affected node could be manual. Example: gcloud access to the project
```yaml
apiVersion: v1
@ -39,30 +42,30 @@ stringData:
## Entry-Criteria
- The disk is healthy before chaos injection
## Exit-Criteria
- The disk is healthy post chaos injection
- If `APP_CHECK` is true, the application pod health is checked post chaos injection
## Details
- In this experiment, the external disk is detached from the node for a period equal to the `TOTAL_CHAOS_DURATION`.
- This chaos experiment is supported on GKE and AWS platforms.
- If the disk is created as part of a dynamic persistent volume, it is expected to re-attach automatically. The experiment re-attaches the disk if it is not already attached.
<b>Note:</b> For a mounted disk, the remount of the disk is a manual step that the user has to perform.
## Integrations
- Disk loss is effected using the litmus chaoslib that internally makes use of the aws/gcloud commands
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
@ -87,9 +90,17 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","secrets","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"secrets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -102,17 +113,16 @@ roleRef:
kind: ClusterRole
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
### Supported Experiment Tunables for application
@ -213,8 +223,8 @@ metadata:
namespace: default
spec:
# It can be app/infra
chaosType: 'infra'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: default
@ -250,34 +260,38 @@ spec:
# Node name of the cluster
- name: NODE_NAME
value: 'demo-node-123'
# Disk Name of the node, it must be an external disk.
- name: DISK_NAME
value: 'demo-disk-123'
# Enter the device name which you want to mount (AWS only).
- name: DEVICE_NAME
value: '/dev/sdb'
# Name of Zone in which node is present (GCP)
# Use Region Name when running with AWS (ex: us-central1)
- name: ZONE_NAME
value: 'us-central1-a'
# ChaosEngine CR name associated with the experiment instance
- name: CHAOSENGINE
value: ''
# Service account used by the litmus
- name: CHAOS_SERVICE_ACCOUNT
value: ''
```
## Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
## Watch Chaos progress
- Set up a watch on the app that is using the disk in the Kubernetes Cluster
`watch -n 1 kubectl get pods`
## Check Chaos Experiment Result
- Check whether the application is resilient to the disk loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: <ChaosEngine-Name>-<ChaosExperiment-Name>.
- Check whether the application is resilient to the disk loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-disk-loss -n <CHAOS_NAMESPACE>`


@ -1,10 +1,11 @@
---
id: version-1.0.0-getstarted
id: getstarted
title: Getting Started with Litmus
sidebar_label: Introduction
original_id: getstarted
---
------
---
## Pre-requisites
@ -28,11 +29,9 @@ Running chaos on your application involves the following steps:
[Observe chaos results](#observe-chaos-results)
<hr>
<hr/>
### Install Litmus
```
kubectl apply -f https://litmuschaos.github.io/pages/litmus-operator-v1.0.0.yaml
@ -42,20 +41,15 @@ The above command install all the CRDs, required service account configuration,
**Verify your installation**
- Verify if the chaos operator is running
```
kubectl get pods -n litmus
```
Expected output:
> chaos-operator-ce-554d6c8f9f-slc8k 1/1 Running 0 6m41s
- Verify if chaos CRDs are installed
@ -65,29 +59,27 @@ kubectl get crds | grep chaos
Expected output:
> chaosengines.litmuschaos.io 2019-10-02T08:45:25Z
>
> chaosexperiments.litmuschaos.io 2019-10-02T08:45:26Z
>
> chaosresults.litmuschaos.io 2019-10-02T08:45:26Z
- Verify if the chaos api resources are successfully created in the desired (application) namespace.
_Note_: Sometimes, it can take a few seconds for the resources to be available post the CRD installation
```
kubectl api-resources | grep chaos
```
Expected output:
> chaosengines litmuschaos.io true ChaosEngine
>
> chaosexperiments litmuschaos.io true ChaosExperiment
>
> chaosresults litmuschaos.io true ChaosResult
<div class="danger">
<strong>NOTE</strong>:
@ -97,9 +89,9 @@ deployed in the default namespace.
### Install Chaos Experiments
Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as Kubernetes CRs (Custom Resources). The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>.
The generic chaos experiments such as `pod-kill`, `container-kill`, `network-delay` are available under the Generic Chaos Chart. This is the first chart you install. You can later install application-specific chaos charts for running application-oriented chaos.
```
kubectl apply -f https://hub.litmuschaos.io/api/chaos?file=charts/generic/experiments.yaml
@ -108,7 +100,7 @@ kubectl apply -f https://hub.litmuschaos.io/api/chaos?file=charts/generic/experi
Verify if the chaos experiments are installed.
```
kubectl get chaosexperiments
```
### Setup Service Account
@ -132,9 +124,18 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["","litmuschaos.io","batch","apps"]
resources: ["pods","jobs","daemonsets","pods/exec","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"daemonsets",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -148,9 +149,9 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Annotate your application
@ -161,7 +162,7 @@ Your application has to be annotated with `litmuschaos.io/chaos="true"`. As a se
kubectl annotate deploy/nginx litmuschaos.io/chaos="true"
```
### Prepare ChaosEngine
ChaosEngine connects the application to the Chaos Experiment. Copy the following YAML snippet into a file called `chaosengine.yaml` and update the values of `applabel`, `appns`, `appkind` and `experiments` as per your choice. Toggle `monitoring` between `true`/`false` to allow the chaos-exporter to fetch experiment related metrics. Change the `chaosServiceAccount` to the name of the Service Account created in the above step, if applicable.
@ -174,28 +175,28 @@ metadata:
namespace: default
spec:
# It can be app/infra
chaosType: 'app'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
components:
runner:
image: 'litmuschaos/chaos-executor:1.0.0'
type: 'go'
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
monitoring: false
appinfo:
appns: 'default'
appinfo:
appns: "default"
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: 'app=nginx'
appkind: 'deployment'
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: nginx-sa
experiments:
- name: container-kill
spec:
components:
- name: TARGET_CONTAINER
value: 'nginx'
value: "nginx"
```
### Override Default Chaos Experiments Variables
@ -212,11 +213,8 @@ experiments:
value: nginx
```
### Run Chaos
```console
kubectl apply -f chaosengine.yaml
```
@ -227,9 +225,9 @@ kubectl apply -f chaosengine.yaml
### Observe Chaos results
Describe the ChaosResult CR to know the status of each experiment. The ```spec.verdict``` is set to Running when the experiment is in progress, eventually changing to either pass or fail.
Describe the ChaosResult CR to know the status of each experiment. The `spec.verdict` is set to Running when the experiment is in progress, eventually changing to either pass or fail.
<strong> NOTE:</strong> ChaosResult CR name will be `<chaos-engine-name>-<chaos-experiment-name>`
<strong> NOTE:</strong> ChaosResult CR name will be `{"<chaos-engine-name>-<chaos-experiment-name>"}`
```console
kubectl describe chaosresult engine-nginx-container-kill

```
@ -1,5 +1,5 @@
---
id: kafka-broker-disk-failure
title: Kafka Broker Disk Failure Experiment Details
sidebar_label: Broker Disk Failure
original_id: kafka-broker-disk-failure
@ -7,27 +7,27 @@ original_id: kafka-broker-disk-failure
## Experiment Metadata
| Type | Description | Kafka Distribution | Tested K8s Platform |
| ----- | ------------------------------ | --------------------- | ------------------- |
| Kafka | Fail kafka broker disk/storage | Confluent, Kudo-Kafka | GKE |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that Kafka & Zookeeper are deployed as Statefulsets
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable

  - In case of Confluent, specified by the `--name` flag
  - In case of Kudo, specified by the `--instance` flag

  Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-disk-failure)
- Create a secret with the gcloud serviceaccount key (placed in a file `cloud_config.yml`) named `kafka-broker-disk-failure` in the namespace where the experiment CRs are created. This is necessary to perform the disk-detach steps from the litmus experiment container.
`kubectl create secret generic kafka-broker-disk-failure --from-file=cloud_config.yml -n <kafka-namespace>`
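The prerequisite checks above can be sketched as one console sequence (the namespaces and the key-file path are placeholders to substitute for your setup):

```console
# chaos operator running? (typically installed in the "litmus" namespace)
kubectl get pods -n litmus

# kafka-broker-disk-failure ChaosExperiment CR installed?
kubectl get chaosexperiments -n <kafka-namespace>

# secret carrying the gcloud serviceaccount key used for disk-detach
kubectl create secret generic kafka-broker-disk-failure \
  --from-file=cloud_config.yml -n <kafka-namespace>
```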
## Entry Criteria
@ -46,12 +46,12 @@ original_id: kafka-broker-disk-failure
## Integrations
- Currently, the disk detach is supported only on GKE using LitmusLib, which internally uses the gcloud tools.
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster.
To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
@ -78,9 +78,19 @@ metadata:
labels:
name: kafka-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"pod/exec",
"statefulsets",
"secrets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -93,10 +103,9 @@ roleRef:
kind: ClusterRole
name: kafka-role
subjects:
  - kind: ServiceAccount
    name: kafka-sa
    namespace: default
```
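Assuming the manifest above is saved as `rbac.yaml` (an illustrative filename, not one prescribed by the docs), it can be applied and verified with:

```console
kubectl apply -f rbac.yaml
kubectl get sa kafka-sa -n default
```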
### Prepare ChaosEngine
@ -106,28 +115,28 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ------------------------ | ---------------------------------------------------------------------------- | --------- | -------------------------------------------------------- |
| KAFKA_NAMESPACE | Namespace of Kafka Brokers | Mandatory | May be same as value for `spec.appinfo.appns` |
| KAFKA_LABEL | Unique label of Kafka Brokers | Mandatory | May be same as value for `spec.appinfo.applabel` |
| KAFKA_SERVICE | Headless service of the Kafka Statefulset | Mandatory | |
| KAFKA_PORT | Port of the Kafka ClusterIP service | Mandatory | |
| ZOOKEEPER_NAMESPACE | Namespace of Zookeeper Cluster | Mandatory | May be same as value for KAFKA_NAMESPACE or other |
| ZOOKEEPER_LABEL          | Unique label of Zookeeper statefulset                                        | Mandatory |                                                           |
| ZOOKEEPER_SERVICE | Headless service of the Zookeeper Statefulset | Mandatory | |
| ZOOKEEPER_PORT | Port of the Zookeeper ClusterIP service | Mandatory | |
| CLOUD_PLATFORM | Cloud platform type on which to inject disk loss | Mandatory | Supported platforms: GCP |
| PROJECT_ID | GCP Project ID in which the Cluster is created | Mandatory | |
| DISK_NAME | GCloud Disk attached to the Cluster Node where specified broker is scheduled | Mandatory | |
| ZONE_NAME | Zone in which the Disks/Cluster are created | Mandatory | |
| KAFKA_BROKER | Kafka broker pod which is using the specified disk | Mandatory | Experiment verifies this by mapping node details |
| KAFKA_KIND | Kafka deployment type | Optional | Same as `spec.appinfo.appkind`. Supported: `statefulset` |
| KAFKA_LIVENESS_STREAM | Kafka liveness message stream | Optional | Supported: `enabled`, `disabled` |
| KAFKA_LIVENESS_IMAGE | Image used for liveness message stream | Optional | Image as `<registry_url>/<repository>/<image>:<tag>` |
| KAFKA_REPLICATION_FACTOR | Number of partition replicas for liveness topic partition | Optional | Necessary if KAFKA_LIVENESS_STREAM is `enabled` |
| KAFKA_INSTANCE_NAME | Name of the Kafka chroot path on zookeeper | Optional | Necessary if installation involves use of such path |
| KAFKA_CONSUMER_TIMEOUT | Kafka consumer message timeout, post which it terminates | Optional | Defaults to 30000ms |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 15s |
#### Sample ChaosEngine Manifest
@ -139,12 +148,12 @@ metadata:
  namespace: default
spec:
  # It can be app/infra
  chaosType: "app"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  appinfo:
    appns: default
    applabel: "app=cp-kafka"
    appkind: statefulset
  chaosServiceAccount: kafka-sa
  monitoring: false
@ -153,105 +162,105 @@ spec:
      image: "litmuschaos/chaos-executor:1.0.0"
      type: "go"
  # It can be delete/retain
  jobCleanUpPolicy: delete
  experiments:
    - name: kafka-broker-disk-failure
      spec:
        components:
          # choose based on available kafka broker replicas
          - name: KAFKA_REPLICATION_FACTOR
            value: "3"
          # get via "kubectl get pods --show-labels -n <kafka-namespace>"
          - name: KAFKA_LABEL
            value: "app=cp-kafka"
          - name: KAFKA_NAMESPACE
            value: "default"
          # get via "kubectl get svc -n <kafka-namespace>"
          - name: KAFKA_SERVICE
            value: "kafka-cp-kafka-headless"
          # get via "kubectl get svc -n <kafka-namespace>"
          - name: KAFKA_PORT
            value: "9092"
          # in milliseconds
          - name: KAFKA_CONSUMER_TIMEOUT
            value: "70000"
          # ensure to set the instance name if using KUDO operator
          - name: KAFKA_INSTANCE_NAME
            value: ""
          - name: ZOOKEEPER_NAMESPACE
            value: "default"
          # get via "kubectl get pods --show-labels -n <zk-namespace>"
          - name: ZOOKEEPER_LABEL
            value: "app=cp-zookeeper"
          # get via "kubectl get svc -n <zk-namespace>"
          - name: ZOOKEEPER_SERVICE
            value: "kafka-cp-zookeeper-headless"
          # get via "kubectl get svc -n <zk-namespace>"
          - name: ZOOKEEPER_PORT
            value: "2181"
          # get from google cloud console or "gcloud projects list"
          - name: PROJECT_ID
            value: "argon-tractor-237811"
          # attached to (in use by) node where 'kafka-0' is scheduled
          - name: DISK_NAME
            value: "disk-1"
          - name: ZONE_NAME
            value: "us-central1-a"
          # Uses "disk-1" attached to the node on which it is scheduled
          - name: KAFKA_BROKER
            value: "kafka-0"
          # set chaos duration (in sec) as desired
          - name: TOTAL_CHAOS_DURATION
            value: "60"
```
### Create the ChaosEngine Resource

- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.

`kubectl apply -f chaosengine.yml`

### Watch Chaos progress

- View broker pod termination upon disk loss by setting up a watch on the pods in the Kafka namespace

`watch -n 1 kubectl get pods -n <kafka-namespace>`

### Check Chaos Experiment Result

- Check whether the kafka deployment is resilient to the broker disk failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.

`kubectl describe chaosresult kafka-chaos-kafka-broker-disk-failure -n <kafka-namespace>`
### Kafka Broker Recovery Post Experiment Execution

- The experiment re-attaches the detached disk to the same node as part of recovery steps. However, if the disk is not provisioned as a Persistent Volume & instead provides the backing store to a PV carved out of it, the brokers may continue to stay in `CrashLoopBackOff` state (example: as hostPath directory for a Kubernetes Local PV)

- The complete recovery steps involve:

  - Remounting the disk into the desired mount point
  - Deleting the affected broker pod to force reschedule

## Kafka Broker Disk Failure Demo
- TODO: add a sample recording of this experiment execution here.
---

@ -1,5 +1,5 @@
---
id: kafka-broker-pod-failure
title: Kafka Broker Pod Failure Experiment Details
sidebar_label: Broker Pod Failure
original_id: kafka-broker-pod-failure
@ -7,25 +7,24 @@ original_id: kafka-broker-pod-failure
## Experiment Metadata
| Type | Description | Kafka Distribution | Tested K8s Platform |
| ----- | ----------------------------- | --------------------- | ------------------- |
| Kafka | Fail kafka leader-broker pods | Confluent, Kudo-Kafka | AWS Konvoy, GKE |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `kafka-broker-pod-failure` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-pod-failure)
- Ensure that Kafka & Zookeeper are deployed as Statefulsets
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable

  - In case of Confluent, specified by the `--name` flag
  - In case of Kudo, specified by the `--instance` flag

  Zookeeper uses this to construct a path in which kafka cluster data is stored.

- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-pod-failure)
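The prerequisite checks above, as a quick console sketch (the namespace is a placeholder to substitute):

```console
# chaos operator running? (typically in the "litmus" namespace)
kubectl get pods -n litmus

# pod-failure experiment CR installed?
kubectl get chaosexperiments -n <kafka-namespace>
```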
## Entry Criteria
@ -50,7 +49,7 @@ original_id: kafka-broker-pod-failure
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster.
To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
@ -77,12 +76,23 @@ metadata:
labels:
name: kafka-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"pod/exec",
"statefulsets",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -95,10 +105,9 @@ roleRef:
kind: ClusterRole
name: kafka-role
subjects:
  - kind: ServiceAccount
    name: kafka-sa
    namespace: default
```
### Prepare ChaosEngine
@ -108,27 +117,27 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ------------------------ | --------------------------------------------------------- | --------- | -------------------------------------------------------- |
| KAFKA_NAMESPACE | Namespace of Kafka Brokers | Mandatory | May be same as value for `spec.appinfo.appns` |
| KAFKA_LABEL | Unique label of Kafka Brokers | Mandatory | May be same as value for `spec.appinfo.applabel` |
| KAFKA_SERVICE | Headless service of the Kafka Statefulset | Mandatory | |
| KAFKA_PORT | Port of the Kafka ClusterIP service | Mandatory | |
| ZOOKEEPER_NAMESPACE | Namespace of Zookeeper Cluster | Mandatory | May be same as value for KAFKA_NAMESPACE or other |
| ZOOKEEPER_LABEL          | Unique label of Zookeeper statefulset                     | Mandatory |                                                           |
| ZOOKEEPER_SERVICE | Headless service of the Zookeeper Statefulset | Mandatory | |
| ZOOKEEPER_PORT | Port of the Zookeeper ClusterIP service | Mandatory | |
| KAFKA_KIND | Kafka deployment type | Optional | Same as `spec.appinfo.appkind`. Supported: `statefulset` |
| KAFKA_LIVENESS_STREAM | Kafka liveness message stream | Optional | Supported: `enabled`, `disabled` |
| KAFKA_LIVENESS_IMAGE | Image used for liveness message stream | Optional | Image as `<registry_url>/<repository>/<image>:<tag>` |
| KAFKA_REPLICATION_FACTOR | Number of partition replicas for liveness topic partition | Optional | Necessary if KAFKA_LIVENESS_STREAM is `enabled` |
| KAFKA_INSTANCE_NAME | Name of the Kafka chroot path on zookeeper | Optional | Necessary if installation involves use of such path |
| KAFKA_CONSUMER_TIMEOUT | Kafka consumer message timeout, post which it terminates | Optional | Defaults to 30000ms |
| KAFKA_BROKER | Kafka broker pod (name) to be deleted | Optional | A target selection mode (random/liveness-based/specific) |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 15s |
| CHAOS_INTERVAL | Time interval b/w two successive broker failures (sec) | Optional | Defaults to 5s |
| LIB                      | The chaos lib used to inject the chaos                    | Optional  | Defaults to `litmus`. Supported: `litmus`, `powerfulseal` |
| CHAOS_SERVICE_ACCOUNT | Service account used by the powerfulseal deployment | Optional | Defaults to `default` on namespace `spec.appinfo.appns` |
#### Sample ChaosEngine Manifest
@ -140,12 +149,12 @@ metadata:
  namespace: default
spec:
  # It can be app/infra
  chaosType: "app"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  appinfo:
    appns: default
    applabel: "app=cp-kafka"
    appkind: statefulset
  chaosServiceAccount: kafka-sa
  monitoring: false
@ -154,87 +163,87 @@ spec:
      image: "litmuschaos/chaos-executor:1.0.0"
      type: "go"
  # It can be delete/retain
  jobCleanUpPolicy: delete
  experiments:
    - name: kafka-broker-pod-failure
      spec:
        components:
          # choose based on available kafka broker replicas
          - name: KAFKA_REPLICATION_FACTOR
            value: "3"
          # get via "kubectl get pods --show-labels -n <kafka-namespace>"
          - name: KAFKA_LABEL
            value: "app=cp-kafka"
          - name: KAFKA_NAMESPACE
            value: "default"
          # get via "kubectl get svc -n <kafka-namespace>"
          - name: KAFKA_SERVICE
            value: "kafka-cp-kafka-headless"
          # get via "kubectl get svc -n <kafka-namespace>"
          - name: KAFKA_PORT
            value: "9092"
          # in milliseconds
          - name: KAFKA_CONSUMER_TIMEOUT
            value: "70000"
          # ensure to set the instance name if using KUDO operator
          - name: KAFKA_INSTANCE_NAME
            value: ""
          - name: ZOOKEEPER_NAMESPACE
            value: "default"
          # get via "kubectl get pods --show-labels -n <zk-namespace>"
          - name: ZOOKEEPER_LABEL
            value: "app=cp-zookeeper"
          # get via "kubectl get svc -n <zk-namespace>"
          - name: ZOOKEEPER_SERVICE
            value: "kafka-cp-zookeeper-headless"
          # get via "kubectl get svc -n <zk-namespace>"
          - name: ZOOKEEPER_PORT
            value: "2181"
          # set chaos duration (in sec) as desired
          - name: TOTAL_CHAOS_DURATION
            value: "60"
          # set chaos interval (in sec) as desired
          - name: CHAOS_INTERVAL
            value: "20"
          # pod failures without '--force' & default terminationGracePeriodSeconds
          - name: FORCE
            value: "false"
```
### Create the ChaosEngine Resource

- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.

`kubectl apply -f chaosengine.yml`

### Watch Chaos progress

- View pod terminations & recovery by setting up a watch on the pods in the Kafka namespace

`watch -n 1 kubectl get pods -n <kafka-namespace>`

### Check Chaos Experiment Result

- Check whether the kafka deployment is resilient to the broker pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.

`kubectl describe chaosresult kafka-chaos-kafka-broker-pod-failure -n <kafka-namespace>`

## Kafka Broker Pod Failure Demo
- TODO: add a sample recording of this experiment execution here.
---

@ -1,25 +1,26 @@
---
id: node-drain
title: Node Drain Experiment Details
sidebar_label: Node Drain
original_id: node-drain
---
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | -------------------------------------------------- | -------------------------------------- |
| Generic | Drain the node where application pod is scheduled. | GKE, AWS, Packet(Kubeadm), Konvoy(AWS) |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `node-drain` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/drain-node)
- Ensure that the node specified in the experiment ENV variable `APP_NODE` (the node which will be drained) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest), so that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:

  - Get node names against the applications pods: `kubectl get pods -o wide`
  - Cordon the node `kubectl cordon <nodename>`
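The cordon steps above, end-to-end (the node name `node-1` is illustrative):

```console
# find the node hosting the application pod
kubectl get pods -o wide

# cordon it so the litmus runner pods are not scheduled there
kubectl cordon node-1

# the cordoned node should now report SchedulingDisabled
kubectl get nodes
```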
## Entry Criteria
@ -34,7 +35,6 @@ original_id: node-drain
- This experiment drains the node where the application pod is running and verifies if it is scheduled on another available node.
- At the end of the experiment, it uncordons the specified node so that it can be utilised in the future.
## Integrations
- Drain node can be effected using the chaos library: `litmus`
@ -68,12 +68,21 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "extensions"]
resources:
[
"pods",
"jobs",
"chaosengines",
"daemonsets",
"pods/eviction",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -86,16 +95,16 @@ roleRef:
kind: ClusterRole
name: nginx-sa
subjects:
  - kind: ServiceAccount
    name: nginx-sa
    namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
#### Supported Experiment Tunables
@ -130,12 +139,12 @@ metadata:
  namespace: default
spec:
  # It can be app/infra
  chaosType: "infra"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  appinfo:
    appns: default
    applabel: "app=nginx"
    appkind: deployment
  chaosServiceAccount: nginx-sa
  monitoring: false
@ -149,9 +158,9 @@ spec:
    - name: node-drain
      spec:
        components:
          # set node name
          - name: APP_NODE
            value: "node-1"
```
### Create the ChaosEngine Resource
@ -174,4 +183,4 @@ spec:
## Node Drain Experiment Demo [TODO]
- A sample recording of this experiment execution is provided here.

@ -1,16 +1,17 @@
---
id: openebs-pool-container-failure
title: OpenEBS Pool Container Failure Experiment Details
sidebar_label: Pool Container Failure
original_id: openebs-pool-container-failure
---
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | ----------------------------------------------------------------- | ----------------------------------------------------------------- |
| OpenEBS | Kill the cstor pool pod container and check if it gets created again | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
## Prerequisites
@ -26,13 +27,14 @@ original_id: openebs-pool-container-failure
metadata:
name: openebs-pool-container-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
## Entry Criteria
@ -46,8 +48,8 @@ original_id: openebs-pool-container-failure
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -74,13 +76,13 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ----------------------| ------------------------------------------------------------ |-----------|------------------------------------------------------------|
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC must use OpenEBS cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset`| |
| LIB_IMAGE | The chaos library image used to inject the latency | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `gaiaadm/pumba:0.4.8`|
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post pod kill | Optional | Defaults to 600 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
| Variables | Description | Type | Notes |
| -------------------- | ------------------------------------------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------- | --- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC must use OpenEBS cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset` | |
| LIB_IMAGE | The chaos library image used to inject the latency | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `gaiaadm/pumba:0.4.8` |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post pod kill | Optional | Defaults to 600 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
#### Sample ChaosEngine Manifest
@ -93,7 +95,7 @@ metadata:
spec:
appinfo:
appns: default
applabel: 'app=percona'
applabel: "app=percona"
appkind: deployment
chaosServiceAccount: percona-sa
monitoring: false
@ -103,9 +105,9 @@ spec:
spec:
components:
- name: APP_PVC
value: 'pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d'
value: "pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d"
- name: DEPLOY_TYPE
value: deployment
value: deployment
```
### Create the ChaosEngine Resource
@ -122,11 +124,11 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool pod container failure, once the experiment (job) is completed. The ChaosResult resource naming convention
- Check whether the application is resilient to the pool pod container failure, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult target-chaos-openebs-pool-container-failure -n <application-namespace>`
## OpenEBS Pool Container Failure Demo [TODO]
- A sample recording of this experiment execution is provided here.
- A sample recording of this experiment execution is provided here.
@ -1,16 +1,17 @@
---
id: version-1.0.0-openebs-pool-pod-failure
id: openebs-pool-pod-failure
title: OpenEBS Pool Pod Failure Experiment Details
sidebar_label: Pool Pod Failure
original_id: openebs-pool-pod-failure
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| ------------------------ | ------------------------------------------------------------------|
| OpenEBS | Kill the cstor pool pod and check if gets created again | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
| Type | Description | Tested K8s Platform |
| ------- | ------------------------------------------------------- | ----------------------------------------------------------------- |
| OpenEBS | Kill the cStor pool pod and check if it gets created again | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
## Prerequisites
@ -26,13 +27,14 @@ original_id: openebs-pool-pod-failure
metadata:
name: openebs-pool-pod-failure
data:
parameters.yml: |
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -46,8 +48,8 @@ original_id: openebs-pool-pod-failure
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -72,12 +74,12 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ----------------------| ------------------------------------------------------------ |-----------|------------------------------------------------------------|
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC must use OpenEBS cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset`| |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post pod kill | Optional | Defaults to 600 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
| Variables | Description | Type | Notes |
| -------------------- | ------------------------------------------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------- | --- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC must use OpenEBS cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset` | |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post pod kill | Optional | Defaults to 600 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
#### Sample ChaosEngine Manifest
@ -90,7 +92,7 @@ metadata:
spec:
appinfo:
appns: default
applabel: 'app=percona'
applabel: "app=percona"
appkind: deployment
chaosServiceAccount: percona-sa
monitoring: false
@ -100,11 +102,11 @@ spec:
spec:
components:
- name: FORCE
value: 'true'
value: "true"
- name: APP_PVC
value: 'pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d'
value: "pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d"
- name: DEPLOY_TYPE
value: deployment
value: deployment
```
### Create the ChaosEngine Resource
@ -121,7 +123,7 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool pod failure, once the experiment (job) is completed. The ChaosResult resource naming convention
- Check whether the application is resilient to the pool pod failure, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult target-chaos-openebs-pool-pod-failure -n <application-namespace>`
@ -1,16 +1,17 @@
---
id: version-1.0.0-openebs-target-container-failure
id: openebs-target-container-failure
title: OpenEBS Target Failure Experiment Details
sidebar_label: Target Container Failure
original_id: openebs-target-container-failure
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| ------------------------ | ------------------------------------------------------------------|
| OpenEBS | Kill the cStor target/Jiva controller container | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
| Type | Description | Tested K8s Platform |
| ------- | ----------------------------------------------- | ----------------------------------------------------------------- |
| OpenEBS | Kill the cStor target/Jiva controller container | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
## Prerequisites
@ -26,13 +27,14 @@ original_id: openebs-target-container-failure
metadata:
name: openebs-target-container-failure
data:
parameters.yml: |
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -46,8 +48,8 @@ original_id: openebs-target-container-failure
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -59,8 +61,8 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
## Integrations
- Container kill is achieved using the `pumba` chaos library in case of docker runtime, & `litmuslib` using `crictl` tool in case of containerd runtime.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
- Container kill is achieved using the `pumba` chaos library in case of docker runtime, and via `litmuslib` (which uses the `crictl` tool) in case of containerd runtime.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
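The runtime/lib selection described above maps onto the experiment's env tunables. A minimal sketch for a containerd cluster follows, using the runtime and image values listed in the tunables table below; treat it as an assumption to verify against your chart version rather than a definitive configuration:

```yaml
# Fragment of a ChaosEngine experiment entry (see the full sample manifest below)
experiments:
  - name: openebs-target-container-failure
    spec:
      components:
        # Runtime of the cluster nodes; defaults to docker
        - name: CONTAINER_RUNTIME
          value: "containerd"
        # Lib image matching the runtime, per the tunables table
        - name: LIB_IMAGE
          value: "gprasath/crictl:ci"
```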
## Steps to Execute the Chaos Experiment
@ -75,15 +77,15 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ----------------------| ------------------------------------------------------------ |-----------|------------------------------------------------------------|
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset`|
| CONTAINER_RUNTIME | The container runtime used in the Kubernetes Cluster | Optional | Defaults to `docker`. Supported: `docker`, `containerd` |
| LIB_IMAGE | The chaos library image used to run the kill command | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `{docker : gaiaadm/pumba:0.4.8, containerd: gprasath/crictl:ci}` |
| TARGET_CONTAINER | The container to be killed in the storage controller pod | Optional | Defaults to `cstor-volume-mgmt` |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post container kill | Optional | Defaults to 60 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
| Variables | Description | Type | Notes |
| -------------------- | ------------------------------------------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset` |
| CONTAINER_RUNTIME | The container runtime used in the Kubernetes Cluster | Optional | Defaults to `docker`. Supported: `docker`, `containerd` |
| LIB_IMAGE | The chaos library image used to run the kill command | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `{docker : gaiaadm/pumba:0.4.8, containerd: gprasath/crictl:ci}` |
| TARGET_CONTAINER | The container to be killed in the storage controller pod | Optional | Defaults to `cstor-volume-mgmt` |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post container kill | Optional | Defaults to 60 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
#### Sample ChaosEngine Manifest
@ -96,7 +98,7 @@ metadata:
spec:
appinfo:
appns: default
applabel: 'app=percona'
applabel: "app=percona"
appkind: deployment
chaosServiceAccount: percona-sa
monitoring: false
@ -106,11 +108,11 @@ spec:
spec:
components:
- name: TARGET_CONTAINER
value: 'cstor-istgt'
value: "cstor-istgt"
- name: APP_PVC
value: 'pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d'
value: "pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d"
- name: DEPLOY_TYPE
value: deployment
value: deployment
```
### Create the ChaosEngine Resource
@ -127,7 +129,7 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult target-chaos-openebs-target-container-failure -n <application-namespace>`
@ -135,4 +137,3 @@ spec:
## OpenEBS Target Container Failure Demo [TODO]
- A sample recording of this experiment execution is provided here.
@ -1,16 +1,17 @@
---
id: version-1.0.0-openebs-target-network-delay
id: openebs-target-network-delay
title: OpenEBS Target Network Latency Experiment Details
sidebar_label: Target Network Latency
original_id: openebs-target-network-delay
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| ------------------------ | ------------------------------------------------------------------|
| OpenEBS | Induce latency into the cStor target/Jiva controller container | GKE, Konvoy(AWS), Packet(Kubeadm), OpenShift(Baremetal) |
| Type | Description | Tested K8s Platform |
| ------- | -------------------------------------------------------------- | ------------------------------------------------------- |
| OpenEBS | Induce latency into the cStor target/Jiva controller container | GKE, Konvoy(AWS), Packet(Kubeadm), OpenShift(Baremetal) |
## Prerequisites
@ -27,13 +28,14 @@ original_id: openebs-target-network-delay
metadata:
name: openebs-target-network-delay
data:
parameters.yml: |
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -47,8 +49,8 @@ original_id: openebs-target-network-delay
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -59,8 +61,8 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
## Integrations
- Network delay is achieved using the `pumba` chaos library in case of docker runtime. Support for other other runtimes via tc direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
- Network delay is achieved using the `pumba` chaos library in case of docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
## Steps to Execute the Chaos Experiment
@ -75,16 +77,16 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ----------------------| ------------------------------------------------------------ |-----------|------------------------------------------------------------|
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset`|
| CONTAINER_RUNTIME | The container runtime used in the Kubernetes Cluster | Optional | Defaults to `docker`. Supported: `docker` |
| LIB_IMAGE | The chaos library image used to inject the latency | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `gaiaadm/pumba:0.4.8`|
| TARGET_CONTAINER | The container into which delays are injected in the storage controller pod | Optional | Defaults to `cstor-istgt` |
| NETWORK_DELAY | Egress delay injected into the target container | Optional | Defaults to 60000 milliseconds (60s) |
| TOTAL_CHAOS_DURATION | Total duration for which latency is injected | Optional | Defaults to 60000 milliseconds (60s) |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
| Variables | Description | Type | Notes |
| -------------------- | -------------------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset` |
| CONTAINER_RUNTIME | The container runtime used in the Kubernetes Cluster | Optional | Defaults to `docker`. Supported: `docker` |
| LIB_IMAGE | The chaos library image used to inject the latency | Optional | Defaults to `gaiaadm/pumba:0.4.8`. Supported: `gaiaadm/pumba:0.4.8` |
| TARGET_CONTAINER | The container into which delays are injected in the storage controller pod | Optional | Defaults to `cstor-istgt` |
| NETWORK_DELAY | Egress delay injected into the target container | Optional | Defaults to 60000 milliseconds (60s) |
| TOTAL_CHAOS_DURATION | Total duration for which latency is injected | Optional | Defaults to 60000 milliseconds (60s) |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
#### Sample ChaosEngine Manifest
@ -97,7 +99,7 @@ metadata:
spec:
appinfo:
appns: default
applabel: 'app=percona'
applabel: "app=percona"
appkind: deployment
chaosServiceAccount: percona-sa
monitoring: false
@ -107,15 +109,15 @@ spec:
spec:
components:
- name: TARGET_CONTAINER
value: 'cstor-istgt'
value: "cstor-istgt"
- name: APP_PVC
value: 'pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d'
value: "pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d"
- name: DEPLOY_TYPE
value: deployment
value: deployment
- name: NETWORK_DELAY
value: '30000'
value: "30000"
- name: TOTAL_CHAOS_DURATION
value: '60000'
value: "60000"
```
### Create the ChaosEngine Resource
@ -133,7 +135,7 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the target network delays, once the experiment (job) is completed. The ChaosResult resource naming
- Check whether the application is resilient to the target network delays, once the experiment (job) is completed. The ChaosResult resource naming
convention is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult target-chaos-openebs-target-network-delay -n <application-namespace>`
@ -1,5 +1,5 @@
---
id: version-1.0.0-openebs-target-network-loss
id: openebs-target-network-loss
title: OpenEBS Target Network Loss Experiment Details
sidebar_label: Target Network Loss
original_id: openebs-target-network-loss
@ -1,16 +1,17 @@
---
id: version-1.0.0-openebs-target-pod-failure
id: openebs-target-pod-failure
title: OpenEBS Target Pod Failure Experiment Details
sidebar_label: Target Pod Failure
original_id: openebs-target-pod-failure
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| ------------------------ | ------------------------------------------------------------------|
| OpenEBS | Kill the cstor/jiva target/controller pod and check if gets created again | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
| Type | Description | Tested K8s Platform |
| ------- | ------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| OpenEBS | Kill the cStor/Jiva target/controller pod and check if it gets created again | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, OpenShift(Baremetal) |
## Prerequisites
@ -26,13 +27,14 @@ original_id: openebs-target-pod-failure
metadata:
name: openebs-target-pod-failure
data:
parameters.yml: |
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -46,8 +48,8 @@ original_id: openebs-target-pod-failure
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -72,12 +74,12 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ----------------------| ------------------------------------------------------------ |-----------|------------------------------------------------------------|
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset`| |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post container kill | Optional | Defaults to 60 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
| Variables | Description | Type | Notes |
| -------------------- | ------------------------------------------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------- | --- |
| APP_PVC | The PersistentVolumeClaim used by the stateful application | Mandatory | PVC may use either OpenEBS Jiva/cStor storage class |
| DEPLOY_TYPE | Type of Kubernetes resource used by the stateful application | Optional | Defaults to `deployment`. Supported: `deployment`, `statefulset` | |
| TOTAL_CHAOS_DURATION | Amount of soak time for I/O post container kill | Optional | Defaults to 60 seconds |
| DATA_PERSISTENCE | Flag to perform data consistency checks on the application | Optional | Default value is disabled (empty/unset). Set to `enabled` to perform data checks. Ensure configmap with app details are created |
#### Sample ChaosEngine Manifest
@ -90,7 +92,7 @@ metadata:
spec:
appinfo:
appns: default
applabel: 'app=percona'
applabel: "app=percona"
appkind: deployment
chaosServiceAccount: percona-sa
monitoring: false
@ -100,11 +102,11 @@ spec:
spec:
components:
- name: FORCE
value: 'true'
value: "true"
- name: APP_PVC
value: 'pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d'
value: "pvc-c466262a-a5f2-4f0f-b594-5daddfc2e29d"
- name: DEPLOY_TYPE
value: deployment
value: deployment
```
### Create the ChaosEngine Resource
@ -121,11 +123,11 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult target-chaos-openebs-target-pod-failure -n <application-namespace>`
## OpenEBS Target Pod Failure Demo [TODO]
- A sample recording of this experiment execution is provided here.
- A sample recording of this experiment execution is provided here.
@ -0,0 +1,40 @@
---
id: plugins
title: Using other chaos libraries as plugins
sidebar_label: Plugins
original_id: plugins
---
---
Litmus provides a way to use any chaos library or tool to inject chaos. To be compatible with Litmus, a chaos tool should satisfy the following requirements:
- Should be available as a Docker Image
- Should take configuration through a `config-map`
The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib
The Litmus project has integrations with the following chaos libraries.
| Chaos Library | Logo | Experiments covered |
| ------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| <a href="https://github.com/litmuschaos/litmus-ansible" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50"/> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog` |
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"/> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"/> | PowerfulSeal provides chaos capabilities for `pod-kill` |
The plugin to use is specified as a configuration parameter inside the chaos experiment.
> Add an example snippet here.
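A minimal sketch of what such a snippet could look like, assuming the generic `container-kill` experiment and its `LIB`/`LIB_IMAGE` tunables; the names and values here are illustrative, so check the documented tunables of your experiment before use:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  appinfo:
    appns: default
    applabel: "app=nginx"
    appkind: deployment
  chaosServiceAccount: nginx-sa
  experiments:
    - name: container-kill
      spec:
        components:
          # Select the chaos library (plugin) that injects the fault
          - name: LIB
            value: "pumba"
          # Image of the selected plugin
          - name: LIB_IMAGE
            value: "gaiaadm/pumba:0.4.8"
```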
<br/>
<br/>
<hr/>
<br/>
<br/>
@ -1,16 +1,17 @@
---
id: version-1.0.0-pod-cpu-hog
id: pod-cpu-hog
title: Pod CPU Hog Details
sidebar_label: Pod CPU Hog
original_id: pod-cpu-hog
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| -------------------------------------------- | ------------------------------------------------------------------|
| Generic | Consume CPU resources on the application container| GKE, Packet(Kubeadm), Minikube |
| Type | Description | Tested K8s Platform |
| ------- | -------------------------------------------------- | ------------------------------ |
| Generic | Consume CPU resources on the application container | GKE, Packet(Kubeadm), Minikube |
## Prerequisites
@ -28,10 +29,9 @@ original_id: pod-cpu-hog
## Details
- This experiment consumes the CPU resources on the application container (upward of 80%) on specified number of cores
- This experiment consumes the CPU resources on the application container (upward of 80%) on a specified number of cores
- It simulates conditions where app pods experience CPU spikes due to expected or undesired processes, thereby testing how the
overall application stack behaves when this occurs.
overall application stack behaves when this occurs.
## Integrations
@ -66,9 +66,10 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -81,16 +82,16 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
- Override the experiment tunables if desired
#### Supported Experiment Tunables
@ -138,12 +139,12 @@ metadata:
namespace: default
spec:
# It can be app/infra
chaosType: 'app'
#ex. values: ns1:name=percona,ns2:run=nginx
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: default
applabel: 'app=nginx'
applabel: "app=nginx"
appkind: deployment
chaosServiceAccount: nginx-sa
monitoring: false
@ -158,15 +159,14 @@ spec:
spec:
components:
- name: TARGET_CONTAINER
value: 'nginx'
value: "nginx"
#number of cpu cores to be consumed
#verify the resources the app has been launched with
- name: CPU_CORES
value: "1"
# in ms
# in ms
- name: TOTAL_CHAOS_DURATION
value: "60000"
value: "60000"
```
### Create the ChaosEngine Resource
@ -187,6 +187,6 @@ spec:
`kubectl describe chaosresult nginx-chaos-pod-cpu-hog -n <application-namespace>`
## Pod CPU Hog Experiment Demo
## Pod CPU Hog Experiment Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/MBGSPmZKb2I).
- A sample recording of this experiment execution is provided [here](https://youtu.be/MBGSPmZKb2I).
@ -1,21 +1,22 @@
---
id: version-1.0.0-pod-delete
id: pod-delete
title: Pod Delete Experiment Details
sidebar_label: Pod Delete
original_id: pod-delete
---
------
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ----------| ------------------------ | ------------------------------------------------------------------|
| Generic | Fail the application pod | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube |
| Type | Description | Tested K8s Platform |
| ------- | ------------------------ | ------------------------------------------- |
| Generic | Fail the application pod | GKE, Konvoy(AWS), Packet(Kubeadm), Minikube |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-delete)
## Entry Criteria
@ -65,12 +66,21 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -83,10 +93,9 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
@ -96,14 +105,13 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| -------------------- | --------------------------------------------------- | -------- | ---------------------------------------------------------------------------------------- |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 15s |
| CHAOS_INTERVAL | Time interval b/w two successive pod failures (sec) | Optional | Defaults to 5s |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to `litmus`. Supported: `litmus`, `powerfulseal` |
| FORCE                | Application Pod failures type                       | Optional | Defaults to `true`, with `terminationGracePeriodSeconds=0`                               |
| KILL_COUNT           | No. of application pods to be deleted               | Optional | Defaults to `1`; kill_count > 1 is supported only by the `litmus` lib, not by `powerfulseal` |
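As a rough mental model of the tunables above: if the litmus lib deletes a batch of `KILL_COUNT` pods every `CHAOS_INTERVAL` seconds across `TOTAL_CHAOS_DURATION`, the number of delete rounds can be estimated as below (a sketch of the arithmetic only, not the operator's exact loop):

```python
import math

def expected_delete_rounds(total_chaos_duration: int = 15, chaos_interval: int = 5) -> int:
    """Estimate how many pod-delete rounds fit in the chaos window (defaults from the table)."""
    return math.ceil(total_chaos_duration / chaos_interval)

print(expected_delete_rounds())        # defaults: 15s window / 5s interval -> 3
print(expected_delete_rounds(30, 10))  # 30s window / 10s interval -> 3
```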
#### Sample ChaosEngine Manifest
@ -116,11 +124,11 @@ metadata:
spec:
appinfo:
appns: default
applabel: "app=nginx"
appkind: deployment
# It can be app/infra
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
chaosServiceAccount: nginx-sa
monitoring: false
@ -129,17 +137,17 @@ spec:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
# It can be delete/retain
jobCleanUpPolicy: delete
experiments:
- name: pod-delete
spec:
components:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "30"
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: "10"
# pod failures without '--force' & default terminationGracePeriodSeconds
- name: FORCE
value: "false"
@ -165,4 +173,4 @@ spec:
## Application Pod Failure Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/X3JvY_58V9A)

@ -1,21 +1,23 @@
---
id: pod-network-corruption
title: Pod Network Corruption Experiment Details
sidebar_label: Pod Network Corruption
original_id: pod-network-corruption
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | ----------------------------------------------------- | --------------------------------------- |
| Generic | Inject Network Packet Corruption Into Application Pod | GKE, Packet(Kubeadm), Minikube > v1.6.0 |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `pod-network-corruption` experiment resource is available in the cluster by executing the `kubectl get chaosexperiments` command. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-corruption)
- Cluster must run docker container runtime
<div class="danger">
<strong>NOTE</strong>:
@ -32,7 +34,7 @@ original_id: pod-network-corruption
## Details
- The application pod should be healthy once chaos is stopped. Service-requests should be served despite chaos.
- Injects packet corruption on the specified container by starting a traffic control (tc) process with netem rules to add egress packet corruption
- Corruption is injected via the pumba library using the `pumba netem corruption` command, passing the relevant network interface, packet-corruption percentage, chaos duration, and a regex filter for the container name
- Can test the application's resilience to lossy/flaky network
@ -66,9 +68,10 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -81,10 +84,9 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
@ -94,16 +96,16 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ------------------------------------ | ----------------------------------------------------------- | --------- | --------------------------------- |
| NETWORK_INTERFACE | Name of ethernet interface considered for shaping traffic | Mandatory | |
| TARGET_CONTAINER | Name of container which is subjected to network latency | Mandatory | |
| NETWORK_PACKET_CORRUPTION_PERCENTAGE | Packet corruption in percentage | Mandatory | Default (100) |
| LIB | The chaos lib used to inject the chaos eg. Pumba | Optional | only `pumba` supported currently |
| CHAOSENGINE | ChaosEngine CR name associated with the experiment instance | Optional | |
| CHAOS_SERVICE_ACCOUNT                | Service account used by the pumba daemonset                 | Optional  |                                   |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion in milliseconds | Optional | Default (60000ms) |
| LIB_IMAGE | The pumba image used to run the kill command | Optional | Defaults to `gaiaadm/pumba:0.6.5` |
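Before applying the engine, it can help to confirm the mandatory tunables from the table are all set. A minimal pre-flight sketch (the function and variable names are illustrative, not a Litmus API):

```python
# Illustrative pre-flight check (not a Litmus API): verify the mandatory
# tunables from the table above appear in the experiment's components list.
MANDATORY = {
    "NETWORK_INTERFACE",
    "TARGET_CONTAINER",
    "NETWORK_PACKET_CORRUPTION_PERCENTAGE",
}

def missing_mandatory(components):
    """Return the mandatory env names absent from a ChaosEngine components list."""
    provided = {c["name"] for c in components}
    return MANDATORY - provided

envs = [
    {"name": "TARGET_CONTAINER", "value": "nginx"},
    {"name": "NETWORK_INTERFACE", "value": "eth0"},
]
print(missing_mandatory(envs))  # the corruption percentage is still unset
```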
#### Sample ChaosEngine Manifest
@ -111,21 +113,21 @@ subjects:
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-network-chaos
namespace: default
spec:
# It can be delete/retain
jobCleanUpPolicy: delete
# It can be app/infra
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
components:
runner:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
appinfo:
appns: default
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: "app=nginx"
@ -135,15 +137,16 @@ spec:
- name: pod-network-corruption
spec:
components:
- name: ANSIBLE_STDOUT_CALLBACK
value: default
- name: TARGET_CONTAINER
#Container name where chaos has to be injected
value: "nginx"
- name: NETWORK_INTERFACE
#Network interface inside target container
value: eth0
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
@ -152,7 +155,7 @@ spec:
### Watch Chaos progress
- View impact of network packet corruption on the affected pod from the cluster nodes (alternate is to setup ping to a remote IP from inside the target pod)
`ping <pod_ip_address>`
@ -162,7 +165,6 @@ spec:
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Packet Corruption Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/kSiLrIaILvs).

@ -1,22 +1,23 @@
---
id: pod-network-latency
title: Pod Network Latency Experiment Details
sidebar_label: Pod Network Latency
original_id: pod-network-latency
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | ------------------------------------------- | --------------------------------------- |
| Generic | Inject Network Latency Into Application Pod | GKE, Packet(Kubeadm), Minikube > v1.6.0 |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `pod-network-latency` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-latency)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
@ -32,10 +33,10 @@ original_id: pod-network-latency
## Details
- The application pod should be healthy once chaos is stopped. Service-requests should be served despite chaos.
- Causes flaky access to application replica by injecting network delay using pumba.
- Injects latency on the specified container by starting a traffic control (tc) process with netem rules to add egress delays
- Latency is injected via the pumba library using the `pumba netem delay` command, passing the relevant network interface, latency, chaos duration, and a regex filter for the container name
- Can test the application's resilience to lossy/flaky network
## Steps to Execute the Chaos Experiment
@ -67,9 +68,10 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -82,10 +84,9 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
@ -95,15 +96,15 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| --------------------- | ----------------------------------------------------------- | --------- | ----------------- |
| NETWORK_INTERFACE | Name of ethernet interface considered for shaping traffic | Mandatory | |
| TARGET_CONTAINER | Name of container which is subjected to network latency | Mandatory | |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion in milliseconds | Optional | Default (60000ms) |
| NETWORK_LATENCY | The latency/delay in milliseconds | Optional | Default (60000ms) |
| LIB | The chaos lib used to inject the chaos eg. Pumba | Optional | |
| CHAOSENGINE | ChaosEngine CR name associated with the experiment instance | Optional | |
| CHAOS_SERVICE_ACCOUNT | Service account used by the pumba daemonset                 | Optional  |                   |
#### Sample ChaosEngine Manifest
@ -111,21 +112,21 @@ subjects:
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-network-chaos
namespace: default
spec:
# It can be delete/retain
jobCleanUpPolicy: delete
# It can be app/infra
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
components:
runner:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
appinfo:
appns: default
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: "app=nginx"
@ -135,23 +136,24 @@ spec:
- name: pod-network-latency
spec:
components:
- name: ANSIBLE_STDOUT_CALLBACK
value: default
- name: TARGET_CONTAINER
#Container name where chaos has to be injected
value: "nginx"
- name: NETWORK_INTERFACE
#Network interface inside target container
value: eth0
- name: LIB_IMAGE
value: gaiaadm/pumba:0.6.5
- name: NETWORK_LATENCY
value: "2000"
- name: TOTAL_CHAOS_DURATION
value: "60000"
- name: LIB
value: pumba
```
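The `auxiliaryAppInfo` field above takes entries of the form `ns1:name=percona,ns2:run=nginx`. A small sketch of how such a value decomposes into (namespace, label-selector) pairs (the parser is illustrative, not Litmus code):

```python
def parse_auxiliary_app_info(spec: str):
    """Split an auxiliaryAppInfo string like 'ns1:name=percona,ns2:run=nginx'
    into (namespace, label-selector) pairs."""
    if not spec:
        return []
    return [tuple(entry.split(":", 1)) for entry in spec.split(",")]

print(parse_auxiliary_app_info("ns1:name=percona,ns2:run=nginx"))
```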
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
@ -160,7 +162,7 @@ spec:
### Watch Chaos progress
- View network latency by setting up a ping on the affected pod from the cluster nodes
`ping <pod_ip_address>`
@ -170,7 +172,6 @@ spec:
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Latency Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/QsQZyXVCcCw).

@ -1,21 +1,22 @@
---
id: pod-network-loss
title: Pod Network Loss Experiment Details
sidebar_label: Pod Network Loss
original_id: pod-network-loss
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | --------------------------------------- | --------------------------------------- |
| Generic | Inject Packet Loss Into Application Pod | GKE, Packet(Kubeadm), Minikube > v1.6.0 |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `pod-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-loss)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
@ -32,10 +33,9 @@ original_id: pod-network-loss
## Details
- Pod-network-loss injects chaos to disrupt network connectivity to Kubernetes pods.
- The application pod should be healthy once chaos is stopped. Service-requests should be served despite chaos.
- Causes loss of access to application replica by injecting packet loss using pumba
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
@ -65,9 +65,10 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -80,10 +81,9 @@ roleRef:
kind: Role
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
@ -93,16 +93,16 @@ subjects:
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| ------------------------------ | ----------------------------------------------------------- | --------- | ------------------------------ |
| NETWORK_INTERFACE | Name of ethernet interface considered for shaping traffic | Mandatory | |
| TARGET_CONTAINER | Name of container which is subjected to network latency | Mandatory | |
| NETWORK_PACKET_LOSS_PERCENTAGE | The packet loss in percentage | Mandatory | |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion in milliseconds | Optional | Default (60000ms) |
| LIB | The chaos lib used to inject the chaos eg. Pumba | Optional | |
| LIB_IMAGE | The image used by the chaoslib to inject the chaos | Optional | Default: `gaiaadm/pumba:0.6.5` |
| CHAOSENGINE | ChaosEngine CR name associated with the experiment instance | Optional | |
| CHAOS_SERVICE_ACCOUNT          | Service account used by the pumba daemonset                 | Optional  |                                |
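To build intuition for `NETWORK_PACKET_LOSS_PERCENTAGE`: with loss percentage p, each egress packet is dropped with probability p/100, so of n probe packets roughly n * (100 - p) / 100 are expected through. A sketch of that arithmetic (not Litmus code):

```python
def expected_delivered(n_packets: int, loss_percentage: float) -> float:
    """Expected number of packets that survive a given egress loss percentage."""
    if not 0 <= loss_percentage <= 100:
        raise ValueError("loss percentage must be within [0, 100]")
    return n_packets * (100 - loss_percentage) / 100

print(expected_delivered(100, 100))  # sample manifest uses 100% loss -> 0.0 replies
print(expected_delivered(100, 30))   # 30% loss -> 70.0 of 100 probes expected
```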
#### Sample ChaosEngine Manifest
@ -117,41 +117,42 @@ spec:
# It can be delete/retain
jobCleanUpPolicy: delete
# It can be app/infra
chaosType: "app"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
components:
runner:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
appinfo:
appns: default
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: "app=nginx"
appkind: deployment
chaosServiceAccount: nginx-sa
experiments:
- name: pod-network-loss
spec:
components:
- name: ANSIBLE_STDOUT_CALLBACK
value: default
- name: TARGET_CONTAINER
#Container name where chaos has to be injected
value: "nginx"
- name: LIB_IMAGE
value: gaiaadm/pumba:0.6.5
- name: NETWORK_INTERFACE
#Network interface inside target container
value: eth0
- name: NETWORK_PACKET_LOSS_PERCENTAGE
value: "100"
- name: TOTAL_CHAOS_DURATION
value: "60000"
- name: LIB
value: pumba
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
@ -160,7 +161,7 @@ spec:
### Watch Chaos progress
- View network latency by setting up a ping on the affected pod from the cluster nodes
`ping <pod_ip_address>`
@ -170,7 +171,6 @@ spec:
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Loss Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/jqvYy-nWc_I).

@ -1,42 +1,40 @@
---
id: resources
title: Resources related to Chaos Engineering on Kubernetes
sidebar_label: Resources
original_id: resources
---
## Chaos Demos
### Getting Started
Use this video to learn how to get started with Litmus. You will learn how to install Litmus and how to inject a fault into your application using one of the experiments available at ChaosHub.
<a href="https://asciinema.org/a/G9TcXpgikLuGTBY7btIUNSuWN" target="_blank">
<img src={require("./assets/getstarted.svg").default} width="300"/>
</a>
<hr/>
## Reference Implementations
| Reference | Description |
| ------------------ | ---------------------------------------------------------------------- |
| https://openebs.ci | CNCF SandBox project uses Litmus chaos experiments in its CI pipelines |
| | |
| | |
<br/>
<br/>
<hr/>
<br/>
<br/>

@ -1,20 +1,21 @@
---
id: architecture
title: Litmus Architecture
sidebar_label: Architecture
original_id: architecture
---
<hr />
<img src={require("./assets/architecture.png").default} width="800" />
**Chaos-Operator**
Chaos-Operator watches for the ChaosEngine CR and executes the Chaos-Experiments mentioned in the CR. Chaos-Operator is namespace scoped. By default, it runs in `litmus` namespace. Once the experiment is completed, chaos-operator invokes chaos-exporter to export chaos metrics to a Prometheus database.
**Chaos-CRDs**
During installation, the following three CRDs are installed on the Kubernetes cluster.
`chaosengines.litmuschaos.io`
@ -22,34 +23,24 @@ During installation, the following three CRDs are installed on the Kubernetes cl
`chaosresults.litmuschaos.io`
**Chaos-Experiments**
Chaos Experiments are CRs and are available as YAML files on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>. For more details, visit the Chaos Hub [documentation](chaoshub.md).
**Chaos-Engine**
The ChaosEngine CR links an application to experiments. The user creates a ChaosEngine YAML specifying the application label and the experiments, and applies the CR. The CR is watched by the Chaos-Operator, and the chaos-experiments are executed on the given application.
**Chaos-Exporter**
Optionally, metrics can be exported to a Prometheus database. Chaos-Exporter implements the Prometheus metrics endpoint.
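Since Chaos-Exporter exposes a Prometheus metrics endpoint, scraping it is a standard Prometheus job. A hypothetical scrape config, assuming the exporter Service is named `chaos-exporter` in the `litmus` namespace and serves metrics on port 8080 (all three are assumptions; verify against your install):

```yaml
scrape_configs:
  - job_name: "chaos-exporter" # hypothetical job name
    metrics_path: /metrics
    static_configs:
      - targets:
          # assumed Service name/namespace/port; check your deployment
          - chaos-exporter.litmus.svc.cluster.local:8080
```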
<br />
<br />
<hr />
<br />
<br />

@ -0,0 +1,429 @@
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1349.33333328" height="862.4" font-family="Consolas, Menlo, 'Bitstream Vera Sans Mono', monospace, 'Powerline Symbols'" font-size="14px">
<style>
<!-- asciinema theme -->
.default-text-fill {fill: #cccccc}
.default-bg-fill {fill: #121314}
.c-0 {fill: #000000}
.c-1 {fill: #dd3c69}
.c-2 {fill: #4ebf22}
.c-3 {fill: #ddaf3c}
.c-4 {fill: #26b0d7}
.c-5 {fill: #b954e1}
.c-6 {fill: #54e1b9}
.c-7 {fill: #d9d9d9}
.c-8 {fill: #4d4d4d}
.c-9 {fill: #dd3c69}
.c-10 {fill: #4ebf22}
.c-11 {fill: #ddaf3c}
.c-12 {fill: #26b0d7}
.c-13 {fill: #b954e1}
.c-14 {fill: #54e1b9}
.c-15 {fill: #ffffff}
.c-8, .c-9, .c-10, .c-11, .c-12, .c-13, .c-14, .c-15 {font-weight: bold}
<!-- 256 colors -->
.c-16 {fill: #000000}
.c-17 {fill: #00005f}
.c-18 {fill: #000087}
.c-19 {fill: #0000af}
.c-20 {fill: #0000d7}
.c-21 {fill: #0000ff}
.c-22 {fill: #005f00}
.c-23 {fill: #005f5f}
.c-24 {fill: #005f87}
.c-25 {fill: #005faf}
.c-26 {fill: #005fd7}
.c-27 {fill: #005fff}
.c-28 {fill: #008700}
.c-29 {fill: #00875f}
.c-30 {fill: #008787}
.c-31 {fill: #0087af}
.c-32 {fill: #0087d7}
.c-33 {fill: #0087ff}
.c-34 {fill: #00af00}
.c-35 {fill: #00af5f}
.c-36 {fill: #00af87}
.c-37 {fill: #00afaf}
.c-38 {fill: #00afd7}
.c-39 {fill: #00afff}
.c-40 {fill: #00d700}
.c-41 {fill: #00d75f}
.c-42 {fill: #00d787}
.c-43 {fill: #00d7af}
.c-44 {fill: #00d7d7}
.c-45 {fill: #00d7ff}
.c-46 {fill: #00ff00}
.c-47 {fill: #00ff5f}
.c-48 {fill: #00ff87}
.c-49 {fill: #00ffaf}
.c-50 {fill: #00ffd7}
.c-51 {fill: #00ffff}
.c-52 {fill: #5f0000}
.c-53 {fill: #5f005f}
.c-54 {fill: #5f0087}
.c-55 {fill: #5f00af}
.c-56 {fill: #5f00d7}
.c-57 {fill: #5f00ff}
.c-58 {fill: #5f5f00}
.c-59 {fill: #5f5f5f}
.c-60 {fill: #5f5f87}
.c-61 {fill: #5f5faf}
.c-62 {fill: #5f5fd7}
.c-63 {fill: #5f5fff}
.c-64 {fill: #5f8700}
.c-65 {fill: #5f875f}
.c-66 {fill: #5f8787}
.c-67 {fill: #5f87af}
.c-68 {fill: #5f87d7}
.c-69 {fill: #5f87ff}
.c-70 {fill: #5faf00}
.c-71 {fill: #5faf5f}
.c-72 {fill: #5faf87}
.c-73 {fill: #5fafaf}
.c-74 {fill: #5fafd7}
.c-75 {fill: #5fafff}
.c-76 {fill: #5fd700}
.c-77 {fill: #5fd75f}
.c-78 {fill: #5fd787}
.c-79 {fill: #5fd7af}
.c-80 {fill: #5fd7d7}
.c-81 {fill: #5fd7ff}
.c-82 {fill: #5fff00}
.c-83 {fill: #5fff5f}
.c-84 {fill: #5fff87}
.c-85 {fill: #5fffaf}
.c-86 {fill: #5fffd7}
.c-87 {fill: #5fffff}
.c-88 {fill: #870000}
.c-89 {fill: #87005f}
.c-90 {fill: #870087}
.c-91 {fill: #8700af}
.c-92 {fill: #8700d7}
.c-93 {fill: #8700ff}
.c-94 {fill: #875f00}
.c-95 {fill: #875f5f}
.c-96 {fill: #875f87}
.c-97 {fill: #875faf}
.c-98 {fill: #875fd7}
.c-99 {fill: #875fff}
.c-100 {fill: #878700}
.c-101 {fill: #87875f}
.c-102 {fill: #878787}
.c-103 {fill: #8787af}
.c-104 {fill: #8787d7}
.c-105 {fill: #8787ff}
.c-106 {fill: #87af00}
.c-107 {fill: #87af5f}
.c-108 {fill: #87af87}
.c-109 {fill: #87afaf}
.c-110 {fill: #87afd7}
.c-111 {fill: #87afff}
.c-112 {fill: #87d700}
.c-113 {fill: #87d75f}
.c-114 {fill: #87d787}
.c-115 {fill: #87d7af}
.c-116 {fill: #87d7d7}
.c-117 {fill: #87d7ff}
.c-118 {fill: #87ff00}
.c-119 {fill: #87ff5f}
.c-120 {fill: #87ff87}
.c-121 {fill: #87ffaf}
.c-122 {fill: #87ffd7}
.c-123 {fill: #87ffff}
.c-124 {fill: #af0000}
.c-125 {fill: #af005f}
.c-126 {fill: #af0087}
.c-127 {fill: #af00af}
.c-128 {fill: #af00d7}
.c-129 {fill: #af00ff}
.c-130 {fill: #af5f00}
.c-131 {fill: #af5f5f}
.c-132 {fill: #af5f87}
.c-133 {fill: #af5faf}
.c-134 {fill: #af5fd7}
.c-135 {fill: #af5fff}
.c-136 {fill: #af8700}
.c-137 {fill: #af875f}
.c-138 {fill: #af8787}
.c-139 {fill: #af87af}
.c-140 {fill: #af87d7}
.c-141 {fill: #af87ff}
.c-142 {fill: #afaf00}
.c-143 {fill: #afaf5f}
.c-144 {fill: #afaf87}
.c-145 {fill: #afafaf}
.c-146 {fill: #afafd7}
.c-147 {fill: #afafff}
.c-148 {fill: #afd700}
.c-149 {fill: #afd75f}
.c-150 {fill: #afd787}
.c-151 {fill: #afd7af}
.c-152 {fill: #afd7d7}
.c-153 {fill: #afd7ff}
.c-154 {fill: #afff00}
.c-155 {fill: #afff5f}
.c-156 {fill: #afff87}
.c-157 {fill: #afffaf}
.c-158 {fill: #afffd7}
.c-159 {fill: #afffff}
.c-160 {fill: #d70000}
.c-161 {fill: #d7005f}
.c-162 {fill: #d70087}
.c-163 {fill: #d700af}
.c-164 {fill: #d700d7}
.c-165 {fill: #d700ff}
.c-166 {fill: #d75f00}
.c-167 {fill: #d75f5f}
.c-168 {fill: #d75f87}
.c-169 {fill: #d75faf}
.c-170 {fill: #d75fd7}
.c-171 {fill: #d75fff}
.c-172 {fill: #d78700}
.c-173 {fill: #d7875f}
.c-174 {fill: #d78787}
.c-175 {fill: #d787af}
.c-176 {fill: #d787d7}
.c-177 {fill: #d787ff}
.c-178 {fill: #d7af00}
.c-179 {fill: #d7af5f}
.c-180 {fill: #d7af87}
.c-181 {fill: #d7afaf}
.c-182 {fill: #d7afd7}
.c-183 {fill: #d7afff}
.c-184 {fill: #d7d700}
.c-185 {fill: #d7d75f}
.c-186 {fill: #d7d787}
.c-187 {fill: #d7d7af}
.c-188 {fill: #d7d7d7}
.c-189 {fill: #d7d7ff}
.c-190 {fill: #d7ff00}
.c-191 {fill: #d7ff5f}
.c-192 {fill: #d7ff87}
.c-193 {fill: #d7ffaf}
.c-194 {fill: #d7ffd7}
.c-195 {fill: #d7ffff}
.c-196 {fill: #ff0000}
.c-197 {fill: #ff005f}
.c-198 {fill: #ff0087}
.c-199 {fill: #ff00af}
.c-200 {fill: #ff00d7}
.c-201 {fill: #ff00ff}
.c-202 {fill: #ff5f00}
.c-203 {fill: #ff5f5f}
.c-204 {fill: #ff5f87}
.c-205 {fill: #ff5faf}
.c-206 {fill: #ff5fd7}
.c-207 {fill: #ff5fff}
.c-208 {fill: #ff8700}
.c-209 {fill: #ff875f}
.c-210 {fill: #ff8787}
.c-211 {fill: #ff87af}
.c-212 {fill: #ff87d7}
.c-213 {fill: #ff87ff}
.c-214 {fill: #ffaf00}
.c-215 {fill: #ffaf5f}
.c-216 {fill: #ffaf87}
.c-217 {fill: #ffafaf}
.c-218 {fill: #ffafd7}
.c-219 {fill: #ffafff}
.c-220 {fill: #ffd700}
.c-221 {fill: #ffd75f}
.c-222 {fill: #ffd787}
.c-223 {fill: #ffd7af}
.c-224 {fill: #ffd7d7}
.c-225 {fill: #ffd7ff}
.c-226 {fill: #ffff00}
.c-227 {fill: #ffff5f}
.c-228 {fill: #ffff87}
.c-229 {fill: #ffffaf}
.c-230 {fill: #ffffd7}
.c-231 {fill: #ffffff}
.c-232 {fill: #080808}
.c-233 {fill: #121212}
.c-234 {fill: #1c1c1c}
.c-235 {fill: #262626}
.c-236 {fill: #303030}
.c-237 {fill: #3a3a3a}
.c-238 {fill: #444444}
.c-239 {fill: #4e4e4e}
.c-240 {fill: #585858}
.c-241 {fill: #626262}
.c-242 {fill: #6c6c6c}
.c-243 {fill: #767676}
.c-244 {fill: #808080}
.c-245 {fill: #8a8a8a}
.c-246 {fill: #949494}
.c-247 {fill: #9e9e9e}
.c-248 {fill: #a8a8a8}
.c-249 {fill: #b2b2b2}
.c-250 {fill: #bcbcbc}
.c-251 {fill: #c6c6c6}
.c-252 {fill: #d0d0d0}
.c-253 {fill: #dadada}
.c-254 {fill: #e4e4e4}
.c-255 {fill: #eeeeee}
.br { font-weight: bold }
.it { font-style: italic }
.un { text-decoration: underline }
</style>
<rect width="100%" height="100%" class="default-bg-fill" />
<svg x="0.625%" y="1.136%" class="default-text-fill">
<g style="shape-rendering: optimizeSpeed">
<rect x="5.625%" y="6.818%" width="0.625%" height="19.7" class="c-7" />
<rect x="0.000%" y="95.455%" width="98.750%" height="19.7" class="c-2" />
</g>
<text class="default-text-fill">
<tspan y="0.000%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="5.625%">#</tspan><tspan x="6.875%">B</tspan><tspan x="7.500%">u</tspan><tspan x="8.125%">i</tspan><tspan x="8.750%">l</tspan><tspan x="9.375%">d</tspan><tspan x="10.625%">a</tspan><tspan x="11.250%">n</tspan><tspan x="11.875%">d</tspan><tspan x="13.125%">a</tspan><tspan x="13.750%">p</tspan><tspan x="14.375%">p</tspan><tspan x="15.000%">l</tspan><tspan x="15.625%">y</tspan><tspan x="16.875%">C</tspan><tspan x="17.500%">h</tspan><tspan x="18.125%">a</tspan><tspan x="18.750%">o</tspan><tspan x="19.375%">s</tspan><tspan x="20.000%">E</tspan><tspan x="20.625%">n</tspan><tspan x="21.250%">g</tspan><tspan x="21.875%">i</tspan><tspan x="22.500%">n</tspan><tspan x="23.125%">e</tspan><tspan x="24.375%">C</tspan><tspan x="25.000%">R</tspan><tspan x="26.250%">t</tspan><tspan x="26.875%">o</tspan><tspan x="28.125%">u</tspan><tspan x="28.750%">n</tspan><tspan x="29.375%">l</tspan><tspan x="30.000%">e</tspan><tspan x="30.625%">a</tspan><tspan x="31.250%">s</tspan><tspan x="31.875%">h</tspan><tspan x="33.125%">C</tspan><tspan x="33.750%">h</tspan><tspan x="34.375%">a</tspan><tspan x="35.000%">o</tspan><tspan x="35.625%">s</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">E</tspan><tspan x="50.625%">v</tspan><tspan x="51.250%">e</tspan><tspan x="51.875%">r</tspan><tspan x="52.500%">y</tspan><tspan x="53.750%">1</tspan><tspan x="54.375%">.</tspan><tspan x="55.000%">0</tspan><tspan x="55.625%">s</tspan><tspan x="56.250%">:</tspan><tspan x="57.500%">k</tspan><tspan x="58.125%">u</tspan><tspan x="58.750%">b</tspan><tspan x="59.375%">e</tspan><tspan x="60.000%">c</tspan><tspan x="60.625%">t</tspan><tspan x="61.250%">l</tspan><tspan x="62.500%">g</tspan><tspan x="63.125%">e</tspan><tspan x="63.750%">t</tspan><tspan 
x="65.000%">p</tspan><tspan x="65.625%">o</tspan><tspan x="83.750%">F</tspan><tspan x="84.375%">r</tspan><tspan x="85.000%">i</tspan><tspan x="86.250%">O</tspan><tspan x="86.875%">c</tspan><tspan x="87.500%">t</tspan><tspan x="89.375%">4</tspan><tspan x="90.625%">1</tspan><tspan x="91.250%">9</tspan><tspan x="91.875%">:</tspan><tspan x="92.500%">3</tspan><tspan x="93.125%">2</tspan><tspan x="93.750%">:</tspan><tspan x="94.375%">3</tspan><tspan x="95.000%">5</tspan><tspan x="96.250%">2</tspan><tspan x="96.875%">0</tspan><tspan x="97.500%">1</tspan><tspan x="98.125%">9</tspan>
</tspan>
<tspan y="2.273%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="4.545%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="5.625%">v</tspan><tspan x="6.250%">i</tspan><tspan x="7.500%">c</tspan><tspan x="8.125%">h</tspan><tspan x="8.750%">a</tspan><tspan x="9.375%">o</tspan><tspan x="10.000%">s</tspan><tspan x="10.625%">e</tspan><tspan x="11.250%">n</tspan><tspan x="11.875%">g</tspan><tspan x="12.500%">i</tspan><tspan x="13.125%">n</tspan><tspan x="13.750%">e</tspan><tspan x="14.375%">.</tspan><tspan x="15.000%">y</tspan><tspan x="15.625%">a</tspan><tspan x="16.250%">m</tspan><tspan x="16.875%">l</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">N</tspan><tspan x="50.625%">A</tspan><tspan x="51.250%">M</tspan><tspan x="51.875%">E</tspan><tspan x="69.375%">R</tspan><tspan x="70.000%">E</tspan><tspan x="70.625%">A</tspan><tspan x="71.250%">D</tspan><tspan x="71.875%">Y</tspan><tspan x="74.375%">S</tspan><tspan x="75.000%">T</tspan><tspan x="75.625%">A</tspan><tspan x="76.250%">T</tspan><tspan x="76.875%">U</tspan><tspan x="77.500%">S</tspan><tspan x="80.625%">R</tspan><tspan x="81.250%">E</tspan><tspan x="81.875%">S</tspan><tspan x="82.500%">T</tspan><tspan x="83.125%">A</tspan><tspan x="83.750%">R</tspan><tspan x="84.375%">T</tspan><tspan x="85.000%">S</tspan><tspan x="87.500%">A</tspan><tspan x="88.125%">G</tspan><tspan x="88.750%">E</tspan>
</tspan>
<tspan y="6.818%">
<tspan dy="1em" x="0.000%">c</tspan><tspan x="0.625%">h</tspan><tspan x="1.250%">a</tspan><tspan x="1.875%">o</tspan><tspan x="2.500%">s</tspan><tspan x="3.125%">:</tspan><tspan x="3.750%">~</tspan><tspan x="4.375%">$</tspan><tspan x="49.375%" class="c-2"></tspan><tspan x="50.000%">h</tspan><tspan x="50.625%">e</tspan><tspan x="51.250%">l</tspan><tspan x="51.875%">l</tspan><tspan x="52.500%">o</tspan><tspan x="53.125%">-</tspan><tspan x="53.750%">d</tspan><tspan x="54.375%">e</tspan><tspan x="55.000%">p</tspan><tspan x="55.625%">l</tspan><tspan x="56.250%">o</tspan><tspan x="56.875%">y</tspan><tspan x="57.500%">-</tspan><tspan x="58.125%">d</tspan><tspan x="58.750%">d</tspan><tspan x="59.375%">5</tspan><tspan x="60.000%">9</tspan><tspan x="60.625%">b</tspan><tspan x="61.250%">8</tspan><tspan x="61.875%">9</tspan><tspan x="62.500%">5</tspan><tspan x="63.125%">6</tspan><tspan x="63.750%">-</tspan><tspan x="64.375%">h</tspan><tspan x="65.000%">x</tspan><tspan x="65.625%">c</tspan><tspan x="66.250%">j</tspan><tspan x="66.875%">v</tspan><tspan x="69.375%">1</tspan><tspan x="70.000%">/</tspan><tspan x="70.625%">1</tspan><tspan x="74.375%">R</tspan><tspan x="75.000%">u</tspan><tspan x="75.625%">n</tspan><tspan x="76.250%">n</tspan><tspan x="76.875%">i</tspan><tspan x="77.500%">n</tspan><tspan x="78.125%">g</tspan><tspan x="80.625%">0</tspan><tspan x="87.500%">1</tspan><tspan x="88.125%">9</tspan><tspan x="88.750%">m</tspan>
</tspan>
<tspan y="9.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="11.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="13.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="15.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="18.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="20.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="22.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="25.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="27.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="29.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="31.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="34.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="36.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="38.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="40.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="43.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="45.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="47.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%"></tspan><tspan x="50.625%"></tspan><tspan x="51.250%"></tspan><tspan x="51.875%"></tspan><tspan x="52.500%"></tspan><tspan x="53.125%"></tspan><tspan x="53.750%"></tspan><tspan x="54.375%"></tspan><tspan x="55.000%"></tspan><tspan x="55.625%"></tspan><tspan x="56.250%"></tspan><tspan x="56.875%"></tspan><tspan x="57.500%"></tspan><tspan x="58.125%"></tspan><tspan x="58.750%"></tspan><tspan x="59.375%"></tspan><tspan x="60.000%"></tspan><tspan x="60.625%"></tspan><tspan x="61.250%"></tspan><tspan x="61.875%"></tspan><tspan x="62.500%"></tspan><tspan x="63.125%"></tspan><tspan x="63.750%"></tspan><tspan x="64.375%"></tspan><tspan x="65.000%"></tspan><tspan x="65.625%"></tspan><tspan x="66.250%"></tspan><tspan x="66.875%"></tspan><tspan x="67.500%"></tspan><tspan x="68.125%"></tspan><tspan x="68.750%"></tspan><tspan x="69.375%"></tspan><tspan x="70.000%"></tspan><tspan x="70.625%"></tspan><tspan x="71.250%"></tspan><tspan x="71.875%"></tspan><tspan x="72.500%"></tspan><tspan x="73.125%"></tspan><tspan x="73.750%"></tspan><tspan x="74.375%"></tspan><tspan x="75.000%"></tspan><tspan x="75.625%"></tspan><tspan x="76.250%"></tspan><tspan x="76.875%"></tspan><tspan x="77.500%"></tspan><tspan x="78.125%"></tspan><tspan x="78.750%"></tspan><tspan x="79.375%"></tspan><tspan x="80.000%"></tspan><tspan x="80.625%"></tspan><tspan x="81.250%"></tspan><tspan x="81.875%"></tspan><tspan x="82.500%"></tspan><tspan x="83.125%"></tspan><tspan x="83.750%"></tspan><tspan x="84.375%"></tspan><tspan x="85.000%"></tspan><tspan x="85.625%"></tspan><tspan x="86.250%"></tspan><tspan x="86.875%"></tspan><tspan x="87.500%"></tspan><tspan x="88.125%"></tspan><tspan x="88.750%"></tspan><tspan x="89.375%"></tspan><tspan x="90.000%"></tspan><tspan x="90.625%"></tspan><tspan x="91.250%"></tspan><tspan x="91.875%"></tspan><tspan x="92.500%"></tspan><tspan x="93.125%"></tspan><tspan x="93.750%"></tspan><tspan x="94.375%"></tspan><tspan 
x="95.000%"></tspan><tspan x="95.625%"></tspan><tspan x="96.250%"></tspan><tspan x="96.875%"></tspan><tspan x="97.500%"></tspan><tspan x="98.125%"></tspan>
</tspan>
<tspan y="50.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="52.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="54.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="56.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="59.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="61.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="63.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="65.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="68.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="70.455%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="72.727%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="75.000%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="77.273%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="79.545%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="81.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="84.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="86.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="88.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="90.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="93.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="95.455%">
<tspan dy="1em" x="0.000%" class="c-0">[</tspan><tspan x="0.625%" class="c-0">d</tspan><tspan x="1.250%" class="c-0">e</tspan><tspan x="1.875%" class="c-0">m</tspan><tspan x="2.500%" class="c-0">o</tspan><tspan x="3.125%" class="c-0">]</tspan><tspan x="4.375%" class="c-0">0</tspan><tspan x="5.000%" class="c-0">:</tspan><tspan x="5.625%" class="c-0">s</tspan><tspan x="6.250%" class="c-0">s</tspan><tspan x="6.875%" class="c-0">h</tspan><tspan x="7.500%" class="c-0">*</tspan><tspan x="75.625%" class="c-0">&quot;</tspan><tspan x="76.250%" class="c-0">r</tspan><tspan x="76.875%" class="c-0">a</tspan><tspan x="77.500%" class="c-0">h</tspan><tspan x="78.125%" class="c-0">u</tspan><tspan x="78.750%" class="c-0">l</tspan><tspan x="79.375%" class="c-0">-</tspan><tspan x="80.000%" class="c-0">T</tspan><tspan x="80.625%" class="c-0">h</tspan><tspan x="81.250%" class="c-0">i</tspan><tspan x="81.875%" class="c-0">n</tspan><tspan x="82.500%" class="c-0">k</tspan><tspan x="83.125%" class="c-0">P</tspan><tspan x="83.750%" class="c-0">a</tspan><tspan x="84.375%" class="c-0">d</tspan><tspan x="85.000%" class="c-0">-</tspan><tspan x="85.625%" class="c-0">E</tspan><tspan x="86.250%" class="c-0">4</tspan><tspan x="86.875%" class="c-0">9</tspan><tspan x="87.500%" class="c-0">0</tspan><tspan x="88.125%" class="c-0">&quot;</tspan><tspan x="89.375%" class="c-0">0</tspan><tspan x="90.000%" class="c-0">1</tspan><tspan x="90.625%" class="c-0">:</tspan><tspan x="91.250%" class="c-0">0</tspan><tspan x="91.875%" class="c-0">2</tspan><tspan x="93.125%" class="c-0">0</tspan><tspan x="93.750%" class="c-0">5</tspan><tspan x="94.375%" class="c-0">-</tspan><tspan x="95.000%" class="c-0">O</tspan><tspan x="95.625%" class="c-0">c</tspan><tspan x="96.250%" class="c-0">t</tspan><tspan x="96.875%" class="c-0">-</tspan><tspan x="97.500%" class="c-0">1</tspan><tspan x="98.125%" class="c-0">9</tspan>
</tspan>
</text>
<g transform="translate(-50 -50)">
<svg x="50%" y="50%" width="100" height="100">
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 866.0254037844387 866.0254037844387">
<defs>
<mask id="small-triangle-mask">
<rect width="100%" height="100%" fill="white"/>
<polygon points="508.01270189221935 433.01270189221935, 208.0127018922194 259.8076211353316, 208.01270189221927 606.217782649107" fill="black"></polygon>
</mask>
</defs>
<polygon points="808.0127018922194 433.01270189221935, 58.01270189221947 -1.1368683772161603e-13, 58.01270189221913 866.0254037844386" mask="url(#small-triangle-mask)" fill="white"></polygon>
<polyline points="481.2177826491071 333.0127018922194, 134.80762113533166 533.0127018922194" stroke="white" stroke-width="90"></polyline>
</svg>
</svg>
</g>
</svg>
</svg>

After

Width:  |  Height:  |  Size: 41 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 354 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 105 KiB

View File

@ -0,0 +1,71 @@
---
id: chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
---
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new chaos charts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
The Litmus ChaosHub is a place where chaos engineering community members publish their chaos experiments. A set of related chaos experiments is bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories:
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. The following chaos experiments are supported under the Generic Chaos Chart:
| Experiment name | Description | User guide link |
| ---------------------- | ----------------------------------------------------- | --------------------------------------------------- |
| Container Kill | Kill one container in the application pod | [container-kill](container-kill.md) |
| Pod Delete | Fail the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Experiment to inject network latency to the POD | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Experiment to inject network loss to the POD | [pod-network-loss](pod-network-loss.md) |
| Node CPU Hog | Exhaust CPU resources on the Kubernetes Node | [node-cpu-hog](node-cpu-hog.md) |
| Disk Fill              | Fill up the ephemeral storage of a resource           | [disk-fill](disk-fill.md)                           |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md) |
| Node Drain | Drain the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consume CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Network Corruption | Inject Network Packet Corruption Into Application Pod | [pod-network-corruption](pod-network-corruption.md) |
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude whether the induced chaos found a weakness in a given application. The application-specific chaos experiments are built with some checks on _pre-conditions_ and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of their applications. These negative tests can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that users of the application can run them in staging/pre-production/production environments to check its resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (production). Hence, running the same chaos tests in the user's environment helps determine the weaknesses of the deployment, and fixing such weaknesses leads to increased resilience.
The following Application Chaos experiments are available on ChaosHub:
| Application | Description | Chaos Experiments |
| ----------- | ------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br/>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br/>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br/>[openebs-target-container-failure](openebs-target-container-failure.md)<br/>[openebs-target-network-delay](openebs-target-network-delay.md)<br/>[openebs-target-network-loss](openebs-target-network-loss.md) |
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br/>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br/> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md) |
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Since management of platform resources varies significantly across providers, Chaos Charts may be maintained separately for each platform (for example, AWS, GCP, Azure, etc.).
The following Platform Chaos experiments are available on ChaosHub:
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |

View File

@ -0,0 +1,24 @@
---
id: community
title: Join Litmus Community
sidebar_label: Community
original_id: community
---
---
The Litmus community is a subset of the larger Kubernetes community. Have a question? Want to stay in touch with the latest on Chaos Engineering on Kubernetes? Join the `#litmus` channel on Kubernetes Slack.
<br/><br/>
<a href="https://kubernetes.slack.com/messages/CNXNB0ZTN" target="_blank"><img src={require("./assets/join-community.png").default} width="400"/></a>
<br/>
<br/>
<hr/>
<br/>
<br/>

View File

@ -1,10 +1,11 @@
---
id: version-1.1.0-container-kill
id: container-kill
title: Container Kill Experiment Details
sidebar_label: Container Kill
original_id: container-kill
---
------
---
## Experiment Metadata
@ -38,20 +39,20 @@ original_id: container-kill
## Details
- Pumba chaoslib details
- Kills one container in the specified application pod by sending SIGKILL termination signal to its docker socket (hence docker runtime is required)
- Containers are killed using the `kill` command provided by [pumba](https://github.com/alexei-led/pumba)
- Pumba is run as a daemonset on all nodes in dry-run mode to begin with; the `kill` command is issued during experiment execution via `kubectl exec`
- Kills one container in the specified application pod by sending SIGKILL termination signal to its docker socket (hence docker runtime is required)
- Containers are killed using the `kill` command provided by [pumba](https://github.com/alexei-led/pumba)
- Pumba is run as a daemonset on all nodes in dry-run mode to begin with; the `kill` command is issued during experiment execution via `kubectl exec`
- Containerd chaoslib details
- Kills one container in the specified application pod by `crictl-chaos` Lib.
- Containers are killed using the `crictl stop` command.
- containerd-chaos is run as a daemonset on all nodes in dry-run mode to begin with; the `stop` command is issued during experiment execution via `kubectl exec`
- Kills one container in the specified application pod by `crictl-chaos` Lib.
- Containers are killed using the `crictl stop` command.
- containerd-chaos is run as a daemonset on all nodes in dry-run mode to begin with; the `stop` command is issued during experiment execution via `kubectl exec`
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application
- Good for testing recovery of pods having side-car containers
## Integrations
- Container kill is achieved using the `pumba` or `containerd_chaos` chaos library
- The desired pumba and containerd image can be configured in the env variable `LIB_IMAGE`.
- The desired pumba and containerd image can be configured in the env variable `LIB_IMAGE`.
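As a hedged sketch (the image reference below is a placeholder, not an official default), `LIB_IMAGE` is supplied like any other experiment env override in the ChaosEngine spec:

```yaml
# Illustrative only: overriding the chaos library image for container-kill.
# The image/tag value is a placeholder -- set it to the pumba or containerd
# image you actually want the experiment to use.
experiments:
  - name: container-kill
    spec:
      components:
        env:
          - name: LIB_IMAGE
            value: "gaiaadm/pumba:0.6.5" # placeholder image/tag
```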
## Steps to Execute the Chaos Experiment
@ -65,7 +66,8 @@ original_id: container-kill
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -84,9 +86,18 @@ metadata:
labels:
name: container-kill-sa
rules:
- apiGroups: ["","litmuschaos.io","batch","apps"]
resources: ["pods","jobs","daemonsets","pods/exec","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"daemonsets",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -100,10 +111,9 @@ roleRef:
kind: Role
name: container-kill-sa
subjects:
- kind: ServiceAccount
name: container-kill-sa
namespace: default
- kind: ServiceAccount
name: container-kill-sa
namespace: default
```
### Prepare ChaosEngine
@ -148,7 +158,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -157,19 +168,19 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: 'true'
annotationCheck: "true"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: container-kill-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: container-kill
spec:
@ -177,7 +188,7 @@ spec:
env:
# specify the name of the container to be killed
- name: TARGET_CONTAINER
value: 'nginx'
value: "nginx"
```
### Create the ChaosEngine Resource
@ -194,10 +205,10 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the container kill, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the container kill, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-container-kill -n <application-namespace>`
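The derived name is a plain string concatenation. A minimal shell sketch, using the engine and experiment names from the sample manifests above:

```shell
# ChaosResult name = <ChaosEngine-Name>-<ChaosExperiment-Name>
ENGINE_NAME="nginx-chaos"          # metadata.name of the ChaosEngine
EXPERIMENT_NAME="container-kill"   # experiment name under spec.experiments
CHAOSRESULT_NAME="${ENGINE_NAME}-${EXPERIMENT_NAME}"
echo "${CHAOSRESULT_NAME}"         # prints: nginx-chaos-container-kill
# Inspect the verdict once the experiment (job) completes:
# kubectl describe chaosresult "${CHAOSRESULT_NAME}" -n <application-namespace>
```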
## Application Container Kill Demo
## Application Container Kill Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/XKyMNdVsKMo).

View File

@ -1,10 +1,11 @@
---
id: version-1.1.0-coredns-pod-delete
id: coredns-pod-delete
title: CoreDNS Pod Delete Experiment Details
sidebar_label: CoreDNS Pod Delete
original_id: coredns-pod-delete
---
------
---
## Experiment Metadata
@ -22,6 +23,7 @@ original_id: coredns-pod-delete
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `coredns-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos?file=charts/coredns/coredns-pod-delete/experiment.yaml)
@ -52,11 +54,13 @@ original_id: coredns-pod-delete
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/coredns/coredns-pod-delete/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/coredns/coredns-pod-delete/rbac.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
@ -73,9 +77,17 @@ metadata:
labels:
name: coredns-pod-delete-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["services", "pods","jobs","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
[
"services",
"pods",
"jobs",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -88,14 +100,15 @@ roleRef:
kind: ClusterRole
name: coredns-pod-delete-sa
subjects:
- kind: ServiceAccount
name: coredns-pod-delete-sa
namespace: kube-system
- kind: ServiceAccount
name: coredns-pod-delete-sa
namespace: kube-system
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- It defaults to:
```
appinfo:
@ -143,7 +156,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/coredns/coredns-pod-delete/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/coredns/coredns-pod-delete/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -152,34 +166,34 @@ metadata:
namespace: kube-system
spec:
appinfo:
appns: 'kube-system'
applabel: 'k8s-app=kube-dns'
appkind: 'deployment'
appns: "kube-system"
applabel: "k8s-app=kube-dns"
appkind: "deployment"
# It can be true/false
annotationCheck: 'false'
annotationCheck: "false"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
chaosServiceAccount: coredns-pod-delete-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: coredns-pod-delete
spec:
components:
env:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
value: "30"
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '10'
value: "10"
- name: CHAOS_NAMESPACE
value: 'kube-system'
value: "kube-system"
```
### Create the ChaosEngine Resource
@ -192,10 +206,10 @@ spec:
- View coredns pod terminations & recovery by setting up a watch on the coredns pods in the application namespace
`watch kubectl get pods -n kube-system`
`watch kubectl get pods -n kube-system`
### Check Chaos Experiment Result
- Check whether the application is resilient to the coredns pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the coredns pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult engine-coredns-coredns-pod-delete -n <chaos-namespace>`

View File

@ -0,0 +1,179 @@
---
id: cpu-hog
title: CPU Hog Experiment Details
sidebar_label: CPU Hog
original_id: cpu-hog
---
---
## Experiment Metadata
| Type | Description | Tested K8s Platform |
| ------- | -------------------------------------------- | ------------------- |
| Generic | Exhaust CPU resources on the Kubernetes Node | GKE |
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/cpu-hog)
- There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project
## Entry Criteria
- Application pods are healthy on the respective Nodes before chaos injection
## Exit Criteria
- Application pods may or may not be healthy post chaos injection
## Details
- This experiment causes CPU resource exhaustion on the Kubernetes node. It aims to verify the resiliency of applications whose replicas may be evicted on account of nodes turning unschedulable (Not Ready) due to lack of CPU resources.
- The CPU chaos is injected using a daemonset running the Linux `stress` tool (a workload generator). The chaos is effected for a period equal to the `TOTAL_CHAOS_DURATION`.
- Here, "application" implies services; in other words, the experiment tests application resiliency upon replica evictions caused by lack of CPU resources.
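As a rough, self-contained illustration (this is not the litmus lib itself, which runs the `stress` tool on the node via a daemonset), the chaos window behaves like a CPU-bound loop bounded by `TOTAL_CHAOS_DURATION`:

```shell
# Crude single-core stand-in for the stress workload: busy-loop until
# TOTAL_CHAOS_DURATION seconds elapse (the experiment default is 60s;
# a short value is used here for illustration).
TOTAL_CHAOS_DURATION=2
end=$(( $(date +%s) + TOTAL_CHAOS_DURATION ))
iterations=0
while [ "$(date +%s)" -lt "$end" ]; do
  iterations=$((iterations + 1))   # busy work consumes CPU cycles
done
echo "chaos window elapsed after ${TOTAL_CHAOS_DURATION}s"
```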
## Integrations
- CPU Hog can be effected using the chaos library: `litmus`
- The desired chaos library can be selected by setting `litmus` as value for the env variable `LIB`
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample Rbac Manifest
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-sa
namespace: default
labels:
name: nginx-sa
---
# Source: openebs/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-sa
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"daemonsets",
"jobs",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-sa
labels:
name: nginx-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
#### Supported Experiment Tunables
| Variables | Description | Type | Notes |
| -------------------- | --------------------------------------------------- | --------- | ----------------------------------------- |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 60s |
| PLATFORM | The platform on which the chaos experiment will run | Mandatory | Defaults to GKE |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to `litmus`. Supported: `litmus` |
#### Sample ChaosEngine Manifest
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be app/infra
chaosType: "infra"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: nginx-sa
monitoring: false
components:
runner:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
# It can be delete/retain
jobCleanUpPolicy: delete
experiments:
- name: cpu-hog
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '60'
# set chaos platform as desired
- name: PLATFORM
value: 'GKE'
# chaos lib used to inject the chaos
- name: LIB
value: 'litmus'
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- Setting up a watch of the CPU consumed by nodes in the Kubernetes Cluster
`watch kubectl top nodes`
### Check Chaos Experiment Result
- Check whether the application is resilient to the CPU hog, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-cpu-hog -n <application-namespace>`
## Application Pod Failure Demo
- A sample recording of this experiment execution is provided here.

View File

@ -1,10 +1,11 @@
---
id: version-1.1.0-cStor-pool-chaos
id: cStor-pool-chaos
title: cStor Pool Chaos Experiment Details
sidebar_label: cStor Pool Chaos
original_id: cStor-pool-chaos
---
------
---
## Experiment Metadata
@ -73,12 +74,21 @@ metadata:
labels:
name: cStor-pool-chaos-sa
rules:
- apiGroups: ["","litmuschaos.io","batch","apps"]
resources: ["pods","deployments","jobs","configmaps","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs : ["get","list"]
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -92,9 +102,9 @@ roleRef:
kind: Role
name: cStor-pool-chaos-sa
subjects:
- kind: ServiceAccount
name: cStor-pool-chaos-sa
namespace: openebs
- kind: ServiceAccount
name: cStor-pool-chaos-sa
namespace: openebs
```
### Prepare ChaosEngine
@ -135,15 +145,15 @@ metadata:
namespace: openebs
spec:
appinfo:
appns: 'openebs'
applabel: 'app=cstor-pool'
appkind: 'deployment'
appns: "openebs"
applabel: "app=cstor-pool"
appkind: "deployment"
# It can be true/false
annotationCheck: 'false'
annotationCheck: "false"
chaosServiceAccount: cStor-pool-chaos-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-pool-pod-failure
spec:
@ -151,10 +161,10 @@ spec:
env:
# Namespace where openebs has been installed
- name: OPENEBS_NS
value: 'openebs'
value: "openebs"
# please leave it blank, for this experiment
- name: APP_PVC
value: ''
value: ""
```
### Create the ChaosEngine Resource
@ -171,13 +181,13 @@ spec:
### Check Chaos Experiment Result
- Check whether the cStor pool pod is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the cStor pool pod is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult cStor-pool-chaos-openebs-pool-pod-failure -n openebs`
## Recovery
## Recovery
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to healthy state post the chaos experiment, then please refer the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#ndm-related) for more info on how to recover the same.
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to healthy state post the chaos experiment, then please refer the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#ndm-related) for more info on how to recover the same.
## cStor Pool Pod Chaos Demo

View File

@ -0,0 +1,67 @@
---
id: devguide
title: Developer Guide for Chaos Charts
sidebar_label: Developer Guide
original_id: devguide
---
---
This page serves as a guide to developing either a new Chaos Chart or a new experiment within an existing Chaos Chart, both of which are published at <a href="https://hub.litmuschaos.io" target="_blank">ChaosHub</a>.
Below are some key points to keep in mind before writing a new chart or an experiment.
> Chaos Charts repository : https://github.com/litmuschaos/chaos-charts
>
> Litmusbooks repository : https://github.com/litmuschaos/litmus-ansible/tree/master/experiments
>
> Website rendering code repository: https://github.com/litmuschaos/charthub.litmuschaos.io
The experiments & chaos libraries are typically written in Ansible, though this is not mandatory. Ensure that
the experiments can be executed in a container & can read/update the litmuschaos custom resources. For example,
if you are writing an experiment in Go, use this [clientset](https://github.com/litmuschaos/chaos-operator/tree/master/pkg/client)
<hr/>
## Glossary
### Chaos Chart
A group of Chaos Experiments put together in a YAML file. Each group or chart has a metadata manifest called `ChartServiceVersion`
that holds data such as `ChartVersion`, `Contributors`, `Description`, `links` etc. This metadata is rendered on the ChartHub.
A chaos chart also consists of a `package` manifest that is an index of available experiments in the chart.
Here is an example of the [ChartServiceVersion](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.chartserviceversion.yaml) & [package](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.package.yaml) manifests of the generic chaos chart.
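As a rough illustration of the metadata described above, a `ChartServiceVersion` manifest might look like the following. The field names here are an assumption based on the description in this section; consult the linked generic chart example for the authoritative schema:

```yaml
# Hypothetical ChartServiceVersion sketch -- verify field names against
# the linked generic.chartserviceversion.yaml example.
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
  name: sample-chart # placeholder chart name
  version: 0.1.0
spec:
  displayName: Sample Chaos
  categoryDescription: A set of related chaos experiments bundled together.
  experiments:
    - pod-delete
  maintainers:
    - name: example-maintainer # placeholder contributor
      email: maintainer@example.com
  links:
    - name: Source Code
      url: https://github.com/litmuschaos/chaos-charts
```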
### Chaos Experiment
ChaosExperiment is a CRD that specifies the nature of a Chaos Experiment. The YAML file that constitutes a Chaos Experiment CR
is stored under a Chaos Chart of ChaosHub and typically consists of low-level chaos parameters specific to that experiment, set
to their default values.
Here is an example chaos experiment CR for a [pod-delete](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/pod-delete/experiment.yaml) experiment
### Litmus Book
Litmus book is an `ansible` playbook that encompasses the logic of pre-checks, chaos-injection, post-checks, and result-updates.
Typically, these are accompanied by a Kubernetes job that can execute the respective playbook.
Here is an example of the litmus book for the [pod-delete](https://github.com/litmuschaos/litmus-ansible/tree/master/experiments/generic/pod_delete) experiment.
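The structure described above (pre-checks, chaos injection, post-checks, result updates) can be sketched as a skeletal playbook. The task names and bodies below are illustrative placeholders, not the actual pod-delete book:

```yaml
# Skeletal litmus book -- illustrative structure only; real books replace
# the debug tasks with k8s health checks, chaos actions, and CR patches.
- hosts: localhost
  connection: local
  tasks:
    - name: "[Pre-check]: Verify that the application under test is healthy"
      debug:
        msg: "pre-chaos checks"

    - name: "[Inject]: Execute the chaos action"
      debug:
        msg: "chaos injection"

    - name: "[Post-check]: Verify that the application recovered"
      debug:
        msg: "post-chaos checks"

    - name: "[Result]: Patch the verdict into the ChaosResult CR"
      debug:
        msg: "result update"
```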
### Chaos functions
The `ansible` business logic inside Litmus books can make use of readily available chaos functions. The chaos functions are available as `task-files` which are wrapped in one of the chaos libraries. See [plugins](plugins.md) for more details.
<hr/>
## Developing a Chaos Experiment
A detailed how-to guide on developing chaos experiments is available [here](https://github.com/litmuschaos/litmus-ansible/tree/master/contribute/developer_guide)
<br/>
<hr/>
<br/>
<br/>


@ -1,10 +1,11 @@
---
id: disk-fill
title: Disk Fill Experiment Details
sidebar_label: Disk Fill
original_id: disk-fill
---
---
## Experiment Metadata
@ -27,9 +28,9 @@ original_id: disk-fill
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/disk-fill)
- Cluster must run docker container runtime
- Appropriate Ephemeral Storage Requests and Limits should be set for the application before running the experiment.
An example specification is shown below:
```yaml
apiVersion: v1
kind: Pod
@ -58,24 +59,24 @@ spec:
## Entry-Criteria
- Application pods are healthy before chaos injection.
## Exit-Criteria
- Application pods are healthy post chaos injection.
## Details
- Causes Disk Stress by filling up the ephemeral storage of the pod (in /var/lib/docker/containers/{{container_id}}) on any given node.
- Causes the application pod to get evicted if the capacity filled exceeds the pod's ephemeral storage limit.
- Tests the Ephemeral Storage Limits, to ensure those parameters are sufficient.
- Tests the application's resiliency to disk stress/replica evictions.
## Integrations
- Disk Fill can be effected using the chaos library: `litmus`, which makes use of `dd` to create a file of
  specified capacity on the node.
- The desired chaoslib can be selected by setting the above option as the value for the env variable `LIB`
## Steps to Execute the Chaos Experiment
@ -89,7 +90,8 @@ spec:
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/disk-fill/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -107,9 +109,18 @@ metadata:
labels:
name: disk-fill-sa
rules:
- apiGroups: ["", "apps", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"pods/exec",
"daemonsets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -122,9 +133,9 @@ roleRef:
kind: ClusterRole
name: disk-fill-sa
subjects:
- kind: ServiceAccount
name: disk-fill-sa
namespace: default
```
### Prepare ChaosEngine
@ -182,7 +193,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/disk-fill/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -191,19 +203,19 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: disk-fill-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: disk-fill
spec:
@ -211,9 +223,9 @@ spec:
env:
# specify the fill percentage according to the disk pressure required
- name: FILL_PERCENTAGE
value: "80"
- name: TARGET_CONTAINER
value: "nginx"
```
### Create the ChaosEngine Resource
@ -224,17 +236,17 @@ spec:
### Watch Chaos progress
- View the status of the pods as they are subjected to disk stress.
`watch -n 1 kubectl get pods -n <application-namespace>`
- Monitor the capacity filled up on the host filesystem
`watch -n 1 du -kh /var/lib/docker/containers/<container-id>`
### Check Chaos Experiment Result
- Check whether the application is resilient to the disk fill, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-disk-fill -n <application-namespace>`
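The naming convention above is a simple concatenation of the two CR names; a minimal sketch (the helper name is invented for illustration):

```python
def chaosresult_name(engine: str, experiment: str) -> str:
    """Derive the ChaosResult resource name from the ChaosEngine name
    and ChaosExperiment name, per the convention described above."""
    return f"{engine}-{experiment}"

# e.g. engine "nginx-chaos" + experiment "disk-fill"
print(chaosresult_name("nginx-chaos", "disk-fill"))  # prints nginx-chaos-disk-fill
```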


@ -1,10 +1,12 @@
---
id: disk-loss
title: Disk Loss Experiment Details
sidebar_label: Disk Loss
original_id: disk-loss
---
---
## Experiment Metadata
<table>
@ -21,10 +23,11 @@ original_id: disk-loss
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/charts/generic/experiments/disk-loss" target="_blank">here</a>
- Create a Kubernetes secret holding the gcloud/aws access configuration (key) in the namespace of `CHAOS_NAMESPACE`.
- There should be administrative access to the platform on which the cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the project
```yaml
apiVersion: v1
@ -39,30 +42,30 @@ stringData:
## Entry-Criteria
- The disk is healthy before chaos injection
## Exit-Criteria
- The disk is healthy post chaos injection
- If `APP_CHECK` is true, the application pod health is checked post chaos injection
## Details
- In this experiment, the external disk is detached from the node for a period equal to the `TOTAL_CHAOS_DURATION`.
- This chaos experiment is supported on GKE and AWS platforms.
- If the disk is created as part of dynamic persistent volume, it is expected to re-attach automatically. The experiment re-attaches the disk if it is not already attached.

<b>Note:</b> Especially with a mounted disk, the remount of the disk is a manual step that the user has to perform.
## Integrations
- Disk loss is effected using the litmus chaoslib that internally makes use of the aws/gcloud commands
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
@ -70,7 +73,8 @@ stringData:
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/disk-loss/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -88,9 +92,17 @@ metadata:
labels:
name: nginx-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"secrets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -103,16 +115,16 @@ roleRef:
kind: ClusterRole
name: nginx-sa
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
### Supported Experiment Tunables for application
@ -211,7 +223,8 @@ subjects:
## Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/disk-loss/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -220,74 +233,77 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: nginx-sa
monitoring: false
components:
runner:
image: "litmuschaos/chaos-executor:1.0.0"
type: "go"
# It can be retain/delete
jobCleanUpPolicy: "delete"
experiments:
- name: disk-loss
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "60"
# set cloud platform name
- name: CLOUD_PLATFORM
value: "GCP"
# set app_check to check application state
- name: APP_CHECK
value: "true"
# This is a chaos namespace into which all infra chaos resources are created
- name: CHAOS_NAMESPACE
value: "default"
# GCP project ID
- name: PROJECT_ID
value: "litmus-demo-123"
# Node name of the cluster
- name: NODE_NAME
value: "demo-node-123"
# Disk Name of the node, it must be an external disk.
- name: DISK_NAME
value: "demo-disk-123"
# Enter the device name which you wanted to mount only for AWS.
- name: DEVICE_NAME
value: "/dev/sdb"
# Name of Zone in which node is present (GCP)
# Use Region Name when running with AWS (ex: us-central1)
- name: ZONE_NAME
value: "us-central1-a"
# ChaosEngine CR name associated with the experiment instance
- name: CHAOSENGINE
value: ""
# Service account used by the litmus
- name: CHAOS_SERVICE_ACCOUNT
value: ""
```
## Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
## Watch Chaos progress
- Set up a watch on the app that is using the disk in the Kubernetes cluster
`watch -n 1 kubectl get pods`
## Check Chaos Experiment Result
- Check whether the application is resilient to the disk loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-disk-loss -n <CHAOS_NAMESPACE>`


@ -1,10 +1,11 @@
---
id: faq-general
title: The What, Why & How of Litmus
sidebar_label: General
original_id: faq-general
---
---
[Why should I use Litmus? What is its distinctive feature?](#why-should-i-use-litmus-what-is-its-distinctive-feature)
@ -28,48 +29,46 @@ original_id: faq-general
[How to get the chaos logs in Litmus?](#how-to-get-the-chaos-logs-in-litmus)
[Does Litmus support generation of events during chaos?](#does-litmus-support-generation-of-events-during-chaos)
[How to stop/abort a chaos experiment?](#how-to-stopabort-a-chaos-experiment)
[Can a chaos experiment be resumed once stopped/aborted?](#can-a-chaos-experiment-be-resumed-once-stoppedaborted)
[Does Litmus track any usage metrics on the test clusters?](#does-litmus-track-any-usage-metrics-on-the-test-clusters)
<hr/>
### Why should I use Litmus? What is its distinctive feature?

Litmus is a toolset to do cloud-native chaos engineering. Litmus provides tools to orchestrate chaos
on Kubernetes to help developers and SREs find weaknesses in their application deployments. Litmus can
be used to run chaos experiments initially in the staging environment and eventually in production to
find bugs and vulnerabilities. Fixing the weaknesses leads to increased resilience of the system.
Litmus adopts a “Kubernetes-native” approach to define chaos intent in a declarative manner via custom
resources.
### What type of chaos experiments are supported by Litmus?
Litmus broadly defines Kubernetes chaos experiments into two categories: application or pod-level chaos
experiments and platform or infra-level chaos experiments. The former includes pod-delete, container-kill,
pod-cpu-hog, pod-network-loss etc., while the latter includes node-drain, disk-loss, node-cpu-hog etc.
The infra chaos experiments typically have a higher blast radius and impact more than one application
deployed on the Kubernetes cluster. Litmus also categorizes experiments on the basis of the applications,
with the experiments consisting of app-specific health checks. For example, OpenEBS, Kafka, CoreDNS.
For a full list of supported chaos experiments, visit: https://hub.litmuschaos.io
### What are the prerequisites to get started with Litmus?
To get started with Litmus, the only prerequisite is a Kubernetes 1.11+ cluster. While most
pod/container level experiments are supported on any Kubernetes platform, some of the infrastructure chaos
experiments are supported on specific platforms. To find the list of supported platforms for an experiment,
view the "Platforms" section on the sidebar in the experiment page.
### How to Install Litmus on the Kubernetes Cluster?
You can install/deploy stable litmus using this command:
```console
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml
@ -77,121 +76,120 @@ kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
### What are the permissions required to run Litmus Chaos Experiments?
By default, the Litmus operator uses the “litmus” serviceaccount that is bound to a ClusterRole, in order
to watch for the ChaosEngine resource across namespaces. However, the experiments themselves are associated
with “chaosServiceAccounts” which are created by the developers with bare-minimum permissions necessary to
execute the experiment in question. Visit the [chaos-charts](https://github.com/litmuschaos/chaos-charts) repo
to view the experiment-specific rbac permissions. For example, here are the [permissions](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/container-kill/rbac.yaml) for container-kill chaos.
### What is the scope of a Litmus Chaos Experiment?
The chaos CRs (chaosexperiment, chaosengine, chaosresults) themselves are namespace scoped and are installed
in the same namespace as that of the target application. While most of the experiments can be executed with
service accounts mapped to namespaced roles, some infra chaos experiments typically perform health checks of
applications across namespaces & therefore need their serviceaccounts mapped to ClusterRoles.
### How to get started with running chaos experiments using Litmus?
Litmus has a low entry barrier and is easy to install/use. Typically, it involves installing the chaos-operator and
chaos experiment CRs from the [charthub](https://hub.litmuschaos.io), annotating an application for chaos, and creating
a chaosengine CR to map your application instance with a desired chaos experiment. Refer to
the [getting started](https://docs.litmuschaos.io/docs/getstarted/) documentation to learn more on how to run a
simple chaos experiment.
### How to view and interpret the results of a chaos experiment?
The results of a chaos experiment can be obtained from the verdict property of the chaosresult custom resource.
If the verdict is `Pass`, it means that the application under test is resilient to the chaos injected.
Alternatively, `Fail` reflects that the application is not resilient enough to the injected chaos, and indicates
the need for a relook into the deployment sanity or possible application bugs/issues.
```console
kubectl describe chaosresult <chaosengine-name>-<chaos-experiment> -n <namespace>
```
The status of the experiment can also be gauged by the “status” property of the ChaosEngine.
```console
kubectl describe chaosengine <chaosengine-name> -n <namespace>
```
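As an illustrative sketch, the verdict can also be read programmatically from a ChaosResult fetched as JSON (e.g. via `kubectl get chaosresult <name> -o json`). The `.status.experimentstatus.verdict` field path used here is an assumption; confirm it against the ChaosResult schema of your Litmus release:

```python
import json

def extract_verdict(chaosresult_json: str) -> str:
    """Pull the verdict out of a ChaosResult object fetched as JSON.
    NOTE: the field path below is an assumption; verify it against the
    ChaosResult schema of your Litmus version."""
    obj = json.loads(chaosresult_json)
    return obj.get("status", {}).get("experimentstatus", {}).get("verdict", "Unknown")

# Minimal sample payload mimicking a ChaosResult:
sample = '{"status": {"experimentstatus": {"verdict": "Pass"}}}'
print(extract_verdict(sample))  # prints Pass
```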
### Do chaos experiments run as a standard set of pods?
The chaos experiment (triggered after creation of the chaosEngine resource) workflow consists of launching the “chaos-runner”
pod, which is an umbrella executor of different chaos experiments listed in the engine. The chaos-runner creates one pod (job)
per experiment to run the actual experiment business logic, and also manages the lifecycle of these experiment pods
(performing functions such as experiment dependencies validation, job cleanup, patching of status back into chaosEngine etc.).
Optionally, a monitor pod is created to export the chaos metrics. Together, these 3 pods are a standard set created upon execution
of the experiment. The experiment job, in turn, may spawn dependent (helper) resources if
necessary to run the experiments, but this depends on the experiment selected, chaos libraries chosen etc.
### Is it mandatory to annotate application deployments for chaos?
Typically applications are expected to be annotated with `litmuschaos.io/chaos="true"` to lend themselves to chaos.
This is in order to support selection of the right applications with similar labels in a namespaces, thereby isolating
the application under test (AUT) & reduce the blast radius. It is also helpful for supporting automated execution
(say, via cron) as a background service. However, in cases where the app deployment specifications are sacrosanct and
not expected to be modified, or in cases where annotating a single application for chaos when the experiment itself is
known to have a higher blast radius doesn't make sense (ex: infra chaos), the annotation check can be disabled via the
chaosEngine tunable `annotationCheck` (`.spec.annotationCheck: false`).
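For instance, opting a Deployment in for chaos selection looks like the following sketch (the deployment name, labels, and image are placeholders):

```yaml
# Placeholder Deployment carrying the chaos annotation described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx # placeholder name
  annotations:
    litmuschaos.io/chaos: "true" # opts this app in for chaos selection
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19 # placeholder image
```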
### Is it mandatory for the chaosengine and chaos experiment resources to exist in the same namespace?
Yes. As of today, the chaos resources are expected to co-exist in the same namespace, which, typically is also the
application's (AUT) namespace.
### How to get the chaos logs in Litmus?
The chaos logs can be viewed in the following manner.
To view the successful launch/removal of chaos resources upon engine creation, for identification of
application under test (AUT) etc., view the chaos-operator logs:
```console
kubectl logs -f <chaos-operator-(hash)-(hash)> -n litmus
```
To view lifecycle management logs of a given (or set of) chaos experiments, view the chaos-runner logs:
```console
kubectl logs -f <chaosengine_name>-runner -n <application_namespace>
```
To view the chaos logs themselves (details of experiment chaos injection, application health checks et al),
view the experiment pod logs:
```console
kubectl logs -f <experiment_name_(hash)_(hash)> -n <application_namespace>
```
### Does Litmus support generation of events during chaos?
The chaos-operator generates Kubernetes events to signify the creation or removal of chaos resources over the
course of a chaos experiment, which can be obtained by running the following command:
```console
kubectl describe chaosengine <chaosengine-name> -n <namespace>
```
Note: Efforts are underway to add more events around chaos injection in subsequent releases.
### How to stop/abort a chaos experiment?
A chaos experiment can be stopped/aborted in flight by patching the `.spec.engineState` property of the chaosengine
to `stop`. This will delete all the chaos resources associated with the engine/experiment at once.
```console
kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'
```
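The patch payload above is plain merge-patch JSON, so it can be sanity-checked locally before being handed to `kubectl`. The snippet below is a minimal sketch, assuming `python3` is available on the workstation; the `kubectl` invocation is shown only as a comment for context:

```shell
# Build the engineState merge patch and verify it parses as valid JSON.
patch='{"spec":{"engineState":"stop"}}'
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch OK"

# Against a live cluster, the validated payload would then be applied as:
#   kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch "$patch"
```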
The same effect will be caused by deleting the respective chaosengine resource.
### Can a chaos experiment be resumed once stopped/aborted?
Once stopped/aborted, patching the chaosengine `.spec.engineState` with `active` causes the experiment to be
re-executed. However, support is yet to be added for saving state and resuming an in-flight experiment (i.e., executing
pending iterations, etc.).
```console
kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'
```
### Does Litmus support any chaos metrics for experiments?
Litmus provides a basic set of Prometheus metrics indicating the total count of chaos experiments, passed/failed
experiments, and the individual status of experiments specified in the ChaosEngine, which can be queried against the monitor
pod. Work to enhance and improve this is underway. The default mode is to run experiments with `monitoring: false`.
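As an illustration only, a scrape of such counters might look like the sample below; the metric names here are placeholders, not the exact names exported by a given Litmus release:

```shell
# Placeholder scrape output; actual Litmus metric names may differ by release.
sample='chaos_total_experiments 4
chaos_passed_experiments 3
chaos_failed_experiments 1'

# On a live cluster one would port-forward the monitor pod and curl its
# metrics endpoint; here we simply filter the chaos counters from the sample.
counters="$(printf '%s\n' "$sample" | grep '_experiments')"
printf '%s\n' "$counters"
```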
### Does Litmus track any usage metrics on the test clusters?
By default, the installation count of chaos-operator & run count of a given chaos experiment is collected as part
of general analytics to gauge user adoption & chaos trends. However, if you wish to inhibit this, please use the following
ENV setting on the chaos-operator deployment:
```console
env:
  - name: ANALYTICS
    value: "FALSE"
```
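For context, a sketch of where this variable might sit in the chaos-operator Deployment spec; the container name below is an assumption for illustration, not necessarily the name used by your install manifest:

```yaml
# Illustrative placement only; the container name is an assumption.
spec:
  template:
    spec:
      containers:
        - name: chaos-operator
          env:
            - name: ANALYTICS
              value: "FALSE"
```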
---
id: faq-troubleshooting
title: "Troubleshooting Litmus"
sidebar_label: Troubleshooting
original_id: faq-troubleshooting
---
[The Litmus chaos operator is seen to be in CrashLoopBackOff state immediately after installation?](#the-litmus-chaos-operator-is-seen-to-be-in-crashloopbackoff-state-immediately-after-installation)
[The chaos-runner pod enters completed state seconds after getting created. No experiment jobs are created?](#the-chaos-runner-pod-enters-completed-state-seconds-after-getting-created-no-experiment-jobs-are-created)
[The experiment pod enters completed state w/o the desired chaos being injected?](#the-experiment-pod-enters-completed-state-wo-the-desired-chaos-being-injected)
<hr/>
### The Litmus chaos operator is seen to be in CrashLoopBackOff state immediately after installation?
Verify if the ChaosEngine custom resource definition (CRD) has been installed in the cluster. This can be
verified with the following commands:
```console
kubectl get crds | grep chaos
```
```console
kubectl api-resources | grep chaos
```
If not created, install it from [here](https://github.com/litmuschaos/chaos-operator)
### Nothing happens (no pods created) when the chaosengine resource is created?
If the ChaosEngine creation results in no action at all, check the logs of the chaos-operator pod using
the following command to get more details (on failed creation of chaos resources). The below example uses the litmus namespace,
which is the default mode of installation. Please provide the namespace into which the operator has been deployed:
```console
kubectl logs -f <chaos-operator-(hash)-(hash)> -n litmus
```
Some of the possible reasons include:
- The annotationCheck is set to `true` in the ChaosEngine spec, but the application deployment (AUT) has not
been annotated for chaos. If so, please add it using the following command:
```console
kubectl annotate <deploy-type>/<application_name> litmuschaos.io/chaos="true"
```
- The annotationCheck is set to `true` in the ChaosEngine spec and there are multiple chaos candidates that
share the same label (as provided in the `.spec.appinfo` of the ChaosEngine) and are also annotated for chaos.
If so, please provide a unique label for the AUT, or remove annotations on other applications with the same label.
Litmus, by default, doesn't allow selection of multiple applications. If this is a requirement, set the
annotationCheck to `false`.
```console
kubectl annotate <deploy-type>/<application_name> litmuschaos.io/chaos-
```
- The ChaosEngine has the `.spec.engineState` set to `stop`, which causes the operator to refrain from creating chaos
resources. While it is an unlikely scenario, it is possible to reuse a previously modified ChaosEngine manifest.
- Verify if the service account used by the Litmus chaos operator has enough permissions to launch pods/services
(this is available by default if the manifests suggested by the docs have been used).
### The chaos-runner pod enters completed state seconds after getting created. No experiment jobs are created?
If the chaos-runner enters completed state immediately post creation, i.e., the creation of experiment resources is
unsuccessful, check the chaos-runner pod logs:
```console
kubectl logs -f <chaosengine_name>-runner -n <application_namespace>
```
Some of the possible reasons may include:
- The ChaosExperiment CR for the experiment (name) specified in the ChaosEngine .spec.experiments list is not installed.
If so, please install the desired experiment from the [chaoshub](https://hub.litmuschaos.io)
- The dependent resources for the ChaosExperiment, such as configmap & secret volumes (as specified in the ChaosExperiment CR
or the ChaosEngine CR), may not be present in the cluster (or in the desired namespace). The runner pod doesn't proceed
with creation of experiment resources if the dependencies are unavailable.
- The chaosServiceAccount specified in the ChaosEngine CR doesn't have sufficient permissions to create the experiment
resources (For existing experiments, appropriate rbac manifests are already provided in chaos-charts/docs).
### The experiment pod enters completed state w/o the desired chaos being injected?
If the experiment pod enters completed state immediately (or in a few seconds) after creation w/o injecting the desired chaos,
check the logs of the chaos-experiment pod:
```console
kubectl logs -f <experiment_name_(hash)_(hash)> -n <application_namespace>
```
Some of the possible reasons may include:
- The ChaosExperiment CR or the ChaosEngine CR doesn't include mandatory ENVs (or consists of incorrect values/info)
needed by the experiment. Note that each experiment (see docs) specifies a mandatory set of ENVs along with some
optional ones, which are necessary for successful execution of the experiment.
- The chaosServiceAccount specified in the ChaosEngine CR doesn't have sufficient permissions to create the experiment
helper-resources (i.e., some experiments in turn create other K8s resources like jobs/daemonsets/deployments, etc.;
For existing experiments, appropriate rbac manifests are already provided in chaos-charts/docs).
---
id: getstarted
title: Getting Started with Litmus
sidebar_label: Introduction
original_id: getstarted
---
## Pre-requisites
Running chaos on your application involves the following steps:
[Observe chaos results](#observe-chaos-results)
<hr/>
### Install Litmus
```
kubectl apply -f https://litmuschaos.github.io/pages/litmus-operator-v1.1.0.yaml
```

The above command installs all the CRDs, required service account configuration, and the chaos-operator.
**Verify your installation**
- Verify if the chaos operator is running
```
kubectl get pods -n litmus
```
Expected output:

> chaos-operator-ce-554d6c8f9f-slc8k 1/1 Running 0 6m41s
- Verify if chaos CRDs are installed
```
kubectl get crds | grep chaos
```
Expected output:
> chaosengines.litmuschaos.io 2019-10-02T08:45:25Z
>
> chaosexperiments.litmuschaos.io 2019-10-02T08:45:26Z
>
> chaosresults.litmuschaos.io 2019-10-02T08:45:26Z
- Verify if the chaos api resources are successfully created in the desired (application) namespace.
_Note_: Sometimes, it can take a few seconds for the resources to be available post the CRD installation
```
kubectl api-resources | grep chaos
```
Expected output:
> chaosengines litmuschaos.io true ChaosEngine
>
> chaosexperiments litmuschaos.io true ChaosExperiment
>
> chaosresults litmuschaos.io true ChaosResult
**NOTE**:
- In this guide, we shall describe the steps to inject container-kill chaos on an nginx application already deployed in the
nginx namespace. It is a mandatory requirement to ensure that the chaos custom resources (chaosexperiment and chaosengine)
and the experiment specific serviceaccount are created in the same namespace (typically, the same as the namespace of the
application under test (AUT), in this case nginx). This is done to ensure that the developers/users of the experiment isolate
the chaos to their respective work-namespaces in shared environments.
- In all subsequent steps, please follow these instructions by replacing the nginx namespace and labels with that of your
application.
### Install Chaos Experiments
Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as Kubernetes CRs.
The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>.
The generic chaos experiments such as `pod-delete`, `container-kill`, `pod-network-latency` are available under the Generic Chaos Chart.
This is the first chart you are recommended to install.
```
kubectl apply -f https://hub.litmuschaos.io/api/chaos?file=charts/generic/experiments.yaml -n nginx
```

```
kubectl get chaosexperiments -n nginx
```
### Setup Service Account
A service account should be created to allow chaosengine to run experiments in your application namespace. Copy the following
into a `rbac.yaml` manifest and run `kubectl apply -f rbac.yaml` to create one such account on the nginx namespace. This serviceaccount
has just enough permissions needed to run the container-kill chaos experiment.
**NOTE**:
- For rbac samples corresponding to other experiments such as, say, pod-delete, please refer to the respective experiment folder in
the [chaos-charts](https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-delete) repository.
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/rbac_nginx_getstarted.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: container-kill-sa
  namespace: nginx
  labels:
    name: container-kill-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: container-kill-sa
  namespace: nginx
  labels:
    name: container-kill-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"daemonsets",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: container-kill-sa
  namespace: nginx
  labels:
    name: container-kill-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: container-kill-sa
subjects:
- kind: ServiceAccount
name: container-kill-sa
namespace: nginx
```
### Annotate your application
Your application has to be annotated with `litmuschaos.io/chaos="true"`. As a security measure, and also as a means
to reduce blast radius, the chaos operator checks for this annotation before invoking chaos experiment(s) on the application.
Replace `nginx` with the name of your deployment.
<div class="danger">
of other types, please use the appropriate resource/resource-name convention (sa
</div>

```console
kubectl annotate deploy/nginx litmuschaos.io/chaos="true" -n nginx
```
### Prepare ChaosEngine
ChaosEngine connects the application instance to a Chaos Experiment. Copy the following YAML snippet into a file called
`chaosengine.yaml` and update the values of `applabel`, `appns`, `appkind` and `experiments` as per your choice.
Change the `chaosServiceAccount` to the name of the service account created in the previous steps.
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/engine_nginx_getstarted.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: nginx
spec:
  annotationCheck: "true"
  engineState: "active"
  appinfo:
    appns: "nginx"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: container-kill-sa
  # use retain to keep the job for debug
  jobCleanUpPolicy: "delete"
  experiments:
    - name: container-kill
      spec:
        components:
          env:
            # specify the name of the container to be killed
            - name: TARGET_CONTAINER
              value: "nginx"
```
### Override Default Chaos Experiments Variables
From LitmusChaos v1.1.0, the default environment variable values in chaosexperiments can be overridden by specifying
them in the chaosengine under `experiments.<experiment_name>.spec.components.env` with the desired value. In the
example below, the TARGET_CONTAINER is being set to a desired value based on the application instance.
```console
...
experiments:
  - name: container-kill
    spec:
      components:
        env:
          - name: TARGET_CONTAINER
            value: nginx
```
### Run Chaos
```console
kubectl apply -f chaosengine.yaml
```
### Observe Chaos results
Describe the ChaosResult CR to know the status of each experiment. The `spec.verdict` is set to `Awaited` when the experiment is in progress, eventually changing to either `Pass` or `Fail`.
<strong>NOTE:</strong> ChaosResult CR name will be `<chaos-engine-name>-<chaos-experiment-name>`
```console
kubectl describe chaosresult nginx-chaos-container-kill -n nginx
```
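Since the ChaosResult name is simply the engine name joined to the experiment name with a hyphen, it can be derived for any pair; a small shell sketch:

```shell
# Derive the ChaosResult name from the engine and experiment names.
engine_name="nginx-chaos"
experiment_name="container-kill"
chaosresult_name="${engine_name}-${experiment_name}"
echo "$chaosresult_name"   # nginx-chaos-container-kill
```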
---
id: kafka-broker-disk-failure
title: Kafka Broker Disk Failure Experiment Details
sidebar_label: Broker Disk Failure
original_id: kafka-broker-disk-failure
---

## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that Kafka & Zookeeper are deployed as Statefulsets
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be
used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable
- In case of Confluent, specified by the `--name` flag
- In case of Kudo, specified by the `--instance` flag
Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-disk-failure)
- Create a secret with the gcloud serviceaccount key (placed in a file `cloud_config.yml`) named `kafka-broker-disk-failure` in the namespace where the experiment CRs are created. This is necessary to perform the disk-detach steps from the litmus experiment container.
`kubectl create secret generic kafka-broker-disk-failure --from-file=cloud_config.yml -n <kafka-namespace>`
## Entry Criteria
## Integrations
- Currently, the disk detach is supported only on GKE using LitmusLib, which internally uses the gcloud tools.
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster.
To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kafka/kafka-broker-disk-failure/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka-broker-disk-failure-sa
  namespace: default
  labels:
    name: kafka-broker-disk-failure-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-broker-disk-failure-sa
  labels:
    name: kafka-broker-disk-failure-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"pods/exec",
"statefulsets",
"secrets",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-broker-disk-failure-sa
  labels:
    name: kafka-broker-disk-failure-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-broker-disk-failure-sa
subjects:
- kind: ServiceAccount
name: kafka-broker-disk-failure-sa
namespace: default
```
### Prepare ChaosEngine
<table>
  <tr>
    <td> KAFKA_LIVENESS_IMAGE </td>
    <td> Image used for liveness message stream </td>
    <td> Optional </td>
    <td> Image as `<registry_url>/<repository>/<image>:<tag>` </td>
  </tr>
  <tr>
    <td> KAFKA_REPLICATION_FACTOR </td>
  </tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kafka/kafka-broker-disk-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kafka-chaos
  namespace: default
spec:
  # It can be true/false
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  appinfo:
    appns: "default"
    applabel: "app=cp-kafka"
    appkind: "statefulset"
  chaosServiceAccount: kafka-broker-disk-failure-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: kafka-broker-disk-failure
      spec:
        components:
          env:
            # choose based on available kafka broker replicas
            - name: KAFKA_REPLICATION_FACTOR
              value: "3"
            # get via 'kubectl get pods --show-labels -n <kafka-namespace>'
            - name: KAFKA_LABEL
              value: "app=cp-kafka"
            - name: KAFKA_NAMESPACE
              value: "default"
            # get via 'kubectl get svc -n <kafka-namespace>'
            - name: KAFKA_SERVICE
              value: "kafka-cp-kafka-headless"
            # get via 'kubectl get svc -n <kafka-namespace>'
            - name: KAFKA_PORT
              value: "9092"
            # in milliseconds
            - name: KAFKA_CONSUMER_TIMEOUT
              value: "70000"
            # ensure to set the instance name if using KUDO operator
            - name: KAFKA_INSTANCE_NAME
              value: ""
            - name: ZOOKEEPER_NAMESPACE
              value: "default"
            # get via 'kubectl get pods --show-labels -n <zk-namespace>'
            - name: ZOOKEEPER_LABEL
              value: "app=cp-zookeeper"
            # get via 'kubectl get svc -n <zk-namespace>'
            - name: ZOOKEEPER_SERVICE
              value: "kafka-cp-zookeeper-headless"
            # get via 'kubectl get svc -n <zk-namespace>'
            - name: ZOOKEEPER_PORT
              value: "2181"
            # get from google cloud console or 'gcloud projects list'
            - name: PROJECT_ID
              value: "argon-tractor-237811"
            # attached to (in use by) node where 'kafka-0' is scheduled
            - name: DISK_NAME
              value: "disk-1"
            - name: ZONE_NAME
              value: "us-central1-a"
            # Uses 'disk-1' attached to the node on which it is scheduled
            - name: KAFKA_BROKER
              value: "kafka-0"
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: "60"
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View broker pod termination upon disk loss by setting up a watch on the pods in the Kafka namespace
`watch -n 1 kubectl get pods -n <kafka-namespace>`
### Check Chaos Experiment Result
- Check whether the kafka deployment is resilient to the broker disk failure, once the experiment (job) is completed.
The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult kafka-chaos-kafka-broker-disk-failure -n <kafka-namespace>`
### Kafka Broker Recovery Post Experiment Execution
- The experiment re-attaches the detached disk to the same node as part of recovery steps. However, if the disk is not provisioned
as a Persistent Volume & instead provides the backing store to a PV carved out of it, the brokers may continue to stay in `CrashLoopBackOff`
state (example: as hostPath directory for a Kubernetes Local PV)
- The complete recovery steps involve:
- Remounting the disk into the desired mount point
  - Deleting the affected broker pod to force reschedule
## Kafka Broker Disk Failure Demo
- TODO: A sample recording of this experiment execution is provided here.
---
id: kafka-broker-pod-failure
title: Kafka Broker Pod Failure Experiment Details
sidebar_label: Broker Pod Failure
original_id: kafka-broker-pod-failure
---
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `kafka-broker-pod-failure` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-pod-failure)
- Ensure that Kafka & Zookeeper are deployed as Statefulsets
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be
used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be
used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable
- In case of Confluent, specified by the `--name` flag
- In case of Kudo, specified by the `--instance` flag
Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-pod-failure)
Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/kafka/experiments/kafka-broker-pod-failure)
## Entry Criteria
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster.
To understand the values to provide in a ChaosEngine specification, refer to [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kafka/kafka-broker-pod-failure/rbac.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
name: kafka-broker-pod-failure-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"pods/exec",
"statefulsets",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
kind: ClusterRole
name: kafka-broker-pod-failure-sa
subjects:
- kind: ServiceAccount
name: kafka-broker-pod-failure-sa
namespace: default
```
### Prepare ChaosEngine
<td> KAFKA_LIVENESS_IMAGE </td>
<td> Image used for liveness message stream </td>
<td> Optional </td>
<td> Image as `{"<registry_url>/<repository>/<image>:<tag>"}` </td>
</tr>
<tr>
<td> KAFKA_REPLICATION_FACTOR </td>
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kafka/kafka-broker-pod-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "true"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=cp-kafka"
appkind: "statefulset"
chaosServiceAccount: kafka-broker-pod-failure-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: kafka-broker-pod-failure
spec:
components:
env:
# choose based on available kafka broker replicas
- name: KAFKA_REPLICATION_FACTOR
value: "3"
# get via 'kubectl get pods --show-labels -n <kafka-namespace>'
- name: KAFKA_LABEL
value: "app=cp-kafka"
- name: KAFKA_NAMESPACE
value: "default"
# get via 'kubectl get svc -n <kafka-namespace>'
- name: KAFKA_SERVICE
value: "kafka-cp-kafka-headless"
# get via 'kubectl get svc -n <kafka-namespace>'
- name: KAFKA_PORT
value: "9092"
# in milliseconds
- name: KAFKA_CONSUMER_TIMEOUT
value: "70000"
# ensure to set the instance name if using KUDO operator
- name: KAFKA_INSTANCE_NAME
value: ""
- name: ZOOKEEPER_NAMESPACE
value: "default"
# get via 'kubectl get pods --show-labels -n <zk-namespace>'
- name: ZOOKEEPER_LABEL
value: "app=cp-zookeeper"
# get via 'kubectl get svc -n <zk-namespace>'
- name: ZOOKEEPER_SERVICE
value: "kafka-cp-zookeeper-headless"
# get via 'kubectl get svc -n <zk-namespace>'
- name: ZOOKEEPER_PORT
value: "2181"
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "60"
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: "20"
# pod failures without '--force' & default terminationGracePeriodSeconds
- name: FORCE
value: "false"
```
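As a rough sanity check of the tunables above: the experiment deletes a broker roughly every `CHAOS_INTERVAL` seconds over `TOTAL_CHAOS_DURATION` seconds, so the sample values yield about three kill-and-recover cycles. This is a back-of-the-envelope estimate, not the runner's exact pacing logic:

```shell
# Approximate number of broker-kill cycles for the sample tunables above.
TOTAL_CHAOS_DURATION=60
CHAOS_INTERVAL=20
echo $(( TOTAL_CHAOS_DURATION / CHAOS_INTERVAL ))
```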
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View pod terminations & recovery by setting up a watch on the pods in the Kafka namespace
`watch -n 1 kubectl get pods -n <kafka-namespace>`
### Check Chaos Experiment Result
- Check whether the kafka deployment is resilient to the broker pod failure, once the experiment (job) is completed.
The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult kafka-chaos-kafka-broker-pod-failure -n <kafka-namespace>`
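The resource name in the command above is just the engine name from the sample manifest joined to the experiment name, which can be verified with simple string assembly:

```shell
# ChaosResult name = <ChaosEngine-Name>-<ChaosExperiment-Name>
engine="kafka-chaos"
experiment="kafka-broker-pod-failure"
echo "${engine}-${experiment}"
```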
## Kafka Broker Pod Failure Demo
- TODO: A sample recording of this experiment execution is provided here.
---
---
id: node-cpu-hog
title: Node CPU Hog Experiment Details
sidebar_label: Node CPU Hog
original_id: node-cpu-hog
---
---
## Experiment Metadata
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `node-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/node-cpu-hog)
- There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project
## Entry Criteria
- Application pods are healthy on the respective Nodes before chaos injection
- This experiment causes CPU resource exhaustion on the Kubernetes node. The experiment aims to verify the resiliency of applications whose replicas may be evicted on account of nodes turning unschedulable (Not Ready) due to lack of CPU resources.
- The CPU chaos is injected using a daemonset running the linux stress tool (a workload generator). The chaos is effected for a period equalling the TOTAL_CHAOS_DURATION
- Application implies services. Can be reframed as: tests application resiliency upon replica evictions caused due to lack of CPU resources
## Integrations
- CPU Hog can be effected using the chaos library: `litmus`
- The desired chaos library can be selected by setting `litmus` as value for the env variable `LIB`
## Steps to Execute the Chaos Experiment
#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-cpu-hog/rbac.yaml yaml"
```yaml
---
apiVersion: v1
metadata:
labels:
name: node-cpu-hog-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"daemonsets",
"jobs",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
kind: ClusterRole
name: node-cpu-hog-sa
subjects:
- kind: ServiceAccount
name: node-cpu-hog-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
#### Supported Experiment Tunables
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-cpu-hog/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: node-cpu-hog-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: node-cpu-hog
spec:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "60"
# set chaos platform as desired
- name: PLATFORM
value: "GKE"
# chaos lib used to inject the chaos
- name: LIB
value: "litmus"
```
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the CPU hog, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-node-cpu-hog -n <application-namespace>`
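In the `describe` output, the field that decides resilience is the verdict. A minimal sketch of pulling it out, using a hypothetical fragment of the output (the real resource carries more fields):

```shell
# Hypothetical excerpt of `kubectl describe chaosresult` output; grep surfaces
# the Verdict line (Pass/Fail).
sample='Status:
  Experimentstatus:
    Phase:    Completed
    Verdict:  Pass'
echo "$sample" | grep -i 'verdict'
```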
---
id: node-drain
title: Node Drain Experiment Details
sidebar_label: Node Drain
original_id: node-drain
---
---
## Experiment Metadata
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://raw.githubusercontent.com/litmuschaos/pages/master/docs/litmus-operator-latest.yaml)
- Ensure that the `node-drain` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/node-drain)
- Ensure that the node specified in the experiment ENV variable `APP_NODE` (the node which will be drained) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
- Get node names against the applications pods: `kubectl get pods -o wide`
- Cordon the node `kubectl cordon <nodename>`
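The pod-to-node mapping step can be scripted. The captured output below is a hypothetical stand-in for a live `kubectl get pods -o wide` response; awk prints the NODE column (the 7th), skipping the header row:

```shell
# Map pods to nodes before cordoning; the sample output is illustrative.
sample='NAME      READY   STATUS    RESTARTS   AGE   IP          NODE
nginx-1   1/1     Running   0          2d    10.4.0.12   node-1'
echo "$sample" | awk 'NR > 1 { print $7 }'
```

Each printed node name is then a candidate for `kubectl cordon`.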
## Entry Criteria
#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-drain/rbac.yaml yaml"
```yaml
---
apiVersion: v1
metadata:
labels:
name: node-drain-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "extensions"]
resources:
[
"pods",
"jobs",
"chaosengines",
"daemonsets",
"pods/eviction",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
kind: ClusterRole
name: node-drain-sa
subjects:
- kind: ServiceAccount
name: node-drain-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
#### Supported Experiment Tunables
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-drain/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: node-drain-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: node-drain
spec:
env:
# set node name
- name: APP_NODE
value: "node-1"
```
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the node drain, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-node-drain -n <application-namespace>`
## Node Drain Experiment Demo [TODO]
- A sample recording of this experiment execution is provided here.
---
id: openebs-control-plane-chaos
title: OpenEBS Control Plane Chaos Experiment Details
sidebar_label: Control Plane Chaos
original_id: openebs-control-plane-chaos
---
---
## Experiment Metadata
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (openebs) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-control-plane-chaos/rbac.yaml"
```yaml
---
apiVersion: v1
metadata:
labels:
name: control-plane-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
roleRef:
kind: Role
name: control-plane-sa
subjects:
- kind: ServiceAccount
name: control-plane-sa
namespace: openebs
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-control-plane-chaos/engine.yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
namespace: openebs
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
appinfo:
appns: "openebs"
applabel: "name=maya-apiserver"
appkind: "deployment"
chaosServiceAccount: control-plane-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-control-plane-chaos
spec:
components:
env:
- name: OPENEBS_NAMESPACE
value: "openebs"
## Period to wait before injection of chaos
- name: RAMP_TIME
value: "10"
- name: FORCE
value: ""
- name: LIB
value: ""
```
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the OpenEBS control plane is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource naming convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult control-plane-chaos-openebs-control-plane-chaos -n openebs`
## Recovery
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to a healthy state after the chaos experiment, please refer to the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#installation) for more information on how to recover.
---
id: openebs-pool-container-failure
title: OpenEBS Pool Container Failure Experiment Details
sidebar_label: Pool Container Failure
original_id: openebs-pool-container-failure
---
---
## Experiment Metadata
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-pool-container-failure)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-pool-container-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-pool-container-failure
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
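Conceptually, the Busybox parameters describe a `dd`-style write of `blockcount` blocks of `blocksize` bytes into `testfile`. The sketch below reproduces that locally to show the expected data volume; it is an assumption about how the check uses these values, not the experiment's actual code:

```shell
# Write 1024 blocks of 4k each to a temp file and report its size in bytes.
testfile="$(mktemp)"
dd if=/dev/zero of="$testfile" bs=4k count=1024 2>/dev/null
wc -c < "$testfile"
rm -f "$testfile"
```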
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-container-failure/rbac.yaml yaml"
```yaml
---
apiVersion: v1
metadata:
labels:
name: pool-container-failure-sa
rules:
- apiGroups:
[
"",
"apps",
"litmuschaos.io",
"batch",
"extensions",
"storage.k8s.io",
"openebs.io",
]
resources:
[
"pods",
"jobs",
"daemonsets",
"replicasets",
"pods/exec",
"configmaps",
"secrets",
"persistentvolumeclaims",
"cstorvolumereplicas",
"chaosexperiments",
"chaosresults",
"chaosengines",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
kind: ClusterRole
name: pool-container-failure-sa
subjects:
- kind: ServiceAccount
name: pool-container-failure-sa
namespace: default
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-container-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pool-container-failure-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-pool-container-failure
spec:
components:
env:
- name: APP_PVC
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: "deployment"
```
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool pod container failure, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-pool-container-failure -n <application-namespace>`
---
id: openebs-pool-network-delay
title: OpenEBS Pool Network Latency Experiment Details
sidebar_label: Pool Network Latency
original_id: openebs-pool-network-delay
---
---
## Experiment Metadata
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-pool-network-delay
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-pool-network-delay
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace while performing application health checks in its respective namespace.
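A data-consistency utility consuming this configmap would read individual keys out of `parameters.yml`. A minimal sketch of that lookup (the awk one-liner is illustrative, not the experiment's actual parser):

```shell
# Extract one key from a parameters.yml-style document.
params='dbuser: root
dbpassword: k8sDem0
dbname: test'
echo "$params" | awk -F': ' '$1 == "dbname" { print $2 }'
```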
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'mysql' or 'busybox':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
## Integrations
- Network delay is achieved using the `pumba` chaos library in case of docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
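Under the hood, injecting delay amounts to installing a `tc`/netem rule on the target's network interface. A dry-run sketch that only prints the kind of command such a library would issue; the interface name and delay value are assumptions:

```shell
# Print (not execute) the style of netem rule a delay library would install.
iface="eth0"
delay_ms="2000"
echo "tc qdisc add dev ${iface} root netem delay ${delay_ms}ms"
```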
## Steps to Execute the Chaos Experiment
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary cluster role permissions to execute the experiment.
#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-network-delay/rbac.yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
labels:
name: pool-network-delay-sa
rules:
- apiGroups:
[
"",
"apps",
"litmuschaos.io",
"batch",
"extensions",
"storage.k8s.io",
"openebs.io",
]
resources:
[
"pods",
"pods/exec",
"jobs",
"configmaps",
"services",
"persistentvolumeclaims",
"storageclasses",
"persistentvolumes",
"chaosengines",
"chaosexperiments",
"chaosresults",
"cstorpools",
"cstorvolumereplicas",
"replicasets",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
kind: ClusterRole
name: pool-network-delay-sa
subjects:
- kind: ServiceAccount
name: pool-network-delay-sa
namespace: default
```
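With the manifest above saved locally (the file name `rbac.yaml` is an assumption, as is the ClusterRoleBinding reusing the ServiceAccount's name), applying and verifying it is a short `kubectl` sequence; the cluster commands are left commented out because they need a live cluster:

```shell
# Names taken from the sample manifest above.
sa_name="pool-network-delay-sa"
ns="default"

# With a reachable cluster (not run here):
# kubectl apply -f rbac.yaml
# kubectl -n "$ns" get serviceaccount "$sa_name"
# kubectl get clusterrole,clusterrolebinding "$sa_name"
echo "verify ServiceAccount ${sa_name} in namespace ${ns}"
```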
### Prepare ChaosEngine
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-network-delay/engine.yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
name: pool-chaos
namespace: default
spec:
annotationCheck: "false"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pool-network-delay-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-pool-network-delay
spec:
components:
env:
- name: APP_PVC
value: "demo-nginx-claim"
- name: OPENEBS_NAMESPACE
value: "openebs"
# in milliseconds
- name: NETWORK_DELAY
value: "60000"
- name: TOTAL_CHAOS_DURATION
value: "60" # in seconds
```
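Note the mixed units in the sample above: `NETWORK_DELAY` is in milliseconds while `TOTAL_CHAOS_DURATION` is in seconds. A quick sketch, using the sample's values, that converts both to seconds before comparing them:

```shell
# Values from the sample ChaosEngine above.
network_delay_ms=60000
total_chaos_duration_s=60

delay_s=$((network_delay_ms / 1000))   # convert milliseconds to seconds
if [ "$delay_s" -le "$total_chaos_duration_s" ]; then
  echo "injected delay: ${delay_s}s within a ${total_chaos_duration_s}s chaos window"
fi
```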
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool network delays, once the experiment (job) is completed. The ChaosResult resource naming convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult pool-chaos-openebs-pool-network-delay -n <application-namespace>`
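The naming convention lends itself to scripting. The sketch below derives the ChaosResult name; the commented `jsonpath` field is an assumption based on the v1alpha1 ChaosResult schema, so verify it against your installed CRD before relying on it:

```shell
engine="pool-chaos"
experiment="openebs-pool-network-delay"
result="${engine}-${experiment}"   # <ChaosEngine-Name>-<ChaosExperiment-Name>
echo "$result"

# With a live cluster (not run here):
# kubectl -n <application-namespace> get chaosresult "$result" \
#   -o jsonpath='{.status.experimentstatus.verdict}'
```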
---
id: openebs-pool-network-loss
title: OpenEBS Pool Network Loss Experiment Details
sidebar_label: Pool Network Loss
original_id: openebs-pool-network-loss
---
---
## Experiment Metadata
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
metadata:
name: openebs-pool-network-loss
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```yaml
metadata:
name: openebs-pool-network-loss
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace while performing application health checks in its respective namespace.
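The configmaps above can also be created imperatively with `kubectl create configmap --from-file`. The sketch below writes the Busybox parameters file (sample placeholder values) and checks the implied test-file size, 4 KiB blocks × 1024 blocks = 4 MiB; the `kubectl` line is commented out because it needs a live cluster:

```shell
# Write the parameters file with the sample placeholder values.
params="$(mktemp)"
cat > "$params" <<'EOF'
blocksize: 4k
blockcount: 1024
testfile: exampleFile
EOF

# Size of the data-persistence test file implied by the values above.
blocksize_kib=4
blockcount=1024
total_mib=$((blocksize_kib * blockcount / 1024))
echo "testfile size: ${total_mib} MiB"

# kubectl -n <application-namespace> create configmap openebs-pool-network-loss \
#   --from-file=parameters.yml="$params"
```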
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'mysql' or 'busybox':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
## Integrations
- Network loss is achieved using the `pumba` chaos library in case of docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
## Steps to Execute the Chaos Experiment
#### Sample RBAC Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-network-loss/rbac.yaml"
```yaml
---
apiVersion: v1
labels:
name: pool-network-loss-sa
rules:
- apiGroups:
- ""
- "apps"
- "litmuschaos.io"
- "batch"
- "extensions"
- "storage.k8s.io"
- "openebs.io"
resources:
- "pods"
- "pods/exec"
- "jobs"
- "configmaps"
- "services"
- "persistentvolumeclaims"
- "storageclasses"
- "persistentvolumes"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
- "cstorpools"
- "cstorvolumereplicas"
- "replicasets"
verbs:
- "create"
- "get"
- "delete"
- "list"
- "patch"
- "update"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
kind: ClusterRole
name: pool-network-loss-sa
subjects:
- kind: ServiceAccount
name: pool-network-loss-sa
namespace: default
```
### Prepare ChaosEngine
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-network-loss/engine.yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
name: pool-chaos
namespace: default
spec:
auxiliaryAppInfo: ""
annotationCheck: "false"
# It can be active/stop
engineState: "active"
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pool-network-loss-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-pool-network-loss
spec:
components:
env:
- name: FORCE
value: "true"
- name: APP_PVC
value: "demo-nginx-claim"
- name: OPENEBS_NAMESPACE
value: "openebs"
- name: NETWORK_PACKET_LOSS_PERCENTAGE
value: "100"
- name: TOTAL_CHAOS_DURATION
value: "120" # in seconds
```
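`NETWORK_PACKET_LOSS_PERCENTAGE` is a percentage value; `100`, as in the sample, makes the pool fully unreachable for the whole chaos duration. A small validator sketch for the tunable before applying the engine:

```shell
# Accept only integer percentages in the range 0..100.
valid_loss() {
  case "$1" in ''|*[!0-9]*) return 1 ;; esac
  [ "$1" -ge 0 ] && [ "$1" -le 100 ]
}

valid_loss 100 && echo "100 is a valid packet-loss percentage"
```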
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool network loss, once the experiment (job) is completed. The ChaosResult resource naming convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult pool-chaos-openebs-pool-network-loss -n <application-namespace>`
---
id: openebs-pool-pod-failure
title: OpenEBS Pool Pod Failure Experiment Details
sidebar_label: Pool Pod Failure
original_id: openebs-pool-pod-failure
---
---
## Experiment Metadata
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-pool-pod-failure)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
metadata:
name: openebs-pool-pod-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
metadata:
name: openebs-pool-pod-failure
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace while performing application health checks in its respective namespace.
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
#### Sample RBAC Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-pod-failure/rbac.yaml yaml"
```yaml
---
apiVersion: v1
labels:
name: pool-pod-failure-sa
rules:
- apiGroups:
[
"",
"apps",
"litmuschaos.io",
"batch",
"extensions",
"storage.k8s.io",
"openebs.io",
]
resources:
[
"pods",
"jobs",
"deployments",
"configmaps",
"secrets",
"replicasets",
"persistentvolumeclaims",
"storageclasses",
"cstorvolumereplicas",
"chaosexperiments",
"chaosresults",
"chaosengines",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
kind: ClusterRole
name: pool-pod-failure-sa
subjects:
- kind: ServiceAccount
name: pool-pod-failure-sa
namespace: default
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-pool-pod-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pool-pod-failure-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-pool-pod-failure
spec:
components:
env:
- name: FORCE
value: "true"
- name: APP_PVC
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: "deployment"
```
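Per the comment in the manifest above, `jobCleanUpPolicy` accepts only `delete` or `retain`. A tiny validator sketch, useful when templating engines:

```shell
valid_cleanup_policy() {
  case "$1" in
    delete|retain) return 0 ;;
    *) return 1 ;;
  esac
}

valid_cleanup_policy delete && echo "delete is a valid jobCleanUpPolicy"
```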
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the pool pod failure, once the experiment (job) is completed. The ChaosResult resource naming convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-pool-pod-failure -n <application-namespace>`
---
id: openebs-target-container-failure
title: OpenEBS Target Container Failure Experiment Details
sidebar_label: Target Container Failure
original_id: openebs-target-container-failure
---
---
## Experiment Metadata
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-target-container-failure)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-container-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-container-failure
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
## Integrations
- Container kill is achieved using the `pumba` chaos library in case of docker runtime, & `litmuslib` using `crictl` tool in case of containerd runtime.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
## Steps to Execute the Chaos Experiment
#### Sample RBAC Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-container-failure/rbac.yaml yaml"
```yaml
---
apiVersion: v1
labels:
name: target-container-failure-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps", "storage.k8s.io"]
resources:
[
"pods",
"jobs",
"pods/exec",
"daemonsets",
"configmaps",
"secrets",
"persistentvolumeclaims",
"storageclasses",
"persistentvolumes",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
kind: ClusterRole
name: target-container-failure-sa
subjects:
- kind: ServiceAccount
name: target-container-failure-sa
namespace: default
```
### Prepare ChaosEngine
<td> LIB_IMAGE </td>
<td> The chaos library image used to run the kill command </td>
<td> Optional </td>
<td> Defaults to `gaiaadm/pumba:0.4.8`. Supported: `{"{docker : gaiaadm/pumba:0.4.8, containerd: gprasath/crictl:ci}"}` </td>
</tr>
<tr>
<td> CONTAINER_RUNTIME </td>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-container-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: target-container-failure-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-target-container-failure
spec:
components:
env:
- name: TARGET_CONTAINER
value: "cstor-istgt"
- name: APP_PVC
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: "deployment"
```
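The default `LIB_IMAGE` depends on `CONTAINER_RUNTIME` (defaults taken from the variables table above). A helper sketch for picking the matching image:

```shell
default_lib_image() {
  case "$1" in
    docker)     echo "gaiaadm/pumba:0.4.8" ;;
    containerd) echo "gprasath/crictl:ci" ;;
    *)          echo "unsupported runtime: $1" >&2; return 1 ;;
  esac
}

default_lib_image docker
```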
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-target-container-failure -n <application-namespace>`
## Recovery
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to a healthy state after the chaos experiment, refer to the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#volume-provisioning) for information on how to recover.
## OpenEBS Target Container Failure Demo [TODO]
---
id: openebs-target-network-delay
title: OpenEBS Target Network Latency Experiment Details
sidebar_label: Target Network Latency
original_id: openebs-target-network-delay
---
---
## Experiment Metadata
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
- Ensure that the Kubernetes Cluster uses Docker runtime
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-network-delay` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-target-network-delay)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-delay
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-delay
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
## Entry Criteria
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
## Integrations
- Network delay is achieved using the `pumba` chaos library in case of docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
## Steps to Execute the Chaos Experiment
#### Sample RBAC Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-delay/rbac.yaml yaml"
```yaml
---
apiVersion: v1
labels:
name: target-network-delay-sa
rules:
- apiGroups:
["", "apps", "litmuschaos.io", "batch", "extensions", "storage.k8s.io"]
resources:
[
"pods",
"pods/exec",
"jobs",
"configmaps",
"secrets",
"services",
"persistentvolumeclaims",
"storageclasses",
"persistentvolumes",
"chaosexperiments",
"chaosresults",
"chaosengines",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
kind: ClusterRole
name: target-network-delay-sa
subjects:
- kind: ServiceAccount
name: target-network-delay-sa
namespace: default
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-delay/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: target-network-delay-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-target-network-delay
spec:
components:
env:
- name: TARGET_CONTAINER
value: "cstor-istgt"
- name: APP_PVC
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: "deployment"
- name: NETWORK_DELAY
value: "30000"
- name: TOTAL_CHAOS_DURATION
value: "60" # in seconds
```
### Create the ChaosEngine Resource
### Check Chaos Experiment Result
- Check whether the application is resilient to the target network delays, once the experiment (job) is completed. The ChaosResult resource naming
convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-target-network-delay -n <application-namespace>`
## Recovery
## Recovery
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to healthy state post the chaos experiment, then please refer the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#volume-provisioning) for more info on how to recover the same.
View File
@ -1,10 +1,11 @@
---
id: version-1.1.0-openebs-target-network-loss
id: openebs-target-network-loss
title: OpenEBS Target Network Loss Experiment Details
sidebar_label: Target Network Loss
original_id: openebs-target-network-loss
---
------
---
## Experiment Metadata
@ -21,44 +22,47 @@ original_id: openebs-target-network-loss
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
- Ensure that the Kubernetes cluster uses the Docker runtime
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-network-loss` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-target-network-loss)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-loss
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-loss
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-loss
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-network-loss
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -72,8 +76,8 @@ original_id: openebs-target-network-loss
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -84,8 +88,8 @@ If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
## Integrations
- Network loss is achieved using the `pumba` chaos library in the case of the Docker runtime. Support for other runtimes via direct invocation of `tc` will be added soon.
- The desired lib image can be configured in the env variable `LIB_IMAGE`.
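As a sketch of the `LIB_IMAGE` override mentioned above (the image and tag below are illustrative assumptions; use the pumba image matching your Litmus release), the value goes under the experiment's `env` list in the ChaosEngine:

```yaml
experiments:
  - name: openebs-target-network-loss
    spec:
      components:
        env:
          # Illustrative override; verify the image/tag against your Litmus release
          - name: LIB_IMAGE
            value: "gaiaadm/pumba:0.6.5"
```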
## Steps to Execute the Chaos Experiment
@ -99,7 +103,8 @@ Use this sample RBAC manifest to create a chaosServiceAccount in the desired (ap
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-loss/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-loss/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -118,9 +123,24 @@ metadata:
labels:
name: target-network-loss-sa
rules:
- apiGroups: ["","apps","litmuschaos.io","batch","extensions","storage.k8s.io"]
resources: ["pods","pods/exec","jobs","configmaps","secrets","services","persistentvolumeclaims","storageclasses","persistentvolumes","chaosexperiments","chaosresults","chaosengines"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups:
["", "apps", "litmuschaos.io", "batch", "extensions", "storage.k8s.io"]
resources:
[
"pods",
"pods/exec",
"jobs",
"configmaps",
"secrets",
"services",
"persistentvolumeclaims",
"storageclasses",
"persistentvolumes",
"chaosexperiments",
"chaosresults",
"chaosengines",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -133,9 +153,9 @@ roleRef:
kind: ClusterRole
name: target-network-loss-sa
subjects:
- kind: ServiceAccount
name: target-network-loss-sa
namespace: default
- kind: ServiceAccount
name: target-network-loss-sa
namespace: default
```
### Prepare ChaosEngine
@ -193,7 +213,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-loss/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-network-loss/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -202,35 +223,35 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: 'false'
annotationCheck: "false"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: target-network-loss-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-target-network-loss
spec:
components:
env:
- name: TARGET_CONTAINER
value: 'cstor-istgt'
value: "cstor-istgt"
- name: APP_PVC
value: 'demo-nginx-claim'
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: 'deployment'
value: "deployment"
- name: TOTAL_CHAOS_DURATION
value: '120' # in seconds
value: "120" # in seconds
```
### Create the ChaosEngine Resource
@ -248,8 +269,8 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the target network loss, once the experiment (job) is completed. The ChaosResult resource naming
convention is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the target network loss, once the experiment (job) is completed. The ChaosResult resource naming
convention is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-target-network-loss -n <application-namespace>`
View File
@ -1,10 +1,11 @@
---
id: version-1.2.0-openebs-target-pod-failure
id: openebs-target-pod-failure
title: OpenEBS Target Pod Failure Experiment Details
sidebar_label: Target Pod Failure
original_id: openebs-target-pod-failure
---
------
---
## Experiment Metadata
@ -21,43 +22,46 @@ original_id: openebs-target-pod-failure
</tr>
</table>
<b>Note:</b> In this example, we are using nginx as a stateful application that stores static pages on a Kubernetes volume.
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/charts/openebs/experiments/openebs-target-pod-failure)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-pod-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-pod-failure
data:
parameters.yml: |
dbuser: root
dbpassword: k8sDem0
dbname: test
```
- For Busybox data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-pod-failure
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
```
---
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-target-pod-failure
data:
parameters.yml: |
blocksize: 4k
blockcount: 1024
testfile: exampleFile
```
- Ensure that the chaosServiceAccount used for the experiment has cluster-scope permissions as the experiment may involve carrying out the chaos in the `openebs` namespace
while performing application health checks in its respective namespace.
while performing application health checks in its respective namespace.
## Entry Criteria
@ -71,8 +75,8 @@ original_id: openebs-target-pod-failure
If the experiment tunable DATA_PERSISTENCE is set to 'enabled':
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
- Application data written prior to chaos is successfully retrieved/read
- Database consistency is maintained as per db integrity check utils
## Details
@ -96,7 +100,8 @@ Use this sample RBAC manifest to create a chaosServiceAccount in the desired (ap
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-pod-failure/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-pod-failure/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -115,12 +120,28 @@ metadata:
labels:
name: target-pod-failure-sa
rules:
- apiGroups: ["","apps","litmuschaos.io","batch","extensions","storage.k8s.io"]
resources: ["pods","jobs","pods/log","deployments","pods/exec","events","chaosexperiments","chaosresults","chaosengines","configmaps","secrets","services","persistentvolumeclaims","storageclasses","persistentvolumes"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
- apiGroups:
["", "apps", "litmuschaos.io", "batch", "extensions", "storage.k8s.io"]
resources:
[
"pods",
"jobs",
"deployments",
"pods/exec",
"chaosexperiments",
"chaosresults",
"chaosengines",
"configmaps",
"secrets",
"services",
"persistentvolumeclaims",
"storageclasses",
"persistentvolumes",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -133,10 +154,9 @@ roleRef:
kind: ClusterRole
name: target-pod-failure-sa
subjects:
- kind: ServiceAccount
name: target-pod-failure-sa
namespace: default
- kind: ServiceAccount
name: target-pod-failure-sa
namespace: default
```
### Prepare ChaosEngine
@ -182,7 +202,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-pod-failure/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/openebs/openebs-target-pod-failure/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -191,30 +212,30 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: 'false'
annotationCheck: "false"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: target-pod-failure-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: openebs-target-pod-failure
spec:
components:
env:
- name: FORCE
value: 'true'
value: "true"
- name: APP_PVC
value: 'demo-nginx-claim'
value: "demo-nginx-claim"
- name: DEPLOY_TYPE
value: 'deployment'
value: "deployment"
```
### Create the ChaosEngine Resource
@ -231,12 +252,12 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the target container kill, once the experiment (job) is completed. The ChaosResult resource naming convention
is: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult target-chaos-openebs-target-pod-failure -n <application-namespace>`
## Recovery
## Recovery
- If the verdict of the ChaosResult is `Fail`, and/or the OpenEBS components do not return to healthy state post the chaos experiment, then please refer the [OpenEBS troubleshooting guide](https://docs.openebs.io/docs/next/troubleshooting.html#volume-provisioning) for more info on how to recover the same.
View File
@ -0,0 +1,40 @@
---
id: plugins
title: Using other chaos libraries as plugins
sidebar_label: Plugins
original_id: plugins
---
---
Litmus provides a way to use any chaos library or tool to inject chaos. To be compatible with Litmus, a chaos tool should satisfy the following requirements:
- Should be available as a Docker Image
- Should take configuration through a `config-map`
The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib
The Litmus project integrates with the following chaos libraries.
| Chaos Library | Logo | Experiments covered |
| ------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| <a href="https://github.com/litmuschaos/litmus-ansible" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50"/> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog` |
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"/> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"/> | PowerfulSeal provides chaos capabilities for `pod-kill` |
Plugin usage is specified as a configuration parameter inside the chaos experiment.
> Add an example snippet here.
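Pending an official snippet, here is one hypothetical sketch: many Litmus experiments expose `LIB` and `LIB_IMAGE` env tunables in the ChaosEngine, which select the plugin that injects the chaos. The env names and values below are assumptions; check the specific experiment's documentation for the libs it actually supports.

```yaml
experiments:
  - name: pod-network-latency
    spec:
      components:
        env:
          # Hypothetical values; supported libs depend on the experiment
          - name: LIB
            value: "pumba"
          - name: LIB_IMAGE
            value: "gaiaadm/pumba:0.6.5"
```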
<br/>
<br/>
<hr/>
<br/>
<br/>
View File
@ -1,10 +1,11 @@
---
id: version-1.1.0-pod-cpu-hog
id: pod-cpu-hog
title: Pod CPU Hog Details
sidebar_label: Pod CPU Hog
original_id: pod-cpu-hog
---
------
---
## Experiment Metadata
@ -37,10 +38,9 @@ original_id: pod-cpu-hog
## Details
- This experiment consumes the CPU resources on the application container (upward of 80%) on the specified number of cores
- It simulates conditions where app pods experience CPU spikes, whether due to expected or undesired processes, thereby testing how the
  overall application stack behaves when this occurs.
## Integrations
@ -58,7 +58,8 @@ Use this sample RBAC manifest to create a chaosServiceAccount in the desired (ap
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -77,9 +78,10 @@ metadata:
labels:
name: pod-cpu-hog-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -93,16 +95,16 @@ roleRef:
kind: Role
name: pod-cpu-hog-sa
subjects:
- kind: ServiceAccount
name: pod-cpu-hog-sa
namespace: default
- kind: ServiceAccount
name: pod-cpu-hog-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired
- Override the experiment tunables if desired
#### Supported Experiment Tunables
@ -148,7 +150,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -157,19 +160,19 @@ metadata:
namespace: default
spec:
# It can be true/false
annotationCheck: 'true'
annotationCheck: "true"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-cpu-hog-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: pod-cpu-hog
spec:
@ -178,16 +181,15 @@ spec:
# Provide name of target container
# where chaos has to be injected
- name: TARGET_CONTAINER
value: 'nginx'
value: "nginx"
#number of cpu cores to be consumed
#verify the resources the app has been launched with
- name: CPU_CORES
value: '1'
value: "1"
- name: TOTAL_CHAOS_DURATION
value: '60' # in seconds
value: "60" # in seconds
```
### Create the ChaosEngine Resource
@ -204,10 +206,10 @@ spec:
### Check Chaos Experiment Result
- Check whether the application stack is resilient to CPU spikes on the app replica, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application stack is resilient to CPU spikes on the app replica, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-pod-cpu-hog -n <application-namespace>`
## Pod CPU Hog Experiment Demo
## Pod CPU Hog Experiment Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/MBGSPmZKb2I).
View File
@ -1,10 +1,11 @@
---
id: version-1.1.0-pod-delete
id: pod-delete
title: Pod Delete Experiment Details
sidebar_label: Pod Delete
original_id: pod-delete
---
------
---
## Experiment Metadata
@ -24,7 +25,7 @@ original_id: pod-delete
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-delete)
- Ensure that the `pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-delete)
## Entry Criteria
@ -57,7 +58,8 @@ original_id: pod-delete
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-delete/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-delete/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -76,12 +78,21 @@ metadata:
labels:
name: pod-delete-sa
rules:
- apiGroups: ["","litmuschaos.io","batch","apps"]
resources: ["pods","deployments","jobs","configmaps","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs : ["get","list"]
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"jobs",
"configmaps",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -95,10 +106,9 @@ roleRef:
kind: Role
name: pod-delete-sa
subjects:
- kind: ServiceAccount
name: pod-delete-sa
namespace: default
- kind: ServiceAccount
name: pod-delete-sa
namespace: default
```
### Prepare ChaosEngine
@ -155,7 +165,8 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-delete/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-delete/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@ -164,19 +175,19 @@ metadata:
namespace: default
spec:
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
# It can be true/false
annotationCheck: 'true'
annotationCheck: "true"
# It can be active/stop
engineState: 'active'
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
auxiliaryAppInfo: ""
chaosServiceAccount: pod-delete-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
experiments:
- name: pod-delete
spec:
@ -184,13 +195,13 @@ spec:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
value: "30"
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '10'
value: "10"
# pod failures without '--force' & default terminationGracePeriodSeconds
- name: FORCE
value: 'false'
value: "false"
```
### Create the ChaosEngine Resource
@ -207,10 +218,10 @@ spec:
### Check Chaos Experiment Result
- Check whether the application is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult nginx-chaos-pod-delete -n <application-namespace>`
## Application Pod Failure Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/X3JvY_58V9A)
View File

@ -1,10 +1,11 @@
---
id: version-1.1.0-pod-network-corruption
id: pod-network-corruption
title: Pod Network Corruption Experiment Details
sidebar_label: Pod Network Corruption
original_id: pod-network-corruption
---
------
---
## Experiment Metadata
@ -22,9 +23,10 @@ original_id: pod-network-corruption
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-corruption` experiment resource is available in the cluster by `kubectl get chaosexperiments` command. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-corruption)
- The cluster must run the Docker container runtime
<div class="danger">
<strong>NOTE</strong>:
@ -41,7 +43,7 @@ original_id: pod-network-corruption
## Details
- The application pod should be healthy once chaos is stopped. Service requests should be served despite chaos.
- Injects packet corruption on the specified container by starting a traffic control (tc) process with netem rules to add egress packet corruption
- Corruption is injected via the pumba library using the `pumba netem corrupt` command, passing the relevant network interface, packet-corruption percentage, chaos duration, and a regex filter for the container name
- Can test the application's resilience to lossy/flaky network
@ -58,7 +60,8 @@ original_id: pod-network-corruption
#### Sample Rbac Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-corruption/rbac.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-corruption/rbac.yaml yaml"
```yaml
---
apiVersion: v1
@ -77,9 +80,10 @@ metadata:
labels:
name: pod-network-corruption-sa
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
@ -93,9 +97,9 @@ roleRef:
kind: Role
name: pod-network-corruption-sa
subjects:
- kind: ServiceAccount
name: pod-network-corruption-sa
namespace: default
- kind: ServiceAccount
name: pod-network-corruption-sa
namespace: default
```
### Prepare ChaosEngine
@ -158,44 +162,45 @@ subjects:
#### Sample ChaosEngine Manifest
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-corruption/engine.yaml yaml)
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-corruption/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-network-chaos
name: nginx-network-chaos
namespace: default
spec:
# It can be delete/retain
jobCleanUpPolicy: 'delete'
jobCleanUpPolicy: "delete"
# It can be true/false
annotationCheck: 'true'
annotationCheck: "true"
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
appinfo:
appns: 'default'
appinfo:
appns: "default"
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: 'app=nginx'
appkind: 'deployment'
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-network-corruption-sa
experiments:
- name: pod-network-corruption
spec:
components:
env:
#Container name where chaos has to be injected
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: 'nginx'
value: "nginx"
#Network interface inside target container
#Network interface inside target container
- name: NETWORK_INTERFACE
value: 'eth0'
value: "eth0"
- name: TOTAL_CHAOS_DURATION
value: '60' # in seconds
value: "60" # in seconds
```
### Create the ChaosEngine Resource
@ -206,17 +211,16 @@ spec:
### Watch Chaos progress
- View the impact of network packet corruption on the affected pod from the cluster nodes (an alternative is to set up a ping to a remote IP from inside the target pod)
`ping <pod_ip_address>`
### Check Chaos Experiment Result
- Check whether the application is resilient to the Pod Network Packet Corruption, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
- Check whether the application is resilient to the Pod Network Packet Corruption, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `{"<ChaosEngine-Name>-<ChaosExperiment-Name>"}`.
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Packet Corruption Demo
## Application Pod Network Packet Corruption Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/kSiLrIaILvs).
View File
@ -1,10 +1,11 @@
---
id: version-1.1.0-pod-network-latency
id: pod-network-latency
title: Pod Network Latency Experiment Details
sidebar_label: Pod Network Latency
original_id: pod-network-latency
---
------
---
## Experiment Metadata
@ -22,10 +23,10 @@ original_id: pod-network-latency
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-latency` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-latency)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
## Details
- The application pod should be healthy once chaos is stopped. Service-requests should be served despite chaos.
- Causes flaky access to application replica by injecting network delay using pumba.
- Injects latency on the specified container by starting a traffic control (tc) process with netem rules to add egress delays
- Latency is injected via pumba library with command pumba netem delay by passing the relevant network interface, latency, chaos duration and regex filter for container name
- Can test the application's resilience to lossy/flaky network
## Steps to Execute the Chaos Experiment
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-latency/rbac.yaml yaml"
```yaml
---
apiVersion: v1
labels:
name: pod-network-latency-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
kind: Role
name: pod-network-latency-sa
subjects:
- kind: ServiceAccount
name: pod-network-latency-sa
namespace: default
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-latency/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-network-chaos
namespace: default
spec:
# It can be delete/retain
jobCleanUpPolicy: "delete"
# It can be true/false
annotationCheck: "true"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
appinfo:
appns: "default"
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-network-latency-sa
experiments:
- name: pod-network-latency
env:
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: "nginx"
#Network interface inside target container
- name: NETWORK_INTERFACE
value: "eth0"
- name: LIB_IMAGE
value: "gaiaadm/pumba:0.6.5"
- name: NETWORK_LATENCY
value: "60000"
- name: TOTAL_CHAOS_DURATION
value: "60" # in seconds
```
### Create the ChaosEngine Resource
### Watch Chaos progress
- View network latency by setting up a ping on the affected pod from the cluster nodes
`ping <pod_ip_address>`
### Check Chaos Experiment Result
- Check whether the application is resilient to the Pod Network Latency, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Latency Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/QsQZyXVCcCw).

---
id: pod-network-loss
title: Pod Network Loss Experiment Details
sidebar_label: Pod Network Loss
original_id: pod-network-loss
---
## Experiment Metadata
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/charts/generic/experiments/pod-network-loss)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
## Details
- Pod-network-loss injects chaos to disrupt network connectivity to kubernetes pods.
- The application pod should be healthy once chaos is stopped. Service-requests should be served despite chaos.
- Causes loss of access to application replica by injecting packet loss using pumba
## Steps to Execute the Chaos Experiment
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-loss/rbac.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
labels:
name: pod-network-loss-sa
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
["pods", "jobs", "chaosengines", "chaosexperiments", "chaosresults"]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
kind: Role
name: pod-network-loss-sa
subjects:
- kind: ServiceAccount
name: pod-network-loss-sa
namespace: default
```
### Prepare ChaosEngine
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-network-loss/engine.yaml yaml"
```yaml
# chaosengine.yaml
apiVersion: litmuschaos.io/v1alpha1
namespace: default
spec:
# It can be delete/retain
jobCleanUpPolicy: "delete"
# It can be true/false
annotationCheck: "true"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false
appinfo:
appns: "default"
# FYI, To see app label, apply kubectl get pods --show-labels
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: pod-network-loss-sa
experiments:
- name: pod-network-loss
spec:
components:
env:
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: "nginx"
- name: LIB_IMAGE
value: "gaiaadm/pumba:0.6.5"
#Network interface inside target container
- name: NETWORK_INTERFACE
value: "eth0"
- name: NETWORK_PACKET_LOSS_PERCENTAGE
value: "100"
- name: TOTAL_CHAOS_DURATION
value: "60" # in seconds
```
### Create the ChaosEngine Resource
### Watch Chaos progress
- View network packet loss by setting up a ping on the affected pod from the cluster nodes
`ping <pod_ip_address>`
### Check Chaos Experiment Result
- Check whether the application is resilient to the Pod Network Loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult <ChaosEngine-Name>-<ChaosExperiment-Name> -n <application-namespace>`
## Application Pod Network Loss Demo
- A sample recording of this experiment execution is provided [here](https://youtu.be/jqvYy-nWc_I).

---
id: resources
title: Resources related to Chaos Engineering on Kubernetes
sidebar_label: Resources
original_id: resources
---
## Chaos Demos
### Getting Started
Use this video to learn how to get started with Litmus. You will learn how to install Litmus, how to inject a fault into your application using one of the experiments available at ChaosHub.
<a href="https://asciinema.org/a/G9TcXpgikLuGTBY7btIUNSuWN" target="_blank">
<img src={require("./assets/getstarted.svg").default} width="300"/>
</a>
<hr/>
## Reference Implementations
| Reference | Description |
| ------------------ | ---------------------------------------------------------------------- |
| https://openebs.ci | CNCF SandBox project uses Litmus chaos experiments in its CI pipelines |
<br/>
<br/>
<hr/>
<br/>
<br/>

---
id: architecture
original_id: architecture
title: Litmus Architecture
sidebar_label: Architecture
---
<hr/>
<img src={require("./assets/litmus-schematic.png").default} width="800" />
**Chaos-Operator**

id: chaosexperiment
title: Constructing the ChaosExperiment
sidebar_label: ChaosExperiment
original_id: chaosexperiment
---

id: chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
original_id: chaoshub
---
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category.
Following Platform Chaos experiments are available on ChaosHub
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | ----------------- |
| AWS | Amazon Web Services platform. Includes EKS. | None |
| GCP | Google Cloud Platform. Includes GKE. | None |
| Azure | Microsoft Azure platform. Includes AKS. | None |

id: chaosresult
title: Constructing the ChaosResult
sidebar_label: ChaosResult
original_id: chaosresult
---

id: chaosschedule
title: Constructing the ChaosSchedule
sidebar_label: ChaosSchedule (alpha)
original_id: chaosschedule
---
</tr>
<tr>
<th>Notes</th>
<td>The <code>includedDays</code> in the spec specifies a (comma-separated) list of days of the week at which chaos is allowed to take place. {'{'}day_name{'}'} is to be specified with the first 3 letters of the name of day such as <code>Mon</code>, <code>Tue</code> etc.</td>
</tr>
</table>
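As an illustration of the note above, a ChaosSchedule fragment that allows chaos only on a few weekdays could look like the following sketch; the `workDays.includedDays` path is assumed from the alpha schema and should be verified against the installed CRD:

```yaml
spec:
  schedule:
    repeat:
      workDays:
        # first 3 letters of the day names, comma-separated
        includedDays: "Mon,Tue,Wed"
```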
</tr>
<tr>
<th>Range</th>
<td><i>{'{'}hour_number{'}'} will range from 0 to 23</i> (type: string)(pattern: {'{'}hour_number{'}'}-{'{'}hour_number{'}'}).</td>
</tr>
<tr>
<th>Default</th>

---
id: community
title: Join Litmus Community
sidebar_label: Community
original_id: community
---
<br/><br/>

Litmus community is a subset of the larger Kubernetes community. Have a question? Want to stay in touch with the happenings on Chaos Engineering on Kubernetes? Join the `#litmus` channel on Kubernetes Slack.
<a href="https://kubernetes.slack.com/messages/CNXNB0ZTN" target="_blank"><img src="/docs/assets/join-community.png" width="400"/></a>

<br/><br/>
<hr/>
<br/><br/>

id: devguide
title: Developer Guide for ChaosCharts
sidebar_label: Developer Guide
original_id: devguide
---
Below are some key points to remember before understanding how to write a new chaos chart:
> ChaosCharts repository : https://github.com/litmuschaos/chaos-charts
>
> Litmus-Go repository : https://github.com/litmuschaos/litmus-go/tree/master/experiments
>
> Website rendering code repository: https://github.com/litmuschaos/charthub.litmuschaos.io
The experiments & chaos libraries are typically written in Go, though not mandatory. Ensure that
the experiments can be executed in a container & can read/update the litmuschaos custom resources. For example,
if you are writing an experiment in Go, use this [clientset](https://github.com/litmuschaos/chaos-operator/tree/master/pkg/client).

Litmus Experiment contains the logic of pre-checks, chaos-injection, litmus-probes, post-checks, and result-updates.
Typically, these are accompanied by a Kubernetes job that can execute the respective experiment.
<hr/>
Here is an example chaos experiment CR for a [pod-delete](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/pod-delete/experiment.yaml) experiment
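For orientation, the broad shape of a ChaosExperiment CR is sketched below; the values are abridged and partly hypothetical, so treat the linked pod-delete `experiment.yaml` as the authoritative reference:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosExperiment
metadata:
  name: pod-delete
spec:
  definition:
    scope: Namespaced
    # runner image and entrypoint are illustrative
    image: "litmuschaos/go-runner:latest"
    command:
      - /bin/bash
    args:
      - -c
      - ./experiments -name pod-delete
    env:
      # tunables exposed to the ChaosEngine, with defaults
      - name: TOTAL_CHAOS_DURATION
        value: "15"
```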
<hr/>
## Developing a ChaosExperiment
A detailed how-to guide on developing chaos experiments is available [here](https://github.com/litmuschaos/litmus-go/tree/master/contribute/developer-guide)
<br/>
