Migrated latest docs. (#2)

* Migrated latest docs.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Added Changes for sidebar and 1.10.0 docs.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Added Changes for 1.11.0 and master docs.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Added routes changes and minor fixes.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Synced 428 and 429 commits.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Added required changes.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>

* Added required changes.

Signed-off-by: Vedant Shrotria <vedant.shrotria@mayadata.io>
This commit is contained in:
VEDANT SHROTRIA 2020-12-16 19:51:56 +05:30 committed by GitHub
parent 7d7b5cc012
commit e67110a933
249 changed files with 59523 additions and 699 deletions

@@ -9,6 +9,6 @@ RUN npm run build
 CMD ["npm", "start"]
 FROM nginx:1.13-alpine
-COPY --from=build-env /app/website/build/litmus/ /usr/share/nginx/html
+COPY --from=build-env /app/website/build/ /usr/share/nginx/html
 COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf
 EXPOSE 80

@@ -24,7 +24,7 @@ Provide this ServiceAccount in ChaosEngine's .spec.chaosServiceAccount.
 - Select Chaos Experiment from [hub.litmuschaos.io](https://hub.litmuschaos.io/) and click on `INSTALL EXPERIMENT` button.
 ```bash
-kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-delete/experiment.yaml -n litmus
+kubectl apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-delete/experiment.yaml -n litmus
 ```
 #### Prepare RBAC Manifest
@@ -64,7 +64,8 @@ rules:
     ]
   verbs:
     ["create", "delete", "get", "list", "patch", "update", "deletecollection"]
-- apiGroups: ["", "apps", "litmuschaos.io"]
+- apiGroups:
+    ["", "apps", "litmuschaos.io", "apps.openshift.io", "argoproj.io"]
   resources:
     [
       "configmaps",
@@ -75,6 +76,8 @@ rules:
       "replicasets",
       "deployments",
       "statefulsets",
+      "deploymentconfigs",
+      "rollouts",
       "services",
     ]
   verbs: ["get", "list", "patch", "update"]
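For reference, here is a sketch of how the widened rule reads after this change (the resource list is abridged to what this hunk shows; the full list in the chart manifest is longer):

```yaml
# Sketch of the updated ClusterRole rule; resources abridged to the visible hunk
- apiGroups: ["", "apps", "litmuschaos.io", "apps.openshift.io", "argoproj.io"]
  resources:
    [
      "configmaps",
      "replicasets",
      "deployments",
      "statefulsets",
      "deploymentconfigs",
      "rollouts",
      "services",
    ]
  verbs: ["get", "list", "patch", "update"]
```

The added `apps.openshift.io` and `argoproj.io` groups are what allow the experiment to target the new `deploymentconfigs` and `rollouts` kinds.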

@@ -4,7 +4,7 @@ title: Litmus Architecture
 sidebar_label: Architecture
 ---
-<hr></hr>
+<hr/>
 <img src={require('./assets/litmus-schematic.png').default} width="800" />
@@ -24,7 +24,7 @@ During installation, the following three CRDs are installed on the Kubernetes cl
 **Chaos-Experiments**
-Chaos Experiment is a CR and are available as YAML files on <a href="https://hub.litmuschaos.io" target="_blank">ChaosHub</a>.. For more details visit Chaos Hub [documentation](chaoshub.md).
+Chaos Experiment is a CR and are available as YAML files on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>. For more details visit Chaos Hub [documentation](chaoshub.md).
 **Chaos-Engine**

Binary file not shown (after: 105 KiB).

@@ -24,7 +24,7 @@ sidebar_label: Cassandra Pod Delete
 ## Prerequisites
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `cassandra-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/cassandra/cassandra-pod-delete/experiment.yaml)
+- Ensure that the `cassandra-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/cassandra/cassandra-pod-delete/experiment.yaml)
 ## Entry Criteria
@@ -208,7 +208,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
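The 64-character constraint above can be checked with a quick shell sketch; the engine, experiment, and instance names here are illustrative, not taken from a real chart:

```shell
# Illustrative names; the real ChaosResult name is <engine>-<experiment>,
# with INSTANCE_ID appended as a suffix when it is set.
engine="cassandra-chaos"
experiment="cassandra-pod-delete"
instance_id="04-05-2020-9-00"

chaosresult="${engine}-${experiment}-${instance_id}"
len=${#chaosresult}
echo "${chaosresult} (${len} chars)"
# Kubernetes caps label values and some object names around 63/64 chars,
# hence the < 64 requirement above.
[ "$len" -lt 64 ] && echo "OK: under 64 characters"
```

With these names the result is 52 characters, comfortably under the limit.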

@@ -66,7 +66,7 @@ namespaces. Ensure that you have the right permission to be able to create the s
 - Apply the LitmusChaos Operator manifest:
 ```
-kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.9.0.yaml
+kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
 ```
 - Install the litmus-admin service account to be used by the chaos-operator while executing the experiment (this example
@@ -86,7 +86,7 @@ kubectl apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic
 - **Note**: If you are interested in using chaostoolkit to perform the pod-delete, instead of the native litmus lib, you can apply
 this [rbac](https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/k8-pod-delete/Cluster/rbac-admin.yaml)
-& [experiment](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/k8-pod-delete/experiment.yaml) manifests instead
+& [experiment](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/k8-pod-delete/experiment.yaml) manifests instead
 of the ones described above.
 - Create the service account and associated RBAC, which will be used by the Argo workflow controller to execute the

@@ -56,7 +56,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Type</th>
-<td>Mandatory</td>
+<td>Optional</td>
 </tr>
 <tr>
 <th>Range</th>
@@ -68,7 +68,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>appns</code> in the spec specifies the namespace of the AUT. Usually provided as a quoted string.</td>
+<td>The <code>appns</code> in the spec specifies the namespace of the AUT. Usually provided as a quoted string. It is optional for the infra chaos.</td>
 </tr>
 </table>
@@ -83,7 +83,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Type</th>
-<td>Mandatory</td>
+<td>Optional</td>
 </tr>
 <tr>
 <th>Range</th>
@@ -95,7 +95,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>applabel</code> in the spec specifies a unique label of the AUT. Usually provided as a quoted string of pattern key=value. Note that if multiple applications share the same label within a given namespace, the AUT is filtered based on the presence of the chaos annotation <code>litmuschaos.io/chaos: "true"</code>. If, however, the <code>annotationCheck</code> is disabled, then a random application (pod) sharing the specified label is selected for chaos.</td>
+<td>The <code>applabel</code> in the spec specifies a unique label of the AUT. Usually provided as a quoted string of pattern key=value. Note that if multiple applications share the same label within a given namespace, the AUT is filtered based on the presence of the chaos annotation <code>litmuschaos.io/chaos: "true"</code>. If, however, the <code>annotationCheck</code> is disabled, then a random application (pod) sharing the specified label is selected for chaos. It is optional for the infra chaos.</td>
 </tr>
 </table>
@@ -110,11 +110,11 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Type</th>
-<td>Mandatory</td>
+<td>Optional</td>
 </tr>
 <tr>
 <th>Range</th>
-<td><code>deployment</code>, <code>statefulset</code>, <code>daemonset</code></td>
+<td><code>deployment</code>, <code>statefulset</code>, <code>daemonset</code>, <code>deploymentconfig</code>, <code>rollout</code></td>
 </tr>
 <tr>
 <th>Default</th>
@@ -122,7 +122,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>appkind</code> in the spec specifies the Kubernetes resource type of the app deployment. The Litmus ChaosOperator supports chaos on deployments, statefulsets and daemonsets. Application health check routines are dependent on the resource types, in case of some experiments.</td>
+<td>The <code>appkind</code> in the spec specifies the Kubernetes resource type of the app deployment. The Litmus ChaosOperator supports chaos on deployments, statefulsets and daemonsets. Application health check routines are dependent on the resource types, in case of some experiments. It is optional for the infra chaos.</td>
 </tr>
 </table>
@@ -446,7 +446,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<i>user-defined</i> (type: {"{"}name: string, mountPath: string{"}"})
+<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -473,7 +473,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<i>user-defined</i> (type: {"{"}name: string, mountPath: string{"}"})
+<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -485,6 +485,87 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 </table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.nodeSelector</code></td>
</tr>
<tr>
<th>Description</th>
<td>Node selectors for the runner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td>Labels in the form of key=value</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.nodeSelector</code> field contains labels of the node on which the runner pod should be scheduled. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.resources</code></td>
</tr>
<tr>
<th>Description</th>
<td>Specify the resource requirements for the ChaosRunner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: corev1.ResourceRequirements)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.resources</code> contains the resource requirements for the ChaosRunner Pod, where we can provide resource requests and limits for the pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.tolerations</code></td>
</tr>
<tr>
<th>Description</th>
<td>Toleration for the runner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.Toleration)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.tolerations</code> field provides tolerations for the runner pod so that it can be scheduled on the respective tainted node. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
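Taken together, the three runner-level fields above slot into a ChaosEngine like this (a minimal sketch; the node label, taint, and resource values are illustrative, not defaults):

```yaml
spec:
  components:
    runner:
      # Schedule the runner on a specific node (illustrative label)
      nodeSelector:
        kubernetes.io/hostname: node-1
      # Requests/limits for the runner pod (type: corev1.ResourceRequirements)
      resources:
        requests:
          cpu: "125m"
          memory: "64Mi"
        limits:
          cpu: "250m"
          memory: "128Mi"
      # Allow scheduling onto a tainted node (type: []corev1.Toleration)
      tolerations:
        - key: "node.kubernetes.io/unreachable"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 60
```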
 ## Experiment Specification
 <table>
@@ -529,7 +610,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<i>user-defined</i> (type: {"{"}name: string, mountPath: string{"}"})
+<td><i>user-defined</i> (type: {'{'}name: string, value: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -556,7 +637,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<i>user-defined</i> (type: {"{"}name: string, mountPath: string{"}"})
+<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -583,7 +664,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<i>user-defined</i> (type: {"{"}name: string, mountPath: string{"}"})
+<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -691,7 +772,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Range</th>
-<td><i> It contains values in the form {"{"}delay: int, timeout: int{"}"} </i></td>
+<td><i> It contains values in the form {'{'}delay: int, timeout: int{'}'} </i></td>
 </tr>
 <tr>
 <th>Default</th>
@@ -733,7 +814,7 @@ This section describes the fields in the ChaosEngine spec and the possible value
 <table>
 <tr>
 <th>Field</th>
-<td><code>.spec.components.runner.experimentannotation</code></td>
+<td><code>.spec.experiments[].spec.components.experimentannotation</code></td>
 </tr>
 <tr>
 <th>Description</th>
@@ -753,7 +834,34 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>.components.runner.experimentannotation</code> allows developers to specify the custom annotations for the experiment pod.</td>
+<td>The <code>.spec.components.experimentannotation</code> allows developers to specify the custom annotations for the experiment pod.</td>
 </tr>
 </table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.tolerations</code></td>
</tr>
<tr>
<th>Description</th>
<td>Toleration for the experiment pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.Toleration)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.tolerations</code> field provides tolerations for the experiment pod so that it can be scheduled on the respective tainted node. Typically used in case of infra/node level chaos.</td>
</tr> </tr>
 </table>
@@ -780,6 +888,6 @@ This section describes the fields in the ChaosEngine spec and the possible value
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>.probe</code> allows developers to specify the chaos hypothesis. It supports three types: <code>cmdProbe</code>, <code>k8sProbe</code>, <code>httpProbe</code></td>
+<td>The <code>.probe</code> allows developers to specify the chaos hypothesis. It supports four types: <code>cmdProbe</code>, <code>k8sProbe</code>, <code>httpProbe</code>, <code>promProbe</code>. For more details <a href="https://docs.litmuschaos.io/docs/litmus-probe/">refer</a></td>
 </tr>
 </table>
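As a rough illustration, an `httpProbe` entry in a ChaosEngine might look like the sketch below; the field names follow the litmus-probe documentation referenced above and may vary between Litmus versions, and the probe name and URL are made up:

```yaml
probe:
  - name: "check-frontend-url"          # illustrative probe name
    type: "httpProbe"
    httpProbe/inputs:
      url: "http://frontend-service:80" # illustrative endpoint of the AUT
      expectedResponseCode: "200"
    mode: "Continuous"
    runProperties:
      probeTimeout: 5
      interval: 5
      retry: 1
```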

@@ -195,7 +195,7 @@ This section describes the fields in the ChaosExperiment spec and the possible v
 </tr>
 <tr>
 <th>Range</th>
-<td><i>user-defined</i> (type: {"{"}name: string, value: string{"}"})</td>
+<td><i>user-defined</i> (type: {'{'}name: string, value: string{'}'})</td>
 </tr>
 <tr>
 <th>Default</th>

@@ -74,7 +74,5 @@ Chaos experiments that inject chaos into the platform resources of Kubernetes ar
 Following Platform Chaos experiments are available on ChaosHub
 | Platform | Description | Chaos Experiments |
-| -------- | ------------------------------------------- | ----------------- |
-| AWS | Amazon Web Services platform. Includes EKS. | None |
-| GCP | Google Cloud Platform. Includes GKE. | None |
-| Azure | Microsoft Azure platform. Includes AKS. | None |
+| -------- | ------------------------------------------- | --------------------------------------------------------------------------- |
+| AWS | Amazon Web Services platform. Includes EKS. | [ec2-terminate](chaostoolkit-aws-ec2-terminate.md), [ebs-loss](ebs-loss.md) |

@@ -138,7 +138,7 @@ This section describes the fields in the ChaosSchedule spec and the possible val
 </tr>
 <tr>
 <th>Range</th>
-<td><i>user-defined</i> (type: string)(pattern: {"{"}number{"}"}m", {"{"}number{"}"}h").</td>
+<td><i>user-defined</i> (type: string)(pattern: "{'{'}number{'}'}m", "{'{'}number{'}'}h").</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -165,7 +165,7 @@ This section describes the fields in the ChaosSchedule spec and the possible val
 </tr>
 <tr>
 <th>Range</th>
-<td><i>user-defined</i> (type: string)(pattern: [{"{"}day_name{"}"},{"{"}day_name{"}"}...]).</td>
+<td><i>user-defined</i> (type: string)(pattern: [{'{'}day_name{'}'},{'{'}day_name{'}'}...]).</td>
 </tr>
 <tr>
 <th>Default</th>
@@ -173,7 +173,7 @@ This section describes the fields in the ChaosSchedule spec and the possible val
 </tr>
 <tr>
 <th>Notes</th>
-<td>The <code>includedDays</code> in the spec specifies a (comma-separated) list of days of the week at which chaos is allowed to take place. {"{"}day_name{"}"} is to be specified with the first 3 letters of the name of day such as <code>Mon</code>, <code>Tue</code> etc.</td>
+<td>The <code>includedDays</code> in the spec specifies a (comma-separated) list of days of the week at which chaos is allowed to take place. {'{'}day_name{'}'} is to be specified with the first 3 letters of the name of the day such as <code>Mon</code>, <code>Tue</code> etc.</td>
 </tr>
 </table>
@@ -192,7 +192,7 @@ This section describes the fields in the ChaosSchedule spec and the possible val
 </tr>
 <tr>
 <th>Range</th>
-<td><i>{"{"}hour_number{"}"} will range from 0 to 23</i> (type: string)(pattern: {"{"}hour_number{"}"}-{"{"}hour_number{"}"}).</td>
+<td><i>{'{'}hour_number{'}'} will range from 0 to 23</i> (type: string)(pattern: {'{'}hour_number{'}'}-{'{'}hour_number{'}'}).</td>
 </tr>
 <tr>
 <th>Default</th>

@@ -1,7 +1,7 @@
 ---
 id: Kubernetes-Chaostoolkit-AWS
 title: ChaosToolKit AWS EC2 Experiment Details
-sidebar_label: EC2 Terminate
+sidebar_label: ChaosToolKit AWS EC2 Terminate
 ---
 ---
@@ -57,7 +57,7 @@ sidebar_label: EC2 Terminate
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit single, random EC2 terminate experiment with Application uptime </td>
-<td> Executing via label name app={"<"}{">"} </td>
+<td> Executing via label name app=&lt;&gt; </td>
 <td> ec2-delete.json</td>
 </tr>
 </table>
@@ -74,7 +74,7 @@ sidebar_label: EC2 Terminate
 ## Prepare chaosServiceAccount
-- Based on your use case pick one of the choices from here `https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/k8-aws-ec2-terminate`
+- Based on your use case pick one of the choices from here `https://github.com/litmuschaos/chaos-charts/tree/v1.10.x/charts/kube-aws/k8-aws-ec2-terminate`
 ### Sample Rbac Manifest for Service Owner use case
@@ -340,18 +340,18 @@ spec:
 ### Watch Chaos progress
-- View application pod termination & recovery by setting up a watch on the pods in the application namespace
+- View AWS EC2 instance termination & recovery by setting up a watch on the nodes, or verify in the AWS console
 `watch kubectl get pods`
 ### Check ChaosExperiment Result
-- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
+- Check whether the application is resilient to the ChaosToolKit AWS EC2 termination, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
-`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
+`kubectl describe chaosresult k8-aws-ec2-terminate-k8-aws-ec2-terminate -n <chaos-namespace>`
 ### Check ChaosExperiment logs
 - Check the log and result for the existing experiment
-`kubectl log -f k8-pod-delete-<> -n <chaos-namespace>`
+`kubectl logs -f k8-aws-ec2-terminate-<hash-value> -n <chaos-namespace>`
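The ChaosResult naming rule above (`<ChaosEngine-Name>-<ChaosExperiment-Name>`) can be sketched in shell; here both names happen to be `k8-aws-ec2-terminate`, which is why the name in the `describe` command is doubled:

```shell
engine="k8-aws-ec2-terminate"      # ChaosEngine name used in this experiment
experiment="k8-aws-ec2-terminate"  # ChaosExperiment name
chaosresult="${engine}-${experiment}"
echo "kubectl describe chaosresult ${chaosresult} -n <chaos-namespace>"
```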

@@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-alb-ingress-controller
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - alb-ingress-controller
---
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-alb-ingress-controller/experiment.yaml)
- Ensure you have the default nginx application set up in the default namespace (if you are using a specific namespace, execute the steps below in that namespace)
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment executes against the kube-system namespace
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of ChaosToolKit replicas based on the provided namespace and label, with an endpoint
- Tests deployment sanity check with steady state hypothesis pre and post pod failures
- Service resolution will fail if application replicas are not present.
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Placeholder namespace from where the chaos experiment is executed </td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case pick one of the choices from here `https://hub.litmuschaos.io/kube-components/k8-alb-ingress-controller`
  - Service owner use case
  - Install the rbac for the cluster in the namespace from where you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample Rbac Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-alb-ingress-controller/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["jobs", "deployments", "daemonsets"]
    verbs: ["create", "list", "get", "patch", "delete"]
  - apiGroups: ["", "litmuschaos.io"]
    resources:
      [
        "pods",
        "configmaps",
        "events",
        "services",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
        "deployments",
        "jobs",
      ]
    verbs: ["get", "create", "update", "patch", "delete", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaos-admin
subjects:
  - kind: ServiceAccount
    name: chaos-admin
    namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- By default it is:
```
appinfo:
  appns: default
  applabel: 'app=alb-ingress-controller'
  appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> The chaos namespace in which all infra chaos resources will be created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> The default name of the label </td>
<td> Mandatory </td>
<td> Defaults to `app=alb-ingress-controller`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint where ChaosToolKit will make a call and ensure the application endpoint is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> Type of chaos experiments we want to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> The report of the execution, in JSON format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Report endpoint which can take the json format and submit it</td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-alb-ingress-controller/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-alb-ingress-controller
  namespace: default
spec:
  appinfo:
    appns: "default"
    applabel: "app=alb-ingress-controller"
    appkind: deployment
  annotationCheck: "false"
  engineState: "active"
  chaosServiceAccount: chaos-admin
  monitoring: false
  jobCleanUpPolicy: "retain"
  experiments:
    - name: k8-pod-delete
      spec:
        components:
          env:
            # set chaos namespace
            - name: NAME_SPACE
              value: addon-alb-ingress-controller-ns
            # set chaos label name
            - name: LABEL_NAME
              value: app=alb-ingress-controller
            # pod endpoint
            - name: APP_ENDPOINT
              value: "localhost"
            - name: FILE
              value: "pod-custom-kill-health.json"
            - name: REPORT
              value: "true"
            - name: REPORT_ENDPOINT
              value: "none"
            - name: TEST_NAMESPACE
              value: "default"
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
### Check ChaosExperiment logs
- Check the log and result for the existing experiment
`kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`

@@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-Calico-Node
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - calico-node
---
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-calico-node/experiment.yaml)
- Ensure you have the default nginx application set up in the default namespace (if you are using a specific namespace, execute the steps below in that namespace)
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment executes against the kube-system namespace
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of ChaosToolKit replicas based on the provided namespace and label, with an endpoint
- Tests deployment sanity check with steady state hypothesis pre and post pod failures
- Service resolution will fail if application replicas are not present.
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Placeholder namespace from which the chaos experiment is executed</td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/kube-components/k8-calico-node`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample RBAC Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-calico-node/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
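The ServiceAccount above can be applied and sanity-checked with a short sequence like this (a sketch; the local file name `rbac-admin.yaml` is an assumption based on this page):

```shell
# Apply the admin RBAC manifest and note the ServiceAccount to reference
# later in the ChaosEngine's .spec.chaosServiceAccount field.
MANIFEST="rbac-admin.yaml"
SA_NAME="chaos-admin"
echo "Apply ${MANIFEST}, then reference ServiceAccount ${SA_NAME}"
# Against a live cluster:
# kubectl apply -f "${MANIFEST}"
# kubectl get serviceaccount "${SA_NAME}" -n default
# kubectl get clusterrolebinding "${SA_NAME}"
```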
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- The default is:
```yaml
appinfo:
appns: default
applabel: 'k8s-app=calico-node'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> The chaos namespace in which all infra chaos resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> The label name of the target pods </td>
<td> Mandatory </td>
<td> Defaults to calico-node </td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint where ChaosToolKit will make a call and ensure the application endpoint is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> Type of chaos experiment to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in json format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Report endpoint which can accept the json report and submit it</td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-calico-node/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-calico-node
namespace: default
spec:
appinfo:
appns: "default"
applabel: "k8s-app=calico-node"
appkind: deployment
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: k8s-app=calico-node
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-custom-kill-health.json"
- name: REPORT
value: "true"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
### Check ChaosExperiment logs
- Check the logs and result for the experiment
`kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`
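Putting the result and log checks together, the ChaosResult name can be derived and queried as follows (a sketch; the `litmus` namespace and the `experimentstatus.verdict` field shown in the comments are assumptions, substitute your chaos namespace):

```shell
# ChaosResult name is derived as <ChaosEngine-Name>-<ChaosExperiment-Name>.
ENGINE_NAME="k8-calico-node"
EXPERIMENT_NAME="k8-pod-delete"
RESULT_NAME="${ENGINE_NAME}-${EXPERIMENT_NAME}"
echo "ChaosResult: ${RESULT_NAME}"
# Against a live cluster:
# kubectl get chaosresult "${RESULT_NAME}" -n litmus \
#   -o jsonpath='{.status.experimentstatus.verdict}'
```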

View File

@ -24,7 +24,7 @@ sidebar_label: Cluster Pod - kiam
## Prerequisites ## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/k8-pod-delete/experiment.yaml) - Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-kiam/experiment.yaml)
- Ensure you have nginx default application setup on default namespace ( if you are using specific namespace please execute below on that namespace) - Ensure you have nginx default application setup on default namespace ( if you are using specific namespace please execute below on that namespace)
## Entry Criteria ## Entry Criteria
@ -56,37 +56,37 @@ sidebar_label: Cluster Pod - kiam
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td> <td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name app={"<>"} </td> <td> Executing via label name app=&lt;&gt; </td>
<td> pod-app-kill-count.json</td> <td> pod-app-kill-count.json</td>
</tr> </tr>
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment </td> <td> ChaosToolKit single, random pod delete experiment </td>
<td> Executing via label name app={"<>"} </td> <td> Executing via label name app=&lt;&gt; </td>
<td> pod-app-kill-health.json </td> <td> pod-app-kill-health.json </td>
</tr> </tr>
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td> <td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via Custom label name {"<"}custom{"}>"}={"<>"} </td> <td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
<td> pod-app-kill-count.json</td> <td> pod-app-kill-count.json</td>
</tr> </tr>
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment </td> <td> ChaosToolKit single, random pod delete experiment </td>
<td> Executing via Custom label name {"<"}custom{"}>"}={"<>"} </td> <td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
<td> pod-app-kill-health.json </td> <td> pod-app-kill-health.json </td>
</tr> </tr>
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit All pod delete experiment with health validation </td> <td> ChaosToolKit All pod delete experiment with health validation </td>
<td> Executing via Custom label name app={"<>"} </td> <td> Executing via Custom label name app=&lt;&gt; </td>
<td> pod-app-kill-all.json </td> <td> pod-app-kill-all.json </td>
</tr> </tr>
<tr> <tr>
<td> ChaosToolKit </td> <td> ChaosToolKit </td>
<td> ChaosToolKit All pod delete experiment with health validation</td> <td> ChaosToolKit All pod delete experiment with health validation</td>
<td> Executing via Custom label name {"<"}custom{"}>"}={"<>"} </td> <td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
<td> pod-custom-kill-all.json </td> <td> pod-custom-kill-all.json </td>
</tr> </tr>
<tr> <tr>
@ -109,13 +109,13 @@ sidebar_label: Cluster Pod - kiam
## Prepare chaosServiceAccount ## Prepare chaosServiceAccount
- Based on your use case pick one of the choice from here `https://github.com/sumitnagal/chaos-charts/tree/testing/charts/chaostoolkit/k8-pod-delete` - Based on your use case pick one of the choice from here `https://hub.litmuschaos.io/kube-components/k8-kiam`
- Service owner use case - Service owner use case
- Install the rbac for cluster in namespace from where you are executing the experiments `kubectl apply Cluster/rbac-admin.yaml` - Install the rbac for cluster in namespace from where you are executing the experiments `kubectl apply rbac-admin.yaml`
### Sample Rbac Manifest for Service Owner use case ### Sample Rbac Manifest for Service Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/k8-pod-delete/Cluster/rbac-admin.yaml yaml" [embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-kiam/rbac-admin.yaml yaml"
```yaml ```yaml
apiVersion: v1 apiVersion: v1
@ -132,59 +132,26 @@ metadata:
labels: labels:
name: chaos-admin name: chaos-admin
rules: rules:
- apiGroups: - apiGroups: ["", "apps", "batch"]
[ resources: ["jobs", "deployments", "daemonsets"]
"", verbs: ["create", "list", "get", "patch", "delete"]
"apps", - apiGroups: ["", "litmuschaos.io"]
"batch",
"extensions",
"litmuschaos.io",
"openebs.io",
"storage.k8s.io",
]
resources: resources:
[ [
"chaosengines",
"chaosexperiments",
"chaosresults",
"configmaps",
"cstorpools",
"cstorvolumereplicas",
"events",
"jobs",
"persistentvolumeclaims",
"persistentvolumes",
"pods", "pods",
"pods/exec",
"pods/log",
"secrets",
"storageclasses",
"chaosengines",
"chaosexperiments",
"chaosresults",
"configmaps", "configmaps",
"cstorpools",
"cstorvolumereplicas",
"daemonsets",
"deployments",
"events", "events",
"jobs",
"persistentvolumeclaims",
"persistentvolumes",
"pods",
"pods/eviction",
"pods/exec",
"pods/log",
"replicasets",
"secrets",
"services", "services",
"statefulsets", "chaosengines",
"storageclasses", "chaosexperiments",
"chaosresults",
"deployments",
"jobs",
] ]
verbs: ["create", "delete", "get", "list", "patch", "update"] verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""] - apiGroups: [""]
resources: ["nodes"] resources: ["nodes"]
verbs: ["get", "list", "patch"] verbs: ["get", "list"]
--- ---
apiVersion: rbac.authorization.k8s.io/v1 apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding kind: ClusterRoleBinding
@ -267,37 +234,37 @@ subjects:
#### Sample ChaosEngine Manifest #### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/k8-pod-delete/Cluster/engine-kiam-health.yaml yaml" [embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-kiam/engine.yaml yaml"
```yaml ```yaml
# chaosengine.yaml # Generic Chaos engine for Application team, who want to participate in Game Day
apiVersion: litmuschaos.io/v1alpha1 apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine kind: ChaosEngine
metadata: metadata:
name: k8-kiam-health name: k8-calico-node
namespace: default namespace: default
spec: spec:
#ex. values: ns1:name=percona,ns2:run=nginx
appinfo: appinfo:
appns: kube-system appns: "default"
# FYI, To see app label, apply kubectl get pods --show-labels
#applabel: "app=nginx"
applabel: "app=kiam" applabel: "app=kiam"
appkind: deployment appkind: deployment
jobCleanUpPolicy: retain
monitoring: false
annotationCheck: "false" annotationCheck: "false"
engineState: "active" engineState: "active"
chaosServiceAccount: chaos-admin chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments: experiments:
- name: k8-pod-delete - name: k8-pod-delete
spec: spec:
components: components:
env: env:
# set chaos namespace
- name: NAME_SPACE - name: NAME_SPACE
value: kube-system value: kube-system
# set chaos label name
- name: LABEL_NAME - name: LABEL_NAME
value: kiam value: kiam
# pod endpoint
- name: APP_ENDPOINT - name: APP_ENDPOINT
value: "localhost" value: "localhost"
- name: FILE - name: FILE

View File

@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-Wavefront
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - Wavefront
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-wavefront-collector/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, run the steps below in that namespace)
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against the `kube-system` namespace
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of the target replicas, based on the provided namespace and label, with an application endpoint check
- Tests deployment sanity with steady-state hypothesis checks before and after pod failures
- Service resolution will fail if application replicas are not present.
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Placeholder namespace from which the chaos experiment is executed</td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/kube-components/k8-wavefront-collector`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample RBAC Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-wavefront-collector/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- The default is:
```yaml
appinfo:
appns: default
applabel: 'k8s-app=wavefront-collector'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> The chaos namespace in which all infra chaos resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> The label name of the target pods </td>
<td> Mandatory </td>
<td> Defaults to calico-node </td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint where ChaosToolKit will make a call and ensure the application endpoint is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> Type of chaos experiment to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in json format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Report endpoint which can accept the json report and submit it</td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-wavefront-collector/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-calico-node
namespace: default
spec:
appinfo:
appns: "default"
applabel: "k8s-app=wavefront-collector"
appkind: deployment
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace, we assume you are using the kube-system if not modify the below namespace
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: k8s-app=wavefront-collector
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-custom-kill-health.json"
- name: REPORT
value: "true"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
### Check ChaosExperiment logs
- Check the logs and result for the experiment
`kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`

View File

@ -0,0 +1,278 @@
---
id: Kubernetes-Chaostoolkit-Cluster-active-monitor-controller
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - active-monitor-controller
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/k8-pod-delete/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, run the steps below in that namespace)
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against the `kube-system` namespace
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of the target replicas, based on the provided namespace and label, with an application endpoint check
- Tests deployment sanity with steady-state hypothesis checks before and after pod failures
- Service resolution will fail if application replicas are not present.
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Placeholder namespace from which the chaos experiment is executed</td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/keiko/k8-keiko-active-monitor-controller`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample RBAC Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/k8-pod-delete/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- The default is:
```yaml
appinfo:
appns: default
applabel: 'app.kubernetes.io/name=addon-active-monitor'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> The chaos namespace in which all infra chaos resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> The label name of the target pods </td>
<td> Mandatory </td>
<td> Defaults to `app.kubernetes.io/name=addon-active-monitor`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint where ChaosToolKit will make a call and ensure the application endpoint is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> Type of chaos experiment to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in json format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Report endpoint which can accept the json report and submit it</td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/k8-pod-delete/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos-app-health
namespace: default
spec:
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
annotationCheck: "true"
engineState: "active"
chaosServiceAccount: k8-pod-delete-sa
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace
- name: NAME_SPACE
value: "default"
# set chaos label name
- name: LABEL_NAME
value: "nginx"
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-app-kill-health.json"
- name: REPORT
value: "true"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
### Check ChaosExperiment logs
- Check the logs and result for the experiment
`kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`

View File

@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-kube-proxy
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - kube-proxy
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-kube-proxy/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, run the steps below in that namespace)
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against the `kube-system` namespace
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of the target replicas, based on the provided namespace and label, with an application endpoint check
- Tests deployment sanity with steady-state hypothesis checks before and after pod failures
- Service resolution will fail if application replicas are not present.
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Placeholder namespace from which the chaos experiment is executed</td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/kube-components/k8-kube-proxy`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample Rbac Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-kube-proxy/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- By default it is:
```
appinfo:
appns: default
applabel: 'k8s-app=kube-proxy'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> Namespace in which all the chaos infrastructure resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> Label (key=value) used to select the target pods </td>
<td> Mandatory </td>
<td> Defaults to `k8s-app=kube-proxy`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint that ChaosToolKit calls to verify the application is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> The chaos experiment definition (JSON file) to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in JSON format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Endpoint that accepts the JSON report </td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-kube-proxy/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kube-proxy
namespace: default
spec:
appinfo:
appns: "default"
applabel: "k8s-app=kube-proxy"
appkind: deployment
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: k8s-app=kube-proxy
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-custom-kill-health.json"
- name: REPORT
value: "true"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
  `kubectl describe chaosresult k8-kube-proxy-k8-pod-delete -n <chaos-namespace>`
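Given the `<ChaosEngine-Name>-<ChaosExperiment-Name>` naming rule above, the verdict can also be pulled out non-interactively; the resource name below assumes the sample engine name `k8-kube-proxy`:

```shell
# Print the recorded verdict (e.g. Pass/Fail); name assumes engine "k8-kube-proxy"
kubectl get chaosresult k8-kube-proxy-k8-pod-delete -n <chaos-namespace> -o yaml | grep -i verdict
```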
### Check ChaosExperiment logs
- Check the logs and result of the experiment
  `kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`
```diff
@@ -24,7 +24,7 @@ sidebar_label: Service Pod - Application
 ## Prerequisites
 - Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/k8-pod-delete/experiment.yaml)
+- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/k8-pod-delete/experiment.yaml)
 - Ensure you have nginx default application setup on default namespace ( if you are using specific namespace please execute below on that namespace)
 ## Entry Criteria
@@ -55,37 +55,37 @@ sidebar_label: Service Pod - Application
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit single, random pod delete experiment with count </td>
-<td> Executing via label name app={"<>"} </td>
+<td> Executing via label name app=&lt;&gt; </td>
 <td> pod-app-kill-count.json</td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit single, random pod delete experiment </td>
-<td> Executing via label name app={"<>"}</td>
+<td> Executing via label name app=&lt;&gt; </td>
 <td> pod-app-kill-health.json </td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit single, random pod delete experiment with count </td>
-<td> Executing via Custom label name {"<"}custom{">"}={"<>"} </td>
+<td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
 <td> pod-app-kill-count.json</td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit single, random pod delete experiment </td>
-<td> Executing via Custom label name {"<"}custom{">"}={"<>"} </td>
+<td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
 <td> pod-app-kill-health.json </td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit All pod delete experiment with health validation </td>
-<td> Executing via Custom label name app={"<>"} </td>
+<td> Executing via Custom label name app=&lt;&gt; </td>
 <td> pod-app-kill-all.json </td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit All pod delete experiment with health validation</td>
-<td> Executing via Custom label name {"<"}custom{">"}={"<>"} </td>
+<td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
 <td> pod-custom-kill-all.json </td>
 </tr>
 </table>
@@ -119,6 +119,7 @@ metadata:
   namespace: default
   labels:
     name: k8-pod-delete-sa
+    app.kubernetes.io/part-of: litmus
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
@@ -127,19 +128,25 @@ metadata:
   namespace: default
   labels:
     name: k8-pod-delete-sa
+    app.kubernetes.io/part-of: litmus
 rules:
-  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
+  - apiGroups: ["", "apps", "batch"]
+    resources: ["jobs", "deployments", "daemonsets"]
+    verbs: ["create", "list", "get", "patch", "delete"]
+  - apiGroups: ["", "litmuschaos.io"]
     resources:
       [
        "pods",
-        "deployments",
-        "jobs",
        "configmaps",
+        "events",
+        "services",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
+        "deployments",
+        "jobs",
      ]
-    verbs: ["create", "list", "get", "patch", "update", "delete"]
+    verbs: ["get", "create", "update", "patch", "delete", "list"]
   - apiGroups: [""]
     resources: ["nodes"]
     verbs: ["get", "list"]
@@ -151,6 +158,7 @@ metadata:
   namespace: default
   labels:
     name: k8-pod-delete-sa
+    app.kubernetes.io/part-of: litmus
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
```
@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-prometheus-k8s-prometheus
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - prometheus-k8s-prometheus
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-k8s-prometheus/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, execute the steps below in that namespace)
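The experiment CR listed in the prerequisites can be installed with a single `kubectl apply`; the `litmus` namespace below is an assumption — substitute the namespace where your chaos experiments are kept:

```shell
# Install the k8-pod-delete experiment CR from the chaos hub (namespace is an assumption)
kubectl apply -f "https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-k8s-prometheus/experiment.yaml" -n litmus

# Verify it is available
kubectl get chaosexperiments -n litmus
```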
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against a system namespace such as `kube-system`
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of ChaosToolKit replicas, based on the provided namespace and label, with an endpoint health check
- Tests deployment sanity with steady-state hypothesis checks before and after the pod failures
- Service resolution will fail if application replicas are not present
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Namespace placeholder from which the chaos experiment is executed </td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/kube-components/k8-prometheus-k8s-prometheus`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample Rbac Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-k8s-prometheus/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- By default it is:
```
appinfo:
appns: addon-metricset-ns
applabel: 'app=prometheus'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> Namespace in which all the chaos infrastructure resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> Label (key=value) used to select the target pods </td>
<td> Mandatory </td>
<td> Defaults to `app=prometheus`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint that ChaosToolKit calls to verify the application is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> The chaos experiment definition (JSON file) to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in JSON format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Endpoint that accepts the JSON report </td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-k8s-prometheus/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-prometheus-k8s-prometheus
namespace: default
spec:
appinfo:
appns: "default"
applabel: "app=prometheus"
appkind: deployment
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace, we assume you are using the addon-metricset-ns if not modify the below namespace
- name: NAME_SPACE
value: addon-metricset-ns
# set chaos label name
- name: LABEL_NAME
value: prometheus
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-app-kill-health.json"
- name: REPORT
value: "false"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n addon-metricset-ns`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
  `kubectl describe chaosresult k8-prometheus-k8s-prometheus-k8-pod-delete -n <chaos-namespace>`
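Given the `<ChaosEngine-Name>-<ChaosExperiment-Name>` naming rule above, the verdict can also be pulled out non-interactively; the resource name below assumes the sample engine name `k8-prometheus-k8s-prometheus`:

```shell
# Print the recorded verdict (e.g. Pass/Fail); name assumes engine "k8-prometheus-k8s-prometheus"
kubectl get chaosresult k8-prometheus-k8s-prometheus-k8-pod-delete -n <chaos-namespace> -o yaml | grep -i verdict
```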
### Check ChaosExperiment logs
- Check the logs and result of the experiment
  `kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`
@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-prometheus-operator
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - prometheus-operator
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-operator/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, execute the steps below in that namespace)
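The experiment CR listed in the prerequisites can be installed with a single `kubectl apply`; the `litmus` namespace below is an assumption — substitute the namespace where your chaos experiments are kept:

```shell
# Install the k8-pod-delete experiment CR from the chaos hub (namespace is an assumption)
kubectl apply -f "https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-operator/experiment.yaml" -n litmus

# Verify it is available
kubectl get chaosexperiments -n litmus
```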
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against a system namespace such as `kube-system`
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of ChaosToolKit replicas, based on the provided namespace and label, with an endpoint health check
- Tests deployment sanity with steady-state hypothesis checks before and after the pod failures
- Service resolution will fail if application replicas are not present
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Namespace placeholder from which the chaos experiment is executed </td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/kube-components/k8-prometheus-operator`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample Rbac Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-operator/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- By default it is:
```
appinfo:
appns: addon-metricset-ns
applabel: 'k8s-app=prometheus-operator'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> Namespace in which all the chaos infrastructure resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> Label (key=value) used to select the target pods </td>
<td> Mandatory </td>
<td> Defaults to `k8s-app=prometheus-operator`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint that ChaosToolKit calls to verify the application is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> The chaos experiment definition (JSON file) to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in JSON format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Endpoint that accepts the JSON report </td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-operator/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-prometheus-operator
namespace: default
spec:
appinfo:
appns: "default"
applabel: "k8s-app=prometheus-operator"
appkind: deployment
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: chaos-admin
monitoring: false
jobCleanUpPolicy: "retain"
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace, we assume you are using the addon-metricset-ns if not modify the below namespace
- name: NAME_SPACE
value: addon-metricset-ns
# set chaos label name
- name: LABEL_NAME
value: k8s-app=prometheus-operator
# pod endpoint
- name: APP_ENDPOINT
value: "localhost"
- name: FILE
value: "pod-custom-kill-health.json"
- name: REPORT
value: "false"
- name: REPORT_ENDPOINT
value: "none"
- name: TEST_NAMESPACE
value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n addon-metricset-ns`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
  `kubectl describe chaosresult k8-prometheus-operator-k8-pod-delete -n <chaos-namespace>`
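Given the `<ChaosEngine-Name>-<ChaosExperiment-Name>` naming rule above, the verdict can also be pulled out non-interactively; the resource name below assumes the sample engine name `k8-prometheus-operator`:

```shell
# Print the recorded verdict (e.g. Pass/Fail); name assumes engine "k8-prometheus-operator"
kubectl get chaosresult k8-prometheus-operator-k8-pod-delete -n <chaos-namespace> -o yaml | grep -i verdict
```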
### Check ChaosExperiment logs
- Check the logs and result of the experiment
  `kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`
@ -0,0 +1,271 @@
---
id: Kubernetes-Chaostoolkit-Cluster-prometheus-pushgateway
title: ChaosToolKit Cluster Level Pod Delete Experiment Details in kube-system
sidebar_label: Cluster Pod - prometheus-pushgateway
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit Cluster Level Pod delete experiment </td>
<td> Kubeadm, Minikube </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `k8-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-pushgateway/experiment.yaml)
- Ensure you have the default nginx application deployed in the `default` namespace (if you are using a specific namespace, execute the steps below in that namespace)
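The experiment CR listed in the prerequisites can be installed with a single `kubectl apply`; the `litmus` namespace below is an assumption — substitute the namespace where your chaos experiments are kept:

```shell
# Install the k8-pod-delete experiment CR from the chaos hub (namespace is an assumption)
kubectl apply -f "https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-components/k8-prometheus-pushgateway/experiment.yaml" -n litmus

# Verify it is available
kubectl get chaosexperiments -n litmus
```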
## Entry Criteria
- Application replicas are healthy before chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
- This experiment is executed against a system namespace such as `kube-system`
## Exit Criteria
- Application replicas are healthy after chaos injection
- Service resolution works successfully as determined by deploying a sample nginx application and a custom liveness app querying the nginx application health end point
## Details
- Causes graceful pod failure of ChaosToolKit replicas, based on the provided namespace and label, with an endpoint health check
- Tests deployment sanity with steady-state hypothesis checks before and after the pod failures
- Service resolution will fail if application replicas are not present
### Use Cases for executing the experiment
<table>
<tr>
<th> Type </th>
<th> Experiment </th>
<th> Details </th>
<th> json </th>
</tr>
<tr>
<td> ChaosToolKit </td>
<td> ChaosToolKit single, random pod delete experiment with count </td>
<td> Executing via label name k8s-app=&lt;&gt; </td>
<td> pod-custom-kill-health.json</td>
</tr>
<tr>
<td> TEST_NAMESPACE </td>
<td> Namespace placeholder from which the chaos experiment is executed </td>
<td> Optional </td>
<td> Defaults to `default` </td>
</tr>
</table>
## Integrations
- Pod failures can be effected using one of these chaos libraries: `litmus`
## Steps to Execute the ChaosExperiment
- This ChaosExperiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
## Prepare chaosServiceAccount
- Based on your use case, pick one of the choices from `https://hub.litmuschaos.io/generic/k8-prometheus-pushgateway`
- Service owner use case
- Install the RBAC for the cluster in the namespace from which you are executing the experiments: `kubectl apply -f rbac-admin.yaml`
### Sample Rbac Manifest for Cluster Owner use case
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-pushgateway/rbac-admin.yaml yaml"
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["jobs", "deployments", "daemonsets"]
verbs: ["create", "list", "get", "patch", "delete"]
- apiGroups: ["", "litmuschaos.io"]
resources:
[
"pods",
"configmaps",
"events",
"services",
"chaosengines",
"chaosexperiments",
"chaosresults",
"deployments",
"jobs",
]
verbs: ["get", "create", "update", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- By default it is:
```
appinfo:
appns: addon-metricset-ns
applabel: 'k8s-app=prometheus-pushgateway'
appkind: deployment
```
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> NAME_SPACE </td>
<td> Namespace in which all the chaos infrastructure resources are created </td>
<td> Mandatory </td>
<td> Defaults to `kube-system` </td>
</tr>
<tr>
<td> LABEL_NAME </td>
<td> Label (key=value) used to select the target pods </td>
<td> Mandatory </td>
<td> Defaults to `k8s-app=prometheus-pushgateway`</td>
</tr>
<tr>
<td> APP_ENDPOINT </td>
<td> Endpoint that ChaosToolKit calls to verify the application is healthy </td>
<td> Mandatory </td>
<td> Defaults to localhost </td>
</tr>
<tr>
<td> FILE </td>
<td> The chaos experiment definition (JSON file) to execute </td>
<td> Mandatory </td>
<td> Defaults to `pod-custom-kill-health.json` </td>
</tr>
<tr>
<td> REPORT </td>
<td> Whether to generate an execution report in JSON format </td>
<td> Optional </td>
<td> Defaults to `false` </td>
</tr>
<tr>
<td> REPORT_ENDPOINT </td>
<td> Endpoint that accepts the JSON report </td>
<td> Optional </td>
<td> Defaults to a Kafka topic set up for chaos, but can support any reporting database </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-components/k8-prometheus-pushgateway/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-prometheus-pushgateway
  namespace: default
spec:
  appinfo:
    appns: "default"
    applabel: "k8s-app=prometheus-pushgateway"
    appkind: deployment
  annotationCheck: "false"
  engineState: "active"
  chaosServiceAccount: chaos-admin
  monitoring: false
  jobCleanUpPolicy: "retain"
  experiments:
    - name: k8-pod-delete
      spec:
        components:
          env:
            # set chaos namespace, we assume you are using the addon-metricset-ns if not modify the below namespace
            - name: NAME_SPACE
              value: addon-metricset-ns
            # set chaos label name
            - name: LABEL_NAME
              value: k8s-app=prometheus-pushgateway
            # pod endpoint
            - name: APP_ENDPOINT
              value: "localhost"
            - name: FILE
              value: "pod-custom-kill-health.json"
            - name: REPORT
              value: "false"
            - name: REPORT_ENDPOINT
              value: "none"
            - name: TEST_NAMESPACE
              value: "default"
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos progress
- View ChaosToolKit pod terminations & recovery by setting up a watch on the ChaosToolKit pods in the application namespace
`watch kubectl get pods -n kube-system`
### Check ChaosExperiment Result
- Check whether the application is resilient to the ChaosToolKit pod failure, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult k8-pod-delete -n <chaos-namespace>`
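As a quick sanity check, the ChaosResult name derivation described above can be computed for the sample engine in this document (engine `k8-prometheus-pushgateway`, experiment `k8-pod-delete`):

```shell
# Derive the ChaosResult name as <ChaosEngine-Name>-<ChaosExperiment-Name>
ENGINE="k8-prometheus-pushgateway"
EXPERIMENT="k8-pod-delete"
echo "${ENGINE}-${EXPERIMENT}"
# prints: k8-prometheus-pushgateway-k8-pod-delete
```

If the ChaosResult in your cluster follows this derivation, the printed name is what `kubectl describe chaosresult <name> -n <chaos-namespace>` expects.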
### Check ChaosExperiment logs
- Check the logs and result of the experiment
  `kubectl logs -f k8-pod-delete-<> -n <chaos-namespace>`
@ -56,13 +56,13 @@ sidebar_label: Application Service
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit Micro Service delete with count validation </td>
-<td> Executing via Custom label name app={"<>"} </td>
+<td> Executing via Custom label name app=&lt;&gt; </td>
 <td> service-app-kill-count.json </td>
 </tr>
 <tr>
 <td> ChaosToolKit </td>
 <td> ChaosToolKit Micro Service delete with health validation</td>
-<td> Executing via Custom label name {"<"}custom{">"}={"<"}custom{">"} </td>
+<td> Executing via Custom label name &lt;custom&gt;=&lt;&gt; </td>
 <td> service-app-kill-health.json </td>
 </tr>
 </table>
@ -96,6 +96,7 @@ metadata:
 namespace: default
 labels:
   name: k8-pod-delete-sa
+  app.kubernetes.io/part-of: litmus
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
@ -104,19 +105,25 @@ metadata:
 namespace: default
 labels:
   name: k8-pod-delete-sa
+  app.kubernetes.io/part-of: litmus
 rules:
-  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
+  - apiGroups: ["", "apps", "batch"]
+    resources: ["jobs", "deployments", "daemonsets"]
+    verbs: ["create", "list", "get", "patch", "delete"]
+  - apiGroups: ["", "litmuschaos.io"]
     resources:
       [
         "pods",
-        "deployments",
-        "jobs",
         "configmaps",
+        "events",
+        "services",
         "chaosengines",
         "chaosexperiments",
         "chaosresults",
+        "deployments",
+        "jobs",
       ]
-    verbs: ["create", "list", "get", "patch", "update", "delete"]
+    verbs: ["get", "create", "update", "patch", "delete", "list"]
   - apiGroups: [""]
     resources: ["nodes"]
     verbs: ["get", "list"]
@ -128,6 +135,7 @@ metadata:
 namespace: default
 labels:
   name: k8-pod-delete-sa
+  app.kubernetes.io/part-of: litmus
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
@ -1,23 +0,0 @@
---
id: community
title: Join Litmus Community
sidebar_label: Community
---
---
Litmus community is a subset of the larger Kubernetes community. Have a question? Want to stay in touch with the happenings on Chaos Engineering on Kubernetes? Join `#litmus` channel on Kubernetes Slack.
<br/><br/>
<a href="https://kubernetes.slack.com/messages/CNXNB0ZTN" target="_blank"><img src={require('./assets/join-community.png').default} width="400"/></a>
<br/>
<br/>
<hr/>
<br/>
<br/>
@ -24,7 +24,7 @@ sidebar_label: Container Kill
 ## Prerequisites
 - Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `container-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/container-kill/experiment.yaml)
+- Ensure that the `container-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/container-kill/experiment.yaml)
 ## Entry Criteria
@ -36,20 +36,22 @@ sidebar_label: Container Kill
 ## Details
-- litmus lib in docker runtime details
-  - It can kill the container of multiple pods in parallel (can be tuned by `PODS_AFFECTED_PERC` env). It kill the container by sending SIGKILL termination signal to its docker socket (hence docker runtime is required)
+- litmus LIB Details:
+  - It supports docker, containerd and crio container runtime.
+  - It can kill the container of multiple pods in parallel (can be tuned by `PODS_AFFECTED_PERC` env).
+  - Containers are killed using the `docker kill` in docker runtime and `crictl stop` command in containerd or crio runtime.
+  - container-kill is run as a pod on the application node. It have ability to kill the application containers multiple times. Which can be varied by `TOTAL_CHAOS_DURATION` and `CHAOS_INTERVAL`.
+- pumba LIB Details:
+  - It support only docker container runtime.
+  - It can kill the container of multiple pods in parallel (can be tuned by `PODS_AFFECTED_PERC` env). It kill the container by sending SIGKILL termination signal to its docker socket (hence docker runtime is required).
   - Containers are killed using the `kill` command provided by [pumba](https://github.com/alexei-led/pumba)
   - Pumba is run as a pod on the application node. It have ability to kill the application containers multiple times. Which can be varied by `TOTAL_CHAOS_DURATION` and `CHAOS_INTERVAL`.
-- litmus chaoslib in containerd and crio runtime details
-  - It can kill the container of multiple pods in parallel (can be tuned by `PODS_AFFECTED_PERC` env).
-  - Containers are killed using the `crictl stop` command.
-  - container-kill is run as a pod on the application node. It have ability to kill the application containers multiple times. Which can be varied by `TOTAL_CHAOS_DURATION` and `CHAOS_INTERVAL`.
 - Tests deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application
 - Good for testing recovery of pods having side-car containers
 ## Integrations
-- Container kill is achieved using the `litmus` chaos library
+- Container kill supports `litmus` and `pumba` LIB.
 - The container runtime can be choose via setting `CONTAINER_RUNTIME` env. supported values: `docker`, `containerd`, `crio`
 - The desired pumba and litmus image can be configured in the env variable `LIB_IMAGE`.
@ -63,7 +65,7 @@ sidebar_label: Container Kill
 - Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
-#### Sample Rbac Manifest
+#### Sample RBAC Manifest
 [embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/rbac.yaml yaml"
@ -120,6 +122,8 @@ subjects:
 namespace: default
 ```
+**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
 ### Prepare ChaosEngine
 - Provide the application info in `spec.appinfo`
@ -159,13 +163,13 @@ subjects:
 <td> PODS_AFFECTED_PERC </td>
 <td> The Percentage of total pods to target </td>
 <td> Optional </td>
-<td> Defaults to 0% (corresponds to 1 replica) </td>
+<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
 </tr>
 <tr>
-<td> TARGET_POD </td>
+<td> TARGET_PODS </td>
-<td> Name of the application pod subjected to container kill chaos</td>
+<td> Comma separated list of application pod name subjected to container kill chaos</td>
 <td> Optional </td>
-<td> If not provided it will select from the appLabel provided</td>
+<td> If not provided, it will select target pods randomly based on provided appLabels</td>
 </tr>
 <tr>
 <td> LIB_IMAGE </td>
@ -177,7 +181,7 @@ subjects:
 <td> LIB </td>
 <td> The category of lib use to inject chaos </td>
 <td> Optional </td>
-<td> Default value: litmus, only litmus supported </td>
+<td> Default value: litmus, supported values: pumba and litmus </td>
 </tr>
 <tr>
 <td> RAMP_TIME </td>
@ -193,21 +197,21 @@ subjects:
 </tr>
 <tr>
 <td> SOCKET_PATH </td>
-<td> Path of the containerd/crio socket file </td>
+<td> Path of the containerd/crio/docker socket file </td>
 <td> Optional </td>
-<td> Defaults to `/run/containerd/containerd.sock` </td>
+<td> Defaults to `/var/run/docker.sock` </td>
 </tr>
 <tr>
 <td> CONTAINER_RUNTIME </td>
 <td> container runtime interface for the cluster</td>
 <td> Optional </td>
-<td> Defaults to docker, supported values: docker, containerd, crio </td>
+<td> Defaults to docker, supported values: docker, containerd and crio for litmus and only docker for pumba LIB </td>
 </tr>
 <tr>
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
@ -226,8 +230,6 @@ spec:
 annotationCheck: "true"
 # It can be active/stop
 engineState: "active"
-#ex. values: ns1:name=percona,ns2:run=nginx
-auxiliaryAppInfo: ""
 appinfo:
   appns: "default"
   applabel: "app=nginx"
@ -241,10 +243,6 @@ spec:
 spec:
   components:
     env:
-      # specify the name of the container to be killed
-      - name: TARGET_CONTAINER
-        value: "nginx"
       # provide the chaos interval
       - name: CHAOS_INTERVAL
         value: "10"
@ -254,15 +252,14 @@ spec:
 value: "20"
 # provide the name of container runtime
-# it supports docker, containerd, crio
+# for litmus LIB, it supports docker, containerd, crio
-# default to docker
+# for pumba LIB, it supports docker only
 - name: CONTAINER_RUNTIME
   value: "docker"
 # provide the socket file path
-# applicable only for containerd runtime
 - name: SOCKET_PATH
-  value: "/run/containerd/containerd.sock"
+  value: "/var/run/docker.sock"
 ```
 ### Create the ChaosEngine Resource
@ -1,7 +1,7 @@
 ---
-id: "coredns-pod-delete"
+id: coredns-pod-delete
-title: "CoreDNS Pod Delete Experiment Details"
+title: CoreDNS Pod Delete Experiment Details
-sidebar_label: "CoreDNS Pod Delete"
+sidebar_label: CoreDNS Pod Delete
 ---
 ---
@ -24,7 +24,7 @@ sidebar_label: "CoreDNS Pod Delete"
 ## Prerequisites
 - Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `coredns-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/coredns/coredns-pod-delete/experiment.yaml)
+- Ensure that the `coredns-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/coredns/coredns-pod-delete/experiment.yaml)
 ## Entry Criteria
@ -167,7 +167,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
@ -1,7 +1,7 @@
 ---
-id: "cStor-pool-chaos"
+id: cStor-pool-chaos
-title: "cStor Pool Chaos Experiment Details"
+title: cStor Pool Chaos Experiment Details
-sidebar_label: "cStor Pool Chaos"
+sidebar_label: cStor Pool Chaos
 ---
 ---
@ -24,7 +24,7 @@ sidebar_label: "cStor Pool Chaos"
 ## Prerequisites
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`).If not, install from [here](https://litmuschaos.github.io/litmus/litmus-operator-latest.yaml)
-- Ensure that the `openebs-pool-pod-failure` experiment resource is available in the cluster by executing `kubectl get chaosexperiments -n openebs` in the openebs namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-pod-failure/experiment.yaml)
+- Ensure that the `openebs-pool-pod-failure` experiment resource is available in the cluster by executing `kubectl get chaosexperiments -n openebs` in the openebs namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-pod-failure/experiment.yaml)
 ## Entry Criteria
@ -137,7 +137,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
@ -1,7 +1,7 @@
 ---
-id: "dashboard-overview"
+id: dashboard-overview
-title: "CI/E2E Result Visualization Portal"
+title: CI/E2E Result Visualization Portal
-sidebar_label: "Overview"
+sidebar_label: Overview
 ---
 ---
@ -12,13 +12,15 @@ Below are some key points to remember before understanding how to write a new ch
 > ChaosCharts repository : https://github.com/litmuschaos/chaos-charts
 >
-> Litmusbooks repository : https://github.com/litmuschaos/litmus-ansible/tree/master/experiments
+> Litmus-Go repository : https://github.com/litmuschaos/litmus-go/tree/master/experiments
 >
 > Website rendering code repository: https://github.com/litmuschaos/charthub.litmuschaos.io
-The experiments & chaos libraries are typically written in Ansible, though not mandatory. Ensure that
+The experiments & chaos libraries are typically written in Go, though not mandatory. Ensure that
 the experiments can be executed in a container & can read/update the litmuschaos custom resources. For example,
-if you are writing an experiment in Go, use this [clientset](https://github.com/litmuschaos/chaos-operator/tree/master/pkg/client)
+if you are writing an experiment in Go, use this [clientset](https://github.com/litmuschaos/chaos-operator/tree/master/pkg/client).
+Litmus Experiment contains the logic of pre-checks, chaos-injection, litmus-probes, post-checks, and result-updates.
+Typically, these are accompanied by a Kubernetes job that can execute the respective experiment.
 <hr/>
@ -30,7 +32,7 @@ A group of ChaosExperiments put together in a YAML file. Each group or chart has
 that holds data such as `ChartVersion`, `Contributors`, `Description`, `links` etc.., This metadata is rendered on the ChartHub.
 A ChaosChart also consists of a `package` manifest that is an index of available experiments in the chart.
-Here is an example of the [ChartServiceVersion](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.chartserviceversion.yaml) & [package](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.package.yaml) manifests of the generic ChaosChart. -->
+Here is an example of the [ChartServiceVersion](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.chartserviceversion.yaml) & [package](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/generic.package.yaml) manifests of the generic ChaosChart.
 ### ChaosExperiment
@ -40,22 +42,11 @@ to their default values.
 Here is an example chaos experiment CR for a [pod-delete](https://github.com/litmuschaos/chaos-charts/blob/master/charts/generic/pod-delete/experiment.yaml) experiment
-### Litmus Book
-Litmus book is an `ansible` playbook that encompasses the logic of pre-checks, chaos-injection, post-checks, and result-updates.
-Typically, these are accompanied by a Kubernetes job that can execute the respective playbook.
-Here is an example of the litmus book for the [pod-delete](https://github.com/litmuschaos/litmus-ansible/tree/master/experiments/generic/pod_delete) experiment.
-### Chaos functions
-The `ansible` business logic inside Litmus books can make use of readily available chaos functions. The chaos functions are available as `task-files` which are wrapped in one of the chaos libraries. See [plugins](plugins.md) for more details.
 <hr/>
 ## Developing a ChaosExperiment
-A detailed how-to guide on developing chaos experiments is available [here](https://github.com/litmuschaos/litmus-ansible/tree/master/contribute/developer_guide)
+A detailed how-to guide on developing chaos experiments is available [here](https://github.com/litmuschaos/litmus-go/tree/master/contribute/developer-guide)
 <br/>
@ -1,7 +1,7 @@
 ---
-id: "disk-fill"
+id: disk-fill
-title: "Disk Fill Experiment Details"
+title: Disk Fill Experiment Details
-sidebar_label: "Disk Fill"
+sidebar_label: Disk Fill
 ---
 ---
@ -25,7 +25,7 @@ sidebar_label: "Disk Fill"
 - Ensure that Kubernetes Version > 1.13
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/disk-fill/experiment.yaml)
+- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/disk-fill/experiment.yaml)
 - Cluster must run docker container runtime
 - Appropriate Ephemeral Storage Requests and Limits should be set for the application before running the experiment.
 An example specification is shown below:
@ -142,6 +142,8 @@ subjects:
 namespace: default
 ```
+**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
 ### Prepare ChaosEngine
 - Provide the application info in `spec.appinfo`
@ -183,16 +185,16 @@ subjects:
 <td> Defaults to 60s </td>
 </tr>
 <tr>
-<td> TARGET_POD </td>
+<td> TARGET_PODS </td>
-<td> Name of the application pod subjected to disk fill chaos</td>
+<td> Comma separated list of application pod name subjected to disk fill chaos</td>
 <td> Optional </td>
-<td> If not provided it will select from the appLabel provided</td>
+<td> If not provided, it will select target pods randomly based on provided appLabels</td>
 </tr>
 <tr>
 <td> PODS_AFFECTED_PERC </td>
 <td> The Percentage of total pods to target </td>
 <td> Optional </td>
-<td> Defaults to 0% (corresponds to 1 replica) </td>
+<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
 </tr>
 <tr>
 <td> LIB </td>
@ -222,7 +224,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
@ -1,7 +1,7 @@
 ---
-id: "disk-loss"
+id: disk-loss
-title: "Disk Loss Experiment Details"
+title: Disk Loss Experiment Details
-sidebar_label: "Disk Loss"
+sidebar_label: Disk Loss
 ---
 ---
@ -24,7 +24,7 @@ sidebar_label: "Disk Loss"
 ## Prerequisites
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `disk-loss` experiment resource is available in the cluster by `kubectl get chaosexperiments` in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/disk-loss/experiment.yaml" target="_blank">here</a>
+- Ensure that the `disk-loss` experiment resource is available in the cluster by `kubectl get chaosexperiments` in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/disk-loss/experiment.yaml" target="_blank">here</a>
 - Ensure to create a Kubernetes secret having the gcloud/aws access configuration(key) in the namespace of `CHAOS_NAMESPACE`.
 - There should be administrative access to the platform on which the cluster is hosted, as the recovery of the affected node could be manual. Example gcloud access to the project
@ -198,7 +198,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>
@ -284,7 +284,7 @@ spec:
 ## Check Chaos Experiment Result
-- Check whether the application is resilient to the disk loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: {"<"}ChaosEngine-Name{">"}-{"<"}ChaosExperiment-Name{">"}.
+- Check whether the application is resilient to the disk loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: &lt;ChaosEngine-Name&gt;-&lt;ChaosExperiment-Name&gt;.
 `kubectl describe chaosresult nginx-chaos-disk-loss -n <CHAOS_NAMESPACE>`
@ -1,7 +1,7 @@
 ---
-id: "docker-service-kill"
+id: docker-service-kill
-title: "Docker Service Kill Experiment Details"
+title: Docker Service Kill Experiment Details
-sidebar_label: "Docker Service Kill"
+sidebar_label: Docker Service Kill
 ---
 ---
@ -24,7 +24,7 @@ sidebar_label: "Docker Service Kill"
 ## Prerequisites
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
-- Ensure that the `docker-service-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/docker-service-kill/experiment.yaml)
+- Ensure that the `docker-service-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/docker-service-kill/experiment.yaml)
 - Ensure that the node on which application pod is running should be cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
 - Get node names against the applications pods: `kubectl get pods -o wide`
@ -154,7 +154,7 @@ subjects:
 <td> INSTANCE_ID </td>
 <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
 <td> Optional </td>
-<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td>
+<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
 </tr>
 </table>

website/docs/ebs-loss.md Normal file

@ -0,0 +1,258 @@
---
id: ebs-loss
title: EBS Loss Experiment Details
sidebar_label: EBS Loss
---
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> Kube AWS </td>
<td> EBS volume loss against specified application </td>
<td> EKS </td>
</tr>
</table>
## Prerequisites
- Ensure that Kubernetes Version > 1.13
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `ebs-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-aws/ebs-loss/experiment.yaml)
- Ensure that you have sufficient AWS access to attach or detach an ebs volume from the instance.
- Ensure that you create a Kubernetes secret holding the AWS access configuration (key) in the `CHAOS_NAMESPACE`. A sample secret file looks like:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: cloud-secret
type: Opaque
stringData:
cloud_config.yml: |-
# Add the cloud AWS credentials respectively
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXX
```
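The same secret is usually generated from a local credentials file rather than hand-written. The snippet below only sketches the expected file layout (the keys are placeholders); you would then feed it to `kubectl create secret generic cloud-secret --from-file=cloud_config.yml -n <CHAOS_NAMESPACE>`, which stores the file content under the `cloud_config.yml` key, matching the manifest above:

```bash
# Sketch: a local cloud_config.yml with placeholder AWS credentials.
cat > cloud_config.yml <<'EOF'
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXX
EOF
# Both credential keys must be present under the [default] profile.
grep -c '^aws_' cloud_config.yml
```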
## Entry-Criteria
- Application pods are healthy before chaos injection and the EBS volume is attached to the instance.
## Exit-Criteria
- Application pods are healthy post chaos injection and the EBS volume is attached to the instance.
## Details
- Causes chaos by detaching the EBS volume from the node or EC2 instance for a certain chaos duration, disrupting the state of infra resources.
- Causes the Pod to get evicted if the Pod exceeds its ephemeral storage limit.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the application pod
## Integrations
- EBS Loss can be effected using the chaos library: `litmus`, which makes use of the AWS SDK to attach/detach an EBS volume from the target instance.
- The desired chaoslib can be selected by setting the above options as value for the env variable `LIB`
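Since only the `litmus` lib is supported for this experiment, the override is effectively a no-op, but this hedged ChaosEngine fragment illustrates where `LIB` would be set:

```yaml
# Fragment of a ChaosEngine spec: selecting the chaoslib via the LIB env.
experiments:
  - name: ebs-loss
    spec:
      components:
        env:
          - name: LIB
            value: "litmus"
```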
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-aws/ebs-loss/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ebs-loss-sa
namespace: default
labels:
name: ebs-loss-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ebs-loss-sa
labels:
name: ebs-loss-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"secrets",
"events",
"pods/log",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ebs-loss-sa
labels:
name: ebs-loss-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ebs-loss-sa
subjects:
- kind: ServiceAccount
name: ebs-loss-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> EC2_INSTANCE_ID </td>
<td> Instance Id of the target ec2 instance.</td>
<td> Mandatory </td>
<td> </td>
</tr>
<tr>
<td> EBS_VOL_ID </td>
<td> The EBS volume id attached to the given instance </td>
<td> Mandatory </td>
<td> </td>
</tr>
<tr>
<td> DEVICE_NAME </td>
<td> The device name you want to mount </td>
<td> Mandatory </td>
<td> Defaults to '/dev/sdb'</td>
</tr>
<tr>
<td> TOTAL_CHAOS_DURATION </td>
<td> The time duration for chaos insertion (sec) </td>
<td> Optional </td>
<td> Defaults to 60s </td>
</tr>
<tr>
<td> REGION </td>
<td> The region name of the target instance</td>
<td> Optional </td>
<td> </td>
</tr>
<tr>
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-aws/ebs-loss/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: ebs-loss-sa
monitoring: false
# It can be retain/delete
jobCleanUpPolicy: "delete"
experiments:
- name: ebs-loss
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "60"
# Instance ID of the target ec2 instance
- name: EC2_INSTANCE_ID
value: ""
# provide EBS volume id attached to the given instance
- name: EBS_VOL_ID
value: ""
# Enter the device name you want to mount (only for AWS).
- name: DEVICE_NAME
value: "/dev/sdb"
# provide the region name of the instance
- name: REGION
value: ""
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
- If the chaos experiment is not executed, refer to the [troubleshooting](https://docs.litmuschaos.io/docs/faq-troubleshooting/)
section to identify the root cause and fix the issues.
### Watch Chaos progress
- View the status of the pods as they are subjected to ebs loss.
`watch -n 1 kubectl get pods -n <application-namespace>`
- Monitor the attachment status for ebs volume from AWS CLI.
`aws ec2 describe-volumes --volume-ids <vol-id>`
- You can also use aws console to keep a watch over ebs attachment status.
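The attachment state can also be checked programmatically from the CLI output. The JSON below is a trimmed, hypothetical sample of what `aws ec2 describe-volumes` returns (IDs are placeholders); during chaos the attachment entry disappears or shows a detaching state, and after recovery it should read `attached`:

```bash
# Trimmed sample of `aws ec2 describe-volumes --volume-ids <vol-id>` output (hypothetical IDs).
cat > volume.json <<'EOF'
{"Volumes":[{"VolumeId":"vol-0abcd","Attachments":[{"InstanceId":"i-0123","State":"attached"}]}]}
EOF
# Pull out the attachment state from the response.
grep -o '"State":"[a-z]*"' volume.json
```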
### Check Chaos Experiment Result
- Check whether the application is resilient to the ebs loss, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-ebs-loss -n <application-namespace>`
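The result name is a simple concatenation, which matters because of the 64-character limit noted for `INSTANCE_ID` above. A quick sketch, using the engine/experiment names from this doc's samples and a hypothetical instance id:

```bash
# ChaosResult CR name = <ChaosEngine-Name>-<ChaosExperiment-Name>, plus -<INSTANCE_ID> if set.
engine="nginx-chaos"
experiment="ebs-loss"
instance_id="04-05-2020-9-00"
result="${engine}-${experiment}-${instance_id}"
echo "$result"
# The overall name should stay under 64 characters.
[ "${#result}" -lt 64 ] && echo "length OK (${#result})"
```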
### EBS Loss Experiment Demo
- A sample recording of this experiment execution will be added soon.


@ -0,0 +1,232 @@
---
id: ec2-terminate
title: EC2 Terminate Experiment Details
sidebar_label: EC2 Terminate
---
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> Kube AWS </td>
<td> Termination of an EC2 instance for a certain chaos duration</td>
<td> EKS </td>
</tr>
</table>
## Prerequisites
- Ensure that Kubernetes Version > 1.13
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `ec2-terminate` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kube-aws/ec2-terminate/experiment.yaml)
- Ensure that you have sufficient AWS access to stop and start an ec2 instance.
- Ensure that you create a Kubernetes secret holding the AWS access configuration (key) in the `CHAOS_NAMESPACE`. A sample secret file looks like:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: cloud-secret
type: Opaque
stringData:
cloud_config.yml: |-
# Add the cloud AWS credentials respectively
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXX
```
## Entry-Criteria
- EC2 instance is healthy before chaos injection.
## Exit-Criteria
- EC2 instance is healthy post chaos injection.
## Details
- Causes termination of an EC2 instance before bringing it back to running state after the specified chaos duration.
- It helps to check the performance of the application/process running on the ec2 instance.
## Integrations
- EC2 Terminate can be effected using the chaos library: `litmus`, which makes use of the AWS SDK to start/stop an EC2 instance.
- The desired chaoslib can be selected by setting the above options as value for the env variable `LIB`
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-aws/ec2-terminate/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ec2-terminate-sa
namespace: default
labels:
name: ec2-terminate-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ec2-terminate-sa
labels:
name: ec2-terminate-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["", "litmuschaos.io", "batch"]
resources:
[
"pods",
"jobs",
"secrets",
"events",
"pods/log",
"pods/exec",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs: ["create", "list", "get", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ec2-terminate-sa
labels:
name: ec2-terminate-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ec2-terminate-sa
subjects:
- kind: ServiceAccount
name: ec2-terminate-sa
namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> EC2_INSTANCE_ID </td>
<td> Instance Id of the target ec2 instance.</td>
<td> Mandatory </td>
<td> </td>
</tr>
<tr>
<td> TOTAL_CHAOS_DURATION </td>
<td> The time duration for chaos insertion (sec) </td>
<td> Optional </td>
<td> Defaults to 60s </td>
</tr>
<tr>
<td> REGION </td>
<td> The region name of the target instance</td>
<td> Optional </td>
<td> </td>
</tr>
<tr>
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/kube-aws/ec2-terminate/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
annotationCheck: "false"
engineState: "active"
chaosServiceAccount: ec2-terminate-sa
monitoring: false
# It can be retain/delete
jobCleanUpPolicy: "delete"
experiments:
- name: ec2-terminate
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: "60"
# Instance ID of the target ec2 instance
- name: EC2_INSTANCE_ID
value: ""
# provide the region name of the instance
- name: REGION
value: ""
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
- If the chaos experiment is not executed, refer to the [troubleshooting](https://docs.litmuschaos.io/docs/faq-troubleshooting/)
section to identify the root cause and fix the issues.
### Watch Chaos progress
- Monitor the ec2 state from AWS CLI.
`aws ec2 describe-instance-status --instance-ids <instance-id>`
- You can also use aws console to keep a watch over the instance state.
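The instance state can likewise be extracted from the CLI response. The JSON below is a trimmed, hypothetical sample of `aws ec2 describe-instance-status` output (the instance ID is a placeholder); expect `stopping`/`stopped` during chaos and `running` once the instance is brought back:

```bash
# Trimmed sample of `aws ec2 describe-instance-status --instance-ids <instance-id>` output.
cat > status.json <<'EOF'
{"InstanceStatuses":[{"InstanceId":"i-0123","InstanceState":{"Code":16,"Name":"running"}}]}
EOF
# Pull out the instance state name from the response.
grep -o '"Name":"[a-z]*"' status.json
```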
### Check Chaos Experiment Result
- Check whether the application is resilient to the ec2-terminate, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-ec2-terminate -n <application-namespace>`
### EC2 Terminate Experiment Demo
- A sample recording of this experiment execution will be added soon.


@ -1,7 +1,7 @@
---
id: faq-general
title: The What, Why & How of Litmus
sidebar_label: General
---
---


@ -1,7 +1,7 @@
---
id: faq-troubleshooting
title: Troubleshooting Litmus
sidebar_label: Troubleshooting
---
---
@ -131,7 +131,7 @@ perform the following checks:
kubectl describe chaosengine <chaosengine-name> -n <namespace>
```
Look for the event with reason _Summary_ with message _&lt;experiment-name&gt; experiment has been failed_
- Check the logs of the chaos-experiment pod.


@ -1,7 +1,7 @@
---
id: features
title: Litmus Features
sidebar_label: Features
---
---


@ -1,9 +1,12 @@
---
id: getstarted
title: Getting Started with Litmus
slug: "/"
sidebar_label: Introduction
---
---
## Pre-requisites
Kubernetes 1.11 or later.
@ -33,7 +36,7 @@ Running chaos on your application involves the following steps:
Apply the LitmusChaos Operator manifest:
```
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
```
The above command installs all the CRDs, required service account configuration, and chaos-operator.
@ -109,10 +112,10 @@ Expected output:
**NOTE**:
- In this guide, we shall describe the steps to inject pod-delete chaos on an nginx application already deployed in the
nginx namespace. If you don't have this setup, you can easily create one by running these two commands:
- Create the nginx namespace: `kubectl create ns nginx`.
- Create an nginx deployment in the nginx namespace: `kubectl create deployment nginx --image nginx -n nginx`.
- In all subsequent steps, please follow these instructions by replacing the nginx namespace and labels with that of your
application.
@ -125,13 +128,13 @@ Expected output:
### Install Chaos Experiments
Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as Kubernetes CRs.
The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>.
The generic chaos experiments such as `pod-delete`, `container-kill`, `pod-network-latency` are available under the Generic Chaos Chart.
This is the first chart you are recommended to install.
```
kubectl apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/experiments.yaml -n nginx
```
Verify if the chaos experiments are installed.
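The manifest URL above follows a single pattern used throughout these docs (version, chart, file). A small sketch of the pattern, useful when you want to pin to a release tag instead of `master`:

```bash
# ChaosHub API URL pattern: /api/chaos/<version>?file=charts/<chart>/<file>
version="master"            # pin to a release (e.g. "1.10.0") for reproducible installs
chart="generic"
file="experiments.yaml"
url="https://hub.litmuschaos.io/api/chaos/${version}?file=charts/${chart}/${file}"
echo "$url"
```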
@ -176,6 +179,7 @@ rules:
"pods",
"deployments",
"pods/log",
"pods/exec",
"events",
"jobs",
"chaosengines",
@ -297,7 +301,7 @@ kubectl apply -f chaosengine.yaml
Describe the ChaosResult CR to know the status of each experiment. The `status.verdict` is set to `Awaited` when the experiment is in progress, eventually changing to either `Pass` or `Fail`.
<strong> NOTE:</strong> ChaosResult CR name will be `&lt;chaos-engine-name&gt;-&lt;chaos-experiment-name&gt;`
```console
kubectl describe chaosresult nginx-chaos-pod-delete -n nginx
@ -312,7 +316,7 @@ kubectl delete chaosengine --all -n <namespace>
```
```console
kubectl delete -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
```
**NOTE**


@ -1,7 +1,7 @@
---
id: gettingstarted
title: Setting Up Litmus
sidebar_label: Getting Started
---
---


@ -1,7 +1,7 @@
---
id: kafka-broker-disk-failure
title: Kafka Broker Disk Failure Experiment Details
sidebar_label: Broker Disk Failure
---
## Experiment Metadata
@ -33,7 +33,7 @@ sidebar_label: "Broker Disk Failure"
Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kafka/kafka-broker-disk-failure/experiment.yaml)
- Create a secret with the gcloud serviceaccount key (placed in a file `cloud_config.yml`) named `kafka-broker-disk-failure` in the namespace where the experiment CRs are created. This is necessary to perform the disk-detach steps from the litmus experiment container.
@ -234,7 +234,7 @@ subjects:
<td> KAFKA_LIVENESS_IMAGE </td>
<td> Image used for liveness message stream </td>
<td> Optional </td>
<td> Image as `&lt;registry_url&gt;/&lt;repository&gt;/&lt;image&gt;:&lt;tag&gt;` </td>
</tr>
<tr>
<td> KAFKA_REPLICATION_FACTOR </td>
@ -264,7 +264,7 @@ subjects:
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>


@ -1,7 +1,7 @@
---
id: kafka-broker-pod-failure
title: Kafka Broker Pod Failure Experiment Details
sidebar_label: Broker Pod Failure
---
## Experiment Metadata
@ -24,7 +24,7 @@ sidebar_label: "Broker Pod Failure"
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `kafka-broker-pod-failure` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kafka/kafka-broker-pod-failure/experiment.yaml)
- Ensure that Kafka & Zookeeper are deployed as Statefulsets
- If Confluent/Kudo Operators have been used to deploy Kafka, note the instance name, which will be
used as the value of `KAFKA_INSTANCE_NAME` experiment environment variable
@ -34,7 +34,7 @@ sidebar_label: "Broker Pod Failure"
Zookeeper uses this to construct a path in which kafka cluster data is stored.
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/kafka/kafka-broker-pod-failure/experiment.yaml)
## Entry Criteria
@ -211,7 +211,7 @@ subjects:
<td> KAFKA_LIVENESS_IMAGE </td>
<td> Image used for liveness message stream </td>
<td> Optional </td>
<td> Image as `&lt;registry_url&gt;/&lt;repository&gt;/&lt;image&gt;:&lt;tag&gt;` </td>
</tr>
<tr>
<td> KAFKA_REPLICATION_FACTOR </td>
@ -259,7 +259,7 @@ subjects:
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>


@ -1,7 +1,7 @@
---
id: kubelet-service-kill
title: Kubelet Service Kill Experiment Details
sidebar_label: Kubelet Service Kill
---
---
@ -24,8 +24,8 @@ sidebar_label: "Kubelet Service Kill"
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `kubelet-service-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/kubelet-service-kill/experiment.yaml)
- Ensure that the node specified in the experiment ENV variable `TARGET_NODE` (the node whose kubelet service is to be killed) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
- Get node names against the application pods: `kubectl get pods -o wide`
- Cordon the node: `kubectl cordon <nodename>`
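The two steps above can be sketched together. The pod listing below is a hypothetical sample of `kubectl get pods -o wide` output, captured to a file for illustration; against a real cluster you would pipe the command output directly:

```bash
# Hypothetical `kubectl get pods -o wide` output (pod and node names are placeholders).
cat > pods.txt <<'EOF'
NAME          READY   STATUS    RESTARTS   AGE   IP          NODE
nginx-abc12   1/1     Running   0          5m    10.0.1.4    node-01
EOF
# Extract the node hosting the target pod, then print the cordon command to run.
node=$(awk '$1 ~ /^nginx/ {print $NF}' pods.txt)
echo "kubectl cordon $node"
```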
@ -90,6 +90,7 @@ rules:
"pods",
"jobs",
"pods/log",
"pods/exec",
"events",
"chaosengines",
"chaosexperiments",
@ -117,6 +118,8 @@ subjects:
namespace: default
```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
@ -134,7 +137,7 @@ subjects:
<th> Notes </th>
</tr>
<tr>
<td> TARGET_NODE </td>
<td> Name of the node whose kubelet service needs to be killed </td>
<td> Mandatory </td>
<td> </td>
@ -167,7 +170,7 @@ subjects:
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>
@ -208,8 +211,8 @@ spec:
- name: TOTAL_CHAOS_DURATION
value: "90" # in seconds
# provide the target node name
- name: TARGET_NODE
value: "node-01"
```


@ -1,7 +1,7 @@
---
id: litmus-demo
title: Chaos Engineering in a Microservices Environment
sidebar_label: Litmus Demo
---
---
@ -16,18 +16,22 @@ more involved exploration of LitmusChaos framework for your own business applica
## The Demo Environment ## The Demo Environment
Two cluster types are supported: Three cluster types are supported:
- KinD: A 3 node KinD cluster pre-installed with the sock-shop demo application, litmus chaos CRDs, operator, and a minimal - **KinD:** A 3 node KinD cluster pre-installed with the sock-shop demo application, litmus chaos CRDs, operator, and a minimal
set of prebuilt chaos experiment CRs is setup. set of prebuilt chaos experiment CRs is setup.
- GKE: A 3 node GKE cluster pre-installed with the sock-shop demo application, litmus chaos CRDs, operator, and the full generic - **GKE:** A 3 node GKE cluster pre-installed with the sock-shop demo application, litmus chaos CRDs, operator, and the full generic
Kubernetes chaos experiment suite is setup.
- **EKS:** A 3 node EKS cluster pre-installed with the sock-shop demo application, litmus chaos CRDs, operator, and the full generic
Kubernetes chaos experiment suite is set up. Kubernetes chaos experiment suite is set up.
## Prerequisites ## Prerequisites
- Docker, Kubectl & Python3.7+ (with the PyYaml package) are all you will need for running the KinD platform based chaos demo. - Docker 18.09 or greater (when using the containerized setup)
If GKE is your platform choice, you may need to configure gcloud.
- Docker, Kubectl & Python3.7+ (with the PyYaml package) are all you will need for running the KinD platform based chaos demo. If GKE/EKS is your platform choice, you may need to configure gcloud/aws (when using a non-containerized setup).
## Getting started: ## Getting started:
@ -35,24 +39,147 @@ To get started with any of the above platforms we will follow the following step
- Clone litmus demo repository in your system. - Clone litmus demo repository in your system.
``` ```bash
git clone https://github.com/litmuschaos/litmus-demo.git git clone https://github.com/litmuschaos/litmus-demo.git
cd litmus-demo cd litmus-demo
``` ```
- Install the demo environment using one of the platforms with start argument We can set up the litmus demo in two different ways:
<table>
<tr>
<th>Containerized Setup</th>
<td>All the dependencies are installed inside a container; you only need Docker installed to run it. It has been tested on the KinD platform.</td>
</tr>
<tr>
<th>Non-Containerized Setup</th>
<td>We need to make sure that all the prerequisites are installed manually before running the demo script.</td>
</tr>
</table>
### Containerized Setup
You can set up & run the demo from a containerized environment by following the steps below:
```bash
make build
```
OR
```bash
docker build -t litmuschaos/litmus-demo .
```
Run the docker container interactively; you can then run any of the commands mentioned in the usage section with python3.
```bash
make exec
```
OR
```bash
docker run -v /var/run/docker.sock:/var/run/docker.sock --net="host" -it --entrypoint bash litmuschaos/litmus-demo
```
Now you can run the litmus demo script in the following ways:
- **Execing into the container:** `make exec` opens a shell inside the litmus demo container, where you can use the `./manage.py` script directly to run the demo.
```bash
$ make exec
------------------
--> Login to Litmus Demo container
bash-5.0# ./manage.py -h
usage: manage.py [-h] {start,test,list,stop} ...
Spin up a Demo Environment on Kubernetes.
positional arguments:
{start,test,list,stop}
start Start a Cluster with the demo environment deployed.
test Run Litmus ChaosEngine Experiments inside litmus demo environment.
list List all available Litmus ChaosEngine Experiments available to run.
stop Shutdown the Cluster with the demo environment deployed.
optional arguments:
-h, --help show this help message and exit
bash-5.0#
```
- **Without execing into the container:** You can run the demo script from outside the container using the `runcmd` script.
```bash
$ ./runcmd -h
running -h inside container
usage: manage.py [-h] {start,test,list,stop} ...
Spin up a Demo Environment on Kubernetes.
positional arguments:
{start,test,list,stop}
start Start a Cluster with the demo environment deployed.
test Run Litmus ChaosEngine Experiments inside litmus demo environment.
list List all available Litmus ChaosEngine Experiments available to run.
stop Shutdown the Cluster with the demo environment deployed.
optional arguments:
-h, --help show this help message and exit
```
### Non-Containerized Setup
To get started with the non-containerized setup, clone the litmus-demo repository and run the demo script as described in the Usage section below.
```bash
git clone https://github.com/litmuschaos/litmus-demo.git
cd litmus-demo
```
Now we can use the `manage.py` python script to set up the litmus demo environment.
```bash
$ ./manage.py -h
usage: manage.py [-h] {start,test,list,stop} ...
Spin up a Demo Environment on Kubernetes.
positional arguments:
{start,test,list,stop}
start Start a Cluster with the demo environment deployed.
test Run Litmus ChaosEngine Experiments inside litmus demo
environment.
list List all available Litmus ChaosEngine Experiments
available to run.
stop Shutdown the Cluster with the demo environment
deployed.
optional arguments:
-h, --help show this help message and exit
```
## Usage
If you are using the containerized setup, follow one of the approaches mentioned above to run the litmus demo. If you want to run the demo script without execing into the container,
replace `./manage.py` with `./runcmd` in the commands below. For a non-containerized setup you can run the commands mentioned below directly.
- Install the demo environment using one of the platforms with the start argument
**For KinD Cluster** **For KinD Cluster**
- Install & bring-up the KinD cluster using the following command - Install & bring-up the KinD cluster using the following command
``` ```bash
./manage.py start --platform kind ./manage.py start --platform kind
``` ```
- Wait for all the pods to get in a ready state. You can monitor this using - Wait for all the pods to get in a ready state. You can monitor this using
``` ```bash
watch kubectl get pods --all-namespaces watch kubectl get pods --all-namespaces
``` ```
@ -61,13 +188,13 @@ To get started with any of the above platforms we will follow the following step
- Get the port of frontend deployment - Get the port of frontend deployment
``` ```bash
kubectl get deploy front-end -n sock-shop -o jsonpath='{.spec.template.spec.containers[?(@.name == "front-end")].ports[0].containerPort}' kubectl get deploy front-end -n sock-shop -o jsonpath='{.spec.template.spec.containers[?(@.name == "front-end")].ports[0].containerPort}'
``` ```
- Perform port forwarding on the port obtained above - Perform port forwarding on the port obtained above
``` ```bash
kubectl port-forward deploy/front-end -n sock-shop 3000:<port-number> (typically, 8079) kubectl port-forward deploy/front-end -n sock-shop 3000:<port-number> (typically, 8079)
``` ```
@ -77,13 +204,13 @@ To get started with any of the above platforms we will follow the following step
- Create the GKE cluster (ensure you have set up access to your gcloud project) - Create the GKE cluster (ensure you have set up access to your gcloud project)
``` ```bash
./manage.py start --platform GKE ./manage.py start --platform GKE
``` ```
- Wait for all the pods to get in a ready state. You can monitor this using - Wait for all the pods to get in a ready state. You can monitor this using
``` ```bash
watch kubectl get pods --all-namespaces watch kubectl get pods --all-namespaces
``` ```
@ -92,7 +219,32 @@ To get started with any of the above platforms we will follow the following step
- After a few min, identify the ingress IP to access the web-ui - After a few min, identify the ingress IP to access the web-ui
```bash
kubectl get ingress basic-ingress --namespace=sock-shop
``` ```
You can access the web application in a few minutes at `http://<ingress-ip>`
**For EKS Cluster**
- Create the EKS cluster (ensure you have set up access to your AWS project)
```bash
./manage.py start --platform EKS --name {EKS_CLUSTER_NAME}
```
- Wait for all the pods to get in a ready state. You can monitor this using
```bash
watch kubectl get pods --all-namespaces
```
Once all pods are in the Running state, we can access the sock-shop application through the web-ui, which helps us
visualize the impact of chaos on the application and whether the application persists after chaos injections.
- After a few min, identify the ingress IP to access the web-ui
```bash
kubectl get ingress basic-ingress --namespace=sock-shop kubectl get ingress basic-ingress --namespace=sock-shop
``` ```
@ -102,35 +254,34 @@ To get started with any of the above platforms we will follow the following step
- To find out the supported tests for a platform, execute the following command: - To find out the supported tests for a platform, execute the following command:
``` ```bash
./manage.py list --platform <kind|gke> ./manage.py list --platform <kind|gke>
``` ```
- For running all experiments, run: - For running all experiments, run:
``` ```bash
./manage.py test --platform <platform-name> ./manage.py test --platform <platform-name>
``` ```
- For running selective experiments - For running selective experiments
``` ```bash
./manage.py test --platform <platform-name> --test <test-name> ./manage.py test --platform <platform-name> --test <test-name>
``` ```
Example: For running the pod-delete experiment. Example: For running the pod-delete experiment.
``` ```bash
./manage.py test --platform kind --test pod-delete ./manage.py test --platform kind --test pod-delete
``` ```
### Chaos Experiment Results ### Chaos Experiment Results
The experiment results (Pass/Fail) are derived based on the simple criteria of app availability post chaos & are summarized The experiment results (Pass/Fail) are derived based on the simple criteria of app availability post chaos & are summarized on the console once the execution completes. You are also encouraged to check the changes to the status of the web-ui & the
on the console once the execution completes. You are also encouraged to check the changes to the status of the web-ui & the respective microservices/pods as the experiment executes to get an idea of the failure injection process and subsequent recovery.
respective microservices/pods as the experiment executes to get an idea of the failure injection process and subsequent recovery.
Get more details about the flags used to configure and run the chaos tests please refer to the paramrter tables in the For more details about the flags used to configure and run the chaos tests, please refer to the parameter tables in the
[test](https://github.com/litmuschaos/litmus-demo#test) section. [test](https://github.com/litmuschaos/litmus-demo#test) section.
## Generate PDF of the experiment result summary ## Generate PDF of the experiment result summary
@ -141,7 +292,7 @@ We can also generate the pdf report of the experiment result summary using <code
./manage.py test --report=yes ./manage.py test --report=yes
``` ```
It will generate a pdf report of name `chaos-report.pdf` in the current location containing ChaosResult summary. It will generate a pdf report named `chaos-report.pdf` in the current location containing the ChaosResult summary.
## Deleting Cluster / Cluster Clean Up ## Deleting Cluster / Cluster Clean Up
@ -149,12 +300,18 @@ It will generate a pdf report of name `chaos-report.pdf` in the current location
For KinD cluster For KinD cluster
``` ```bash
./manage.py --platform kind stop ./manage.py --platform kind stop
``` ```
For GKE cluster For GKE cluster
``` ```bash
./manage.py --platform GKE stop --project {GC_PROJECT} ./manage.py --platform GKE stop --project {GC_PROJECT}
``` ```
For EKS cluster
```bash
./manage.py --platform EKS stop --name {EKS_CLUSTER_NAME} --awsregion {EKS_REGION_NAME}
```
View File
@ -1,7 +1,7 @@
--- ---
id: "litmus-probe" id: litmus-probe
title: "Declarative Approach to Chaos Hypothesis using Litmus Probes" title: Declarative Approach to Chaos Hypothesis using Litmus Probes
sidebar_label: "Litmus Probe" sidebar_label: Litmus Probe
--- ---
--- ---
@ -10,13 +10,14 @@ sidebar_label: "Litmus Probe"
Litmus probes are pluggable checks that can be defined within the ChaosEngine for any chaos experiment. The experiment pods execute these checks based on the mode they are defined in & factor their success as necessary conditions in determining the verdict of the experiment (along with the standard “in-built” checks). Litmus probes are pluggable checks that can be defined within the ChaosEngine for any chaos experiment. The experiment pods execute these checks based on the mode they are defined in & factor their success as necessary conditions in determining the verdict of the experiment (along with the standard “in-built” checks).
Litmus currently supports three types of probes: Litmus currently supports four types of probes:
- **httpProbe:** To query health/downstream URIs - **httpProbe:** To query health/downstream URIs
- **cmdProbe:** To execute any user-desired health-check function implemented as a shell command - **cmdProbe:** To execute any user-desired health-check function implemented as a shell command
- **k8sProbe:** To perform CRUD operations against native & custom Kubernetes resources - **k8sProbe:** To perform CRUD operations against native & custom Kubernetes resources
- **promProbe:** To execute promql queries and match prometheus metrics for specific criteria
These probes can be used in isolation or in several combinations to achieve the desired checks. While the `httpProbe` & `k8sProbe` are fully declarative in the way they are conceived, the `cmdProbe` expects the user to provide a shell command to implement checks that are highly specific to the application use case. These probes can be used in isolation or in several combinations to achieve the desired checks. While the `httpProbe` & `k8sProbe` are fully declarative in the way they are conceived, the `cmdProbe` expects the user to provide a shell command to implement checks that are highly specific to the application use case. `promProbe` expects the user to provide a promql query along with Prometheus service endpoints to check for specific criteria.
The probes can be set up to run in different modes: The probes can be set up to run in different modes:
@ -24,6 +25,7 @@ The probes can be set up to run in different modes:
- **EoT:** Executed at the End of Test as a post-chaos check - **EoT:** Executed at the End of Test as a post-chaos check
- **Edge:** Executed both, before and after the chaos - **Edge:** Executed both, before and after the chaos
- **Continuous:** The probe is executed continuously, with a specified polling interval during the chaos injection. - **Continuous:** The probe is executed continuously, with a specified polling interval during the chaos injection.
- **OnChaos:** The probe is executed continuously, with a specified polling interval, strictly for the duration of chaos
All probes share some common attributes: All probes share some common attributes:
@ -31,6 +33,7 @@ All probes share some common attributes:
- **retry:** The number of times a check is re-run upon failure in the first attempt before declaring the probe status as failed. - **retry:** The number of times a check is re-run upon failure in the first attempt before declaring the probe status as failed.
- **interval:** The period between subsequent retries - **interval:** The period between subsequent retries
- **probePollingInterval:** The time interval for which the continuous probe should sleep after each iteration - **probePollingInterval:** The time interval for which the continuous probe should sleep after each iteration
- **initialDelaySeconds:** Represents the initial waiting time interval for the probes.
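Taken together, these attributes behave roughly like the following retry loop (an illustrative sketch, not the actual litmus-go implementation):

```python
import time

def run_probe(check, probe_timeout, retry, interval, initial_delay=0):
    # Wait initialDelaySeconds before the first attempt, then re-run the
    # check up to `retry` times on failure, sleeping `interval` between tries.
    time.sleep(initial_delay)
    attempts = 1 + retry
    for attempt in range(attempts):
        if check(timeout=probe_timeout):
            return "Pass"
        if attempt < attempts - 1:
            time.sleep(interval)
    return "Fail"
```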
## Types of Litmus Probes ## Types of Litmus Probes
@ -67,13 +70,17 @@ probe:
type: "cmdProbe" type: "cmdProbe"
cmdProbe/inputs: cmdProbe/inputs:
command: "<command>" command: "<command>"
expectedResult: "<expected-result>" comparator:
type: "string" # supports: string, int, float
criteria: "contains" #supports >=,<=,>,<,==,!= for int and contains,equal,notEqual,matches,notMatches for string values
value: "<value-for-criteria-match>"
source: "<repo>/<tag>" # it can be “inline” or any image source: "<repo>/<tag>" # it can be “inline” or any image
mode: "Edge" mode: "Edge"
runProperties: runProperties:
probeTimeout: 5 probeTimeout: 5
interval: 5 interval: 5
retry: 1 retry: 1
initialDelaySeconds: 5
``` ```
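The comparator criteria listed above can be read as follows (a sketch of the matching logic; the authoritative implementation lives in litmus-go and may differ in detail):

```python
import re

def compare(ctype, criteria, actual, value):
    # `actual` is the probe command's output; `value` comes from the
    # comparator spec in the ChaosEngine.
    if ctype in ("int", "float"):
        a, v = float(actual), float(value)
        return {">=": a >= v, "<=": a <= v, ">": a > v,
                "<": a < v, "==": a == v, "!=": a != v}[criteria]
    if criteria == "contains":
        return value in actual
    if criteria == "equal":
        return actual == value
    if criteria == "notEqual":
        return actual != value
    if criteria == "matches":
        return re.search(value, actual) is not None
    if criteria == "notMatches":
        return re.search(value, actual) is None
    raise ValueError(f"unsupported criteria: {criteria}")
```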
### k8sProbe ### k8sProbe
@ -100,7 +107,32 @@ probe:
fieldSelector: "metadata.name=<appResourceName>,status.phase=Running" fieldSelector: "metadata.name=<appResourceName>,status.phase=Running"
labelSelector: "<app-labels>" labelSelector: "<app-labels>"
operation: "present" # it can be present, absent, create, delete operation: "present" # it can be present, absent, create, delete
mode: "EoT" mode: "EOT"
runProperties:
probeTimeout: 5
interval: 5
retry: 1
```
### promProbe
The `promProbe` allows users to run Prometheus queries and match the resulting output against specific conditions. The intent behind this probe is to allow users to define metrics-based SLOs in a declarative way and determine the experiment verdict based on its success. The probe runs the query on a Prometheus server defined by the `endpoint`, and checks whether the output satisfies the specified `criteria`.
The promql query can be provided in the `query` field. In the case of complex queries that span multiple lines, the `queryPath` attribute can be used to provide the link to a file consisting of the query. This file can be made available in the experiment pod via a ConfigMap resource, with the ConfigMap being passed in the [ChaosEngine](https://docs.litmuschaos.io/docs/chaosengine/#experiment-specification) OR the [ChaosExperiment](https://docs.litmuschaos.io/docs/chaosexperiment/#configuration-specification) CR.
<strong>NOTE:</strong> `query` and `queryPath` are mutually exclusive.
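A sketch of a `queryPath`-based probe definition is shown below (the file path is illustrative and depends on where the ConfigMap is mounted in the experiment pod):

```yaml
probe:
  - name: "check-probe-success"
    type: "promProbe"
    promProbe/inputs:
      endpoint: "<prometheus-endpoint>"
      # path of the query file made available via the ConfigMap
      queryPath: "/etc/probe/query.promql"
      comparator:
        criteria: "<="
        value: "<value-for-criteria-match>"
    mode: "Edge"
    runProperties:
      probeTimeout: 5
      interval: 5
      retry: 1
```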
```yaml
probe:
- name: "check-probe-success"
type: "promProbe"
promProbe/inputs:
endpoint: "<prometheus-endpoint>"
query: "<promql-query>"
comparator:
criteria: "==" #supports >=,<=,>,<,==,!= comparison
value: "<value-for-criteria-match>"
mode: "Edge"
runProperties: runProperties:
probeTimeout: 5 probeTimeout: 5
interval: 5 interval: 5
@ -173,7 +205,10 @@ probe:
type: "cmdProbe" type: "cmdProbe"
cmdProbe/inputs: cmdProbe/inputs:
command: "<command>" command: "<command>"
expectedResult: "<expected-result>" comparator:
type: "string"
criteria: "equal"
value: "<value-for-criteria-match>"
source: "inline" source: "inline"
mode: "SOT" mode: "SOT"
runProperties: runProperties:
@ -185,7 +220,10 @@ probe:
cmdProbe/inputs: cmdProbe/inputs:
## probe1's result being used as one of the args in probe2 ## probe1's result being used as one of the args in probe2
command: "<commmand> {{ .probe1.ProbeArtifacts.Register }} <arg2>" command: "<commmand> {{ .probe1.ProbeArtifacts.Register }} <arg2>"
expectedResult: "<expected-result>" comparator:
type: "string"
criteria: "equal"
value: "<value-for-criteria-match>"
source: "inline" source: "inline"
mode: "SOT" mode: "SOT"
runProperties: runProperties:
145
website/docs/litmus-psp.md Normal file
View File
@ -0,0 +1,145 @@
---
id: litmus-psp
title: Using Pod Security Policies with Litmus
sidebar_label: Chaos Pod Security Policies
---
---
While working in environments (clusters) that have restrictive security policies, the default litmuschaos experiment execution procedure may be inhibited.
This is mainly due to the fact that the experiment pods running the chaos injection tasks run with a root user. This, in turn, is necessitated by the mounting
of container runtime-specific socket files from the Kubernetes nodes in order to invoke runtime APIs. While this is not needed for all experiments (a considerable
number of them use purely the K8s API), those involving injection of chaos processes into the network/process namespaces of other containers have this requirement
(ex: netem, stress).
The restrictive policies are often enforced via [pod security policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) (PSP) today, with organizations
opting for the default ["restricted"](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies) policy.
## Applying Pod Security Policies to Litmus Chaos Pods
- To run the litmus pods with operating characteristics described above, first create a custom PodSecurityPolicy that allows the same:
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/pod-security-policy/psp-litmus.yaml yaml"
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: litmus
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: "*"
spec:
privileged: true
# Privilege escalation is allowed, as some experiments require it (runtime socket access).
allowPrivilegeEscalation: true
# Allow core volume types.
volumes:
- "configMap"
- "emptyDir"
- "projected"
- "secret"
- "downwardAPI"
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- "persistentVolumeClaim"
allowedHostPaths:
# substitute this path with the appropriate socket path
# ex: '/var/run/docker.sock', '/run/containerd/containerd.sock', '/run/crio/crio.sock'
- pathPrefix: "/var/run/docker.sock"
# substitute this path with the appropriate container path
# ex: '/var/lib/docker/containers', '/var/lib/containerd/io.containerd.runtime.v1.linux/k8s.io', '/var/lib/containers/storage/overlay/'
- pathPrefix: "/var/lib/docker/containers"
allowedCapabilities:
- "NET_ADMIN"
- "SYS_ADMIN"
hostNetwork: false
hostIPC: false
hostPID: true
runAsUser:
rule: "RunAsAny"
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: "RunAsAny"
supplementalGroups:
rule: "MustRunAs"
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: "MustRunAs"
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
```
**Note**: This PodSecurityPolicy is a sample configuration which works for a majority of use cases. It is left to the user's discretion to modify it based
on the environment. For example, if the experiment doesn't need the socket file to be mounted, `allowedHostPaths` can be excluded from the psp spec. On the
other hand, in case of CRI-O runtime, network-chaos tests need the chaos pods executed in privileged mode. It is also possible that different PSP configs are
used in different namespaces based on ChaosExperiments installed/executed in them.
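For instance, on a containerd-based cluster the `allowedHostPaths` shown above would roughly become the following (exact paths vary by distribution; verify them on your nodes):

```yaml
allowedHostPaths:
  # containerd runtime socket
  - pathPrefix: "/run/containerd/containerd.sock"
  # containerd container state directory
  - pathPrefix: "/var/lib/containerd/io.containerd.runtime.v1.linux/k8s.io"
```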
- Subscribe to the created PSP in the experiment RBAC (or in the [admin-mode](https://docs.litmuschaos.io/docs/admin-mode/#prepare-rbac-manifest) rbac, as applicable).
For example, the pod-delete experiment rbac instrumented with the PSP is shown below:
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-delete/rbac-psp.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-delete-sa
namespace: default
labels:
name: pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-delete-sa
namespace: default
labels:
name: pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"deployments",
"pods/log",
"pods/exec",
"events",
"jobs",
"chaosengines",
"chaosexperiments",
"chaosresults",
]
verbs:
["create", "list", "get", "patch", "update", "delete", "deletecollection"]
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-delete-sa
namespace: default
labels:
name: pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-delete-sa
subjects:
- kind: ServiceAccount
name: pod-delete-sa
namespace: default
```
- Execute the ChaosEngine and verify that the litmus experiment pods are created successfully.
View File
@ -1,7 +1,7 @@
--- ---
id: "logs" id: logs
title: "Capturing & Viewing Logs" title: Capturing & Viewing Logs
sidebar_label: "Log Collection & Analysis" sidebar_label: Log Collection & Analysis
--- ---
--- ---
View File
@ -1,7 +1,7 @@
--- ---
id: "monitoring" id: monitoring
title: "Monitoring" title: Monitoring
sidebar_label: "Monitoring" sidebar_label: Monitoring
--- ---
--- ---
View File
@ -1,7 +1,7 @@
--- ---
id: "namespaced-mode" id: namespaced-mode
title: "Namespaced Mode" title: Namespaced Mode
sidebar_label: "Namespaced Mode" sidebar_label: Namespaced Mode
--- ---
--- ---
View File
@ -1,7 +1,7 @@
--- ---
id: "node-cpu-hog" id: node-cpu-hog
title: "Node CPU Hog Experiment Details" title: Node CPU Hog Experiment Details
sidebar_label: "Node CPU Hog" sidebar_label: Node CPU Hog
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Node CPU Hog"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/node-cpu-hog/experiment.yaml) - Ensure that the `node-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-cpu-hog/experiment.yaml)
- There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project - There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project
## Entry Criteria ## Entry Criteria
@ -88,10 +88,12 @@ rules:
"events", "events",
"chaosengines", "chaosengines",
"pods/log", "pods/log",
"pods/exec",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
] ]
verbs: ["create", "list", "get", "patch", "update", "delete"] verbs:
["create", "list", "get", "patch", "update", "delete", "deletecollection"]
- apiGroups: [""] - apiGroups: [""]
resources: ["nodes"] resources: ["nodes"]
verbs: ["get", "list"] verbs: ["get", "list"]
@ -113,6 +115,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (psp) with the required permissions. The `chaosServiceAccount` can subscribe to it to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -130,8 +134,8 @@ subjects:
<th> Notes </th> <th> Notes </th>
</tr> </tr>
<tr> <tr>
<td> APP_NODE </td> <td> TARGET_NODES </td>
<td> Name of the node subjected to node cpu hog chaos</td> <td> Comma separated list of nodes, subjected to node cpu hog chaos</td>
<td> Mandatory </td> <td> Mandatory </td>
<td> </td> <td> </td>
</tr> </tr>
@ -165,12 +169,24 @@ subjects:
<td> Defaults to <code>2</code> </td> <td> Defaults to <code>2</code> </td>
<td> Optional </td> <td> Optional </td>
<td> </td> <td> </td>
</tr>
<tr>
<td> NODES_AFFECTED_PERC </td>
<td> The Percentage of total nodes to target </td>
<td> Optional </td>
<td> Defaults to 0 (corresponds to 1 node), provide numeric value only </td>
</tr>
<tr>
<td> SEQUENCE </td>
<td> It defines sequence of chaos execution for multiple target nodes </td>
<td> Optional </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr> </tr>
<tr> <tr>
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
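The `NODES_AFFECTED_PERC` default of 0 maps to a single node; a rough sketch of how the percentage likely translates into a target-node count (illustrative, not the exact litmus-go logic):

```python
import math

def target_node_count(total_nodes, nodes_affected_perc):
    # A value of 0 falls back to one node; otherwise take the given
    # percentage of the total, never dropping below one target.
    if nodes_affected_perc == 0:
        return 1
    return max(1, math.floor(total_nodes * nodes_affected_perc / 100))
```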
@ -212,8 +228,8 @@ spec:
- name: NODE_CPU_CORE - name: NODE_CPU_CORE
value: "" value: ""
# ENTER THE NAME OF THE APPLICATION NODE # ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: APP_NODE - name: TARGET_NODES
value: "" value: ""
``` ```
View File
@ -1,7 +1,7 @@
--- ---
id: "node-drain" id: node-drain
title: "Node Drain Experiment Details" title: Node Drain Experiment Details
sidebar_label: "Node Drain" sidebar_label: Node Drain
--- ---
--- ---
@ -24,8 +24,8 @@ sidebar_label: "Node Drain"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-drain` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/node-drain/experiment.yaml) - Ensure that the `node-drain` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-drain/experiment.yaml)
- Ensure that the node specified in the experiment ENV variable `APP_NODE` (the node which will be drained) should be cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps: - Ensure that the node specified in the experiment ENV variable `TARGET_NODE` (the node which will be drained) is cordoned before execution of the chaos experiment (before applying the chaosengine manifest), to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
- Get node names against the applications pods: `kubectl get pods -o wide` - Get node names against the applications pods: `kubectl get pods -o wide`
- Cordon the node `kubectl cordon <nodename>` - Cordon the node `kubectl cordon <nodename>`
@ -89,6 +89,7 @@ rules:
"events", "events",
"chaosengines", "chaosengines",
"pods/log", "pods/log",
"pods/exec",
"daemonsets", "daemonsets",
"pods/eviction", "pods/eviction",
"chaosexperiments", "chaosexperiments",
@ -116,6 +117,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (psp) with the required permissions. The `chaosServiceAccount` can subscribe to it to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -133,7 +136,7 @@ subjects:
<th> Notes </th> <th> Notes </th>
</tr> </tr>
<tr> <tr>
<td> APP_NODE </td> <td> TARGET_NODE </td>
<td> Name of the node to drain </td> <td> Name of the node to drain </td>
<td> Mandatory </td> <td> Mandatory </td>
<td> </td> <td> </td>
@ -160,7 +163,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -198,8 +201,8 @@ spec:
# provide the node labels # provide the node labels
kubernetes.io/hostname: "node02" kubernetes.io/hostname: "node02"
env: env:
# set node name # enter the target node name
- name: APP_NODE - name: TARGET_NODE
value: "node-01" value: "node-01"
``` ```
@ -1,7 +1,7 @@
--- ---
id: "node-io-stress" id: node-io-stress
title: "Node IO Stress Experiment Details" title: Node IO Stress Experiment Details
sidebar_label: "Node IO Stress" sidebar_label: Node IO Stress
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Node IO Stress"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-io-stress` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/node-io-stress/experiment.yaml) - Ensure that the `node-io-stress` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-io-stress/experiment.yaml)
## Entry Criteria ## Entry Criteria
@ -84,12 +84,14 @@ rules:
"pods", "pods",
"jobs", "jobs",
"pods/log", "pods/log",
"pods/exec",
"events", "events",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
] ]
verbs: ["create", "list", "get", "patch", "update", "delete"] verbs:
["create", "list", "get", "patch", "update", "delete", "deletecollection"]
- apiGroups: [""] - apiGroups: [""]
resources: ["nodes"] resources: ["nodes"]
verbs: ["get", "list"] verbs: ["get", "list"]
@ -111,6 +113,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions and subscribe the `chaosServiceAccount` to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -152,10 +156,10 @@ subjects:
<td> Default to 4 </td> <td> Default to 4 </td>
</tr> </tr>
<tr> <tr>
<td> APP_NODE </td> <td> TARGET_NODES </td>
<td> Name of the node subjected to IO stress </td> <td> Comma-separated list of nodes subjected to node IO stress </td>
<td> Optional </td> <td> Mandatory </td>
<td> If not provided. It will select the app node from appinfo randomly</td> <td> </td>
</tr> </tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
@ -174,12 +178,24 @@ subjects:
<td> Period to wait before and after injection of chaos in sec </td> <td> Period to wait before and after injection of chaos in sec </td>
<td> Optional </td> <td> Optional </td>
<td> </td> <td> </td>
</tr>
<tr>
<td> NODES_AFFECTED_PERC </td>
<td> The percentage of total nodes to target </td>
<td> Optional </td>
<td> Defaults to 0 (corresponds to 1 node), provide numeric value only </td>
</tr>
<tr>
<td> SEQUENCE </td>
<td> Defines the sequence of chaos execution for multiple target nodes </td>
<td> Optional </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr> </tr>
<tr> <tr>
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
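The newly added `NODES_AFFECTED_PERC` tunable maps a percentage to a concrete node count. A minimal sketch of the assumed behavior (integer division, with a value of 0 still corresponding to one node):

```shell
# assumed mapping from NODES_AFFECTED_PERC to the number of targeted nodes
TOTAL_NODES=5
NODES_AFFECTED_PERC=40

COUNT=$(( TOTAL_NODES * NODES_AFFECTED_PERC / 100 ))
if [ "$COUNT" -lt 1 ]; then
  COUNT=1   # a value of 0 still targets one node
fi
echo "$COUNT"   # 2 of the 5 nodes are targeted
```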
@ -222,8 +238,8 @@ spec:
- name: FILESYSTEM_UTILIZATION_PERCENTAGE - name: FILESYSTEM_UTILIZATION_PERCENTAGE
value: "10" value: "10"
## enter the name of the desired node ## enter the comma-separated target node names
- name: APP_NODE - name: TARGET_NODES
value: "" value: ""
``` ```
@ -1,7 +1,7 @@
--- ---
id: "node-memory-hog" id: node-memory-hog
title: "Node Memory Hog Experiment Details" title: Node Memory Hog Experiment Details
sidebar_label: "Node Memory Hog" sidebar_label: Node Memory Hog
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Node Memory Hog"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-memory-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/node-memory-hog/experiment.yaml) - Ensure that the `node-memory-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-memory-hog/experiment.yaml)
- There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project - There should be administrative access to the platform on which the Kubernetes cluster is hosted, as the recovery of the affected node could be manual. For example, gcloud access to the GKE project
## Entry Criteria ## Entry Criteria
@ -86,12 +86,14 @@ rules:
"pods", "pods",
"jobs", "jobs",
"pods/log", "pods/log",
"pods/exec",
"events", "events",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
] ]
verbs: ["create", "list", "get", "patch", "update", "delete"] verbs:
["create", "list", "get", "patch", "update", "delete", "deletecollection"]
- apiGroups: [""] - apiGroups: [""]
resources: ["nodes"] resources: ["nodes"]
verbs: ["get", "list"] verbs: ["get", "list"]
@ -113,6 +115,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions and subscribe the `chaosServiceAccount` to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -130,8 +134,8 @@ subjects:
<th> Notes </th> <th> Notes </th>
</tr> </tr>
<tr> <tr>
<td> APP_NODE </td> <td> TARGET_NODES </td>
<td> Name of the node subjected to memory hog </td> <td> Comma-separated list of nodes subjected to node memory hog </td>
<td> Mandatory </td> <td> Mandatory </td>
<td> </td> <td> </td>
</tr> </tr>
@ -164,12 +168,24 @@ subjects:
<td> Period to wait before and after injection of chaos in sec </td> <td> Period to wait before and after injection of chaos in sec </td>
<td> Optional </td> <td> Optional </td>
<td> </td> <td> </td>
</tr>
<tr>
<td> NODES_AFFECTED_PERC </td>
<td> The percentage of total nodes to target </td>
<td> Optional </td>
<td> Defaults to 0 (corresponds to 1 node), provide numeric value only </td>
</tr>
<tr>
<td> SEQUENCE </td>
<td> Defines the sequence of chaos execution for multiple target nodes </td>
<td> Optional </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr> </tr>
<tr> <tr>
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -213,8 +229,8 @@ spec:
- name: MEMORY_PERCENTAGE - name: MEMORY_PERCENTAGE
value: "90" value: "90"
# ENTER THE NAME OF THE APPLICATION NODE # ENTER THE COMMA-SEPARATED TARGET NODE NAMES
- name: APP_NODE - name: TARGET_NODES
value: "" value: ""
``` ```
@ -0,0 +1,279 @@
---
id: node-restart
title: Node Restart Experiment Details
sidebar_label: Node Restart
---
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> Generic </td>
<td> Restart the target node </td>
<td> Kubevirt VMs </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-restart` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-restart/experiment.yaml)
- Create a Kubernetes secret containing the private SSH key for `SSH_USER`, used to connect to `TARGET_NODE`. The secret should be named `id-rsa`, with the private SSH key data stored under the key `ssh-privatekey`. A sample secret is shown below:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: id-rsa
type: Opaque
stringData:
ssh-privatekey: |-
# Add the private key for ssh here
```
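Instead of hand-writing the manifest above, the same secret can be created directly from an existing key file; the key path below is a placeholder:

```shell
# create the id-rsa secret from a local private key file (path is an assumption)
kubectl create secret generic id-rsa \
  --from-file=ssh-privatekey="$HOME/.ssh/id_rsa" \
  --namespace default
```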
## Entry-Criteria
- Application pods should be healthy before chaos injection.
- Target Nodes should be in Ready state before chaos injection.
## Exit-Criteria
- Application pods should be healthy after chaos injection.
- Target Nodes should be in Ready state after chaos injection.
## Details
- Causes chaos by restarting the target node, disrupting its state.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the application pod
## Integrations
- Node Restart can be effected using the chaos library: `litmus`.
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer [Getting Started](getstarted.md#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample Rbac Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-restart/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-restart-sa
namespace: default
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["", "litmuschaos.io", "batch", "apps"]
resources:
[
"pods",
"jobs",
"secrets",
"events",
"chaosengines",
"pods/log",
"pods/exec",
"chaosexperiments",
"chaosresults",
]
verbs:
["create", "list", "get", "patch", "update", "delete", "deletecollection"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-restart-sa
subjects:
- kind: ServiceAccount
name: node-restart-sa
namespace: default
```
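The embedded manifest above can also be applied directly from the chaos-charts repository:

```shell
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-restart/rbac.yaml
```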
**_Note:_** In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions and subscribe the `chaosServiceAccount` to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> LIB_IMAGE </td>
<td> The image used to restart the node </td>
<td> Optional </td>
<td> Defaults to `litmuschaos/go-runner:latest` </td>
</tr>
<tr>
<td> SSH_USER </td>
<td> Name of the SSH user </td>
<td> Mandatory </td>
<td> Defaults to `root` </td>
</tr>
<tr>
<td> TARGET_NODE </td>
<td> Name of the target node, subjected to chaos </td>
<td> Mandatory </td>
<td> </td>
</tr>
<tr>
<td> TARGET_NODE_IP </td>
<td> IP of the target node, subjected to chaos </td>
<td> Mandatory </td>
<td> </td>
</tr>
<tr>
<td> REBOOT_COMMAND </td>
<td> Command used for reboot </td>
<td> Mandatory </td>
<td> Defaults to `sudo systemctl reboot` </td>
</tr>
<tr>
<td> TOTAL_CHAOS_DURATION </td>
<td> The time duration for chaos insertion (sec) </td>
<td> Optional </td>
<td> Defaults to 30s </td>
</tr>
<tr>
<td> RAMP_TIME </td>
<td> Period to wait before injection of chaos in sec </td>
<td> Optional </td>
<td> </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
<td> Optional </td>
<td> Defaults to `litmus`; only `litmus` is supported </td>
</tr>
<tr>
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-restart/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be true/false
annotationCheck: "false"
# It can be active/stop
engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo:
appns: "default"
applabel: "app=nginx"
appkind: "deployment"
chaosServiceAccount: node-restart-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: "delete"
experiments:
- name: node-restart
spec:
components:
nodeSelector:
# provide the node labels
kubernetes.io/hostname: "node02"
env:
# ENTER THE TARGET NODE NAME
- name: TARGET_NODE
value: "node01"
# ENTER THE TARGET NODE IP
- name: TARGET_NODE_IP
value: ""
# ENTER THE USER TO BE USED FOR SSH AUTH
- name: SSH_USER
value: ""
```
### Create the ChaosEngine Resource
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.
`kubectl apply -f chaosengine.yml`
- If the chaos experiment is not executed, refer to the [troubleshooting](https://docs.litmuschaos.io/docs/faq-troubleshooting/)
section to identify the root cause and fix the issues.
### Watch Chaos progress
- View the status of the nodes as they are subjected to node restart.
`watch -n 1 kubectl get nodes`
### Check Chaos Experiment Result
- Check whether the application is resilient to the node restart, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-node-restart -n <application-namespace>`
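For scripted checks, the verdict alone can be extracted with a JSONPath query (the `.status.experimentStatus.verdict` path is an assumption based on common ChaosResult layouts):

```shell
kubectl get chaosresult nginx-chaos-node-restart \
  -n <application-namespace> \
  -o jsonpath='{.status.experimentStatus.verdict}'
```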
### Node Restart Experiment Demo
- A sample recording of this experiment execution will be added soon.
@ -1,7 +1,7 @@
--- ---
id: "node-taint" id: node-taint
title: "Node Taint Experiment Details" title: Node Taint Experiment Details
sidebar_label: "Node Taint" sidebar_label: Node Taint
--- ---
--- ---
@ -24,8 +24,8 @@ sidebar_label: "Node Taint"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `node-taint` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/node-taint/experiment.yaml) - Ensure that the `node-taint` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/node-taint/experiment.yaml)
- Ensure that the node specified in the experiment ENV variable `APP_NODE` (the node which will be tainted) should be cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps: - Ensure that the node specified in the experiment ENV variable `TARGET_NODE` (the node which will be tainted) should be cordoned before execution of the chaos experiment (before applying the chaosengine manifest) to ensure that the litmus experiment runner pods are not scheduled on it / subjected to eviction. This can be achieved with the following steps:
- Get node names against the applications pods: `kubectl get pods -o wide` - Get node names against the applications pods: `kubectl get pods -o wide`
- Cordon the node `kubectl cordon <nodename>` - Cordon the node `kubectl cordon <nodename>`
@ -91,6 +91,7 @@ rules:
"events", "events",
"chaosengines", "chaosengines",
"pods/log", "pods/log",
"pods/exec",
"daemonsets", "daemonsets",
"pods/eviction", "pods/eviction",
"chaosexperiments", "chaosexperiments",
@ -118,6 +119,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setups, create a PodSecurityPolicy (PSP) with the required permissions and subscribe the `chaosServiceAccount` to it to work around the respective limitations. An example of a standard PSP that can be used for Litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -135,7 +138,7 @@ subjects:
<th> Notes </th> <th> Notes </th>
</tr> </tr>
<tr> <tr>
<td> APP_NODE </td> <td> TARGET_NODE </td>
<td> Name of the node to be tainted </td> <td> Name of the node to be tainted </td>
<td> Mandatory </td> <td> Mandatory </td>
<td> </td> <td> </td>
@ -168,7 +171,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -206,8 +209,8 @@ spec:
# provide the node labels # provide the node labels
kubernetes.io/hostname: "node02" kubernetes.io/hostname: "node02"
env: env:
# set node name # set target node name
- name: APP_NODE - name: TARGET_NODE
value: "node-01" value: "node-01"
# set taint label & effect # set taint label & effect
@ -1,7 +1,7 @@
--- ---
id: "openebs-control-plane-chaos" id: openebs-control-plane-chaos
title: "OpenEBS Control Plane Chaos Experiment Details" title: OpenEBS Control Plane Chaos Experiment Details
sidebar_label: "Control Plane Chaos" sidebar_label: Control Plane Chaos
--- ---
--- ---
@ -25,7 +25,7 @@ sidebar_label: "Control Plane Chaos"
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-control-plane-chaos` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the `openebs` namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-control-plane-chaos/experiment.yaml) - Ensure that the `openebs-control-plane-chaos` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the `openebs` namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-control-plane-chaos/experiment.yaml)
## Entry Criteria ## Entry Criteria
@ -147,7 +147,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -1,7 +1,7 @@
--- ---
id: "openebs-nfs-provisioner-kill" id: openebs-nfs-provisioner-kill
title: "OpenEBS NFS Provisioner Kill Chaos Experiment Details" title: OpenEBS NFS Provisioner Kill Chaos Experiment Details
sidebar_label: "NFS Provisioner Kill" sidebar_label: NFS Provisioner Kill
--- ---
--- ---
@ -25,7 +25,7 @@ sidebar_label: "NFS Provisioner Kill"
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-nfs-provisioner-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the `openebs` namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-nfs-provisioner-kill/experiment.yaml) - Ensure that the `openebs-nfs-provisioner-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the `openebs` namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-nfs-provisioner-kill/experiment.yaml)
- The "DATA_PERSISTENCE" env variable takes effect only if the "EXTERNAL_APP_CHECK" is enabled. A stateful busybox deployment is used to create and validate data persistence on the RWM (ReadWriteMany) NFS persistent volumes. - The "DATA_PERSISTENCE" env variable takes effect only if the "EXTERNAL_APP_CHECK" is enabled. A stateful busybox deployment is used to create and validate data persistence on the RWM (ReadWriteMany) NFS persistent volumes.
@ -177,7 +177,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -1,7 +1,7 @@
--- ---
id: "openebs-pool-container-failure" id: openebs-pool-container-failure
title: "OpenEBS Pool Container Failure Experiment Details" title: OpenEBS Pool Container Failure Experiment Details
sidebar_label: "Pool Container Failure" sidebar_label: Pool Container Failure
--- ---
--- ---
@ -23,7 +23,7 @@ sidebar_label: "Pool Container Failure"
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -33,7 +33,7 @@ sidebar_label: "Pool Container Failure"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-container-failure/experiment.yaml) - Ensure that the `openebs-pool-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-container-failure/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials): - For MYSQL data persistence check create a configmap as shown below in the application namespace (replace with actual credentials):
@ -227,7 +227,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -1,7 +1,7 @@
--- ---
id: "openebs-pool-disk-loss" id: openebs-pool-disk-loss
title: "OpenEBS Pool Disk Loss Experiment Details" title: OpenEBS Pool Disk Loss Experiment Details
sidebar_label: "Pool Disk Loss" sidebar_label: Pool Disk Loss
--- ---
--- ---
@ -25,7 +25,7 @@ sidebar_label: "Pool Disk Loss"
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-disk-loss/experiment.yaml) - Ensure that the `openebs-pool-disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-disk-loss/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`.
@ -275,7 +275,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
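The INSTANCE_ID constraint above (chaosresult CR name under 64 characters) is easy to check before running chaos. A minimal local sketch — the engine name `nginx-chaos` is a hypothetical example, while the experiment name and instance id come from this page:

```shell
# The chaosresult CR name is built as <engine>-<experiment>, with INSTANCE_ID
# appended as a suffix; Kubernetes object names must stay short enough.
ENGINE_NAME="nginx-chaos"                # hypothetical engine name
EXPERIMENT_NAME="openebs-pool-disk-loss" # experiment from this page
INSTANCE_ID="04-05-2020-9-00"            # example id from the table above
CR_NAME="${ENGINE_NAME}-${EXPERIMENT_NAME}-${INSTANCE_ID}"
echo "${CR_NAME} (${#CR_NAME} chars)"    # confirm the total stays < 64
```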


@ -1,7 +1,7 @@
--- ---
id: "openebs-pool-network-delay" id: openebs-pool-network-delay
title: "OpenEBS Pool Network Latency Experiment Details" title: OpenEBS Pool Network Latency Experiment Details
sidebar_label: "Pool Network Latency" sidebar_label: Pool Network Latency
--- ---
--- ---
@ -29,7 +29,7 @@ sidebar_label: "Pool Network Latency"
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-network-delay` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-network-delay/experiment.yaml) - Ensure that the `openebs-pool-network-delay` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-network-delay/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`.
@ -227,7 +227,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openebs-pool-network-loss" id: openebs-pool-network-loss
title: "OpenEBS Pool Network Loss Experiment Details" title: OpenEBS Pool Network Loss Experiment Details
sidebar_label: "Pool Network Loss" sidebar_label: Pool Network Loss
--- ---
--- ---
@ -29,7 +29,7 @@ sidebar_label: "Pool Network Loss"
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-network-loss/experiment.yaml) - Ensure that the `openebs-pool-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the specified namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-network-loss/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for `MySQL` and `Busybox`.
@ -228,7 +228,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openebs-pool-pod-failure" id: openebs-pool-pod-failure
title: "OpenEBS Pool Pod Failure Experiment Details" title: OpenEBS Pool Pod Failure Experiment Details
sidebar_label: "Pool Pod Failure" sidebar_label: Pool Pod Failure
--- ---
--- ---
@ -26,7 +26,7 @@ sidebar_label: "Pool Pod Failure"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-pool-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-pool-pod-failure/experiment.yaml) - Ensure that the `openebs-pool-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-pool-pod-failure/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials): - For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
@ -217,7 +217,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openebs-target-container-failure" id: openebs-target-container-failure
title: "OpenEBS Target Container Failure Experiment Details" title: OpenEBS Target Container Failure Experiment Details
sidebar_label: "Target Container Failure" sidebar_label: Target Container Failure
--- ---
--- ---
@ -26,7 +26,7 @@ sidebar_label: "Target Container Failure"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-target-container-failure/experiment.yaml) - Ensure that the `openebs-target-container-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-target-container-failure/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials): - For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
@ -189,7 +189,7 @@ subjects:
<td> LIB_IMAGE </td> <td> LIB_IMAGE </td>
<td> The chaos library image used to run the kill command </td> <td> The chaos library image used to run the kill command </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to {"`gaiaadm/pumba:0.6.5`. Supported: `{docker : gaiaadm/pumba:0.6.5, containerd: gprasath/crictl:ci}`"} </td> <td> Defaults to `gaiaadm/pumba:0.6.5`. Supported: `{'{'}docker : gaiaadm/pumba:0.6.5, containerd: gprasath/crictl:ci{'}'}` </td>
</tr> </tr>
<tr> <tr>
<td> CONTAINER_RUNTIME </td> <td> CONTAINER_RUNTIME </td>
@ -225,7 +225,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
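The LIB_IMAGE default depends on the container runtime, per the supported mapping in the table above. A minimal sketch of that selection — the helper variables are illustrative, not part of the experiment spec:

```shell
# Pick the chaos library image from CONTAINER_RUNTIME, using the
# docker/containerd mapping documented for LIB_IMAGE above.
CONTAINER_RUNTIME="docker"   # or "containerd"
case "$CONTAINER_RUNTIME" in
  docker)     LIB_IMAGE="gaiaadm/pumba:0.6.5" ;;
  containerd) LIB_IMAGE="gprasath/crictl:ci" ;;
  *)          echo "unsupported runtime: $CONTAINER_RUNTIME" >&2; exit 1 ;;
esac
echo "$LIB_IMAGE"
```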


@ -1,7 +1,7 @@
--- ---
id: "openebs-target-network-delay" id: openebs-target-network-delay
title: "OpenEBS Target Network Latency Experiment Details" title: OpenEBS Target Network Latency Experiment Details
sidebar_label: "Target Network Latency" sidebar_label: Target Network Latency
--- ---
--- ---
@ -27,7 +27,7 @@ sidebar_label: "Target Network Latency"
- Ensure that the Kubernetes Cluster uses Docker runtime - Ensure that the Kubernetes Cluster uses Docker runtime
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-network-delay` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-target-network-delay/experiment.yaml) - Ensure that the `openebs-target-network-delay` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-target-network-delay/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials): - For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
@ -237,7 +237,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openebs-target-network-loss" id: openebs-target-network-loss
title: "OpenEBS Target Network Loss Experiment Details" title: OpenEBS Target Network Loss Experiment Details
sidebar_label: "Target Network Loss" sidebar_label: Target Network Loss
--- ---
--- ---
@ -27,7 +27,7 @@ sidebar_label: "Target Network Loss"
- Ensure that the Kubernetes Cluster uses Docker runtime - Ensure that the Kubernetes Cluster uses Docker runtime
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-network-loss` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-target-network-loss/experiment.yaml) - Ensure that the `openebs-target-network-loss` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-target-network-loss/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials): - For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
@ -225,7 +225,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openebs-target-pod-failure" id: openebs-target-pod-failure
title: "OpenEBS Target Pod Failure Experiment Details" title: OpenEBS Target Pod Failure Experiment Details
sidebar_label: "Target Pod Failure" sidebar_label: Target Pod Failure
--- ---
--- ---
@ -26,7 +26,7 @@ sidebar_label: "Target Pod Failure"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `openebs-target-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/openebs/openebs-target-pod-failure/experiment.yaml) - Ensure that the `openebs-target-pod-failure` experiment resource is available in the cluster. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/openebs/openebs-target-pod-failure/experiment.yaml)
- The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox. - The DATA_PERSISTENCE can be enabled by providing the application's info in a configmap volume so that the experiment can perform the necessary checks. Currently, LitmusChaos supports data consistency checks only for MySQL and Busybox.
- For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials): - For the MySQL data persistence check, create a configmap as shown below in the application namespace (replace with actual credentials):
@ -214,7 +214,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "openshift-litmus" id: openshift-litmus
title: "Installation of LitmusChaos on OpenShift" title: Installation of LitmusChaos on OpenShift
sidebar_label: "Install Litmus" sidebar_label: Install Litmus
--- ---
--- ---
@ -33,7 +33,7 @@ Running chaos on your application involves the following steps:
### Install Litmus ### Install Litmus
``` ```
oc apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.9.0.yaml oc apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
``` ```
The above command installs all the CRDs, the required service account configuration, and the chaos-operator. Before you start running a chaos experiment, verify that Litmus is installed correctly. The above command installs all the CRDs, the required service account configuration, and the chaos-operator. Before you start running a chaos experiment, verify that Litmus is installed correctly.
@ -94,13 +94,13 @@ Expected output:
### Install Chaos Experiments ### Install Chaos Experiments
Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as OpenShift CRs. Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as OpenShift CRs.
The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">ChaosHub</a>. The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>.
The generic chaos experiments such as `pod-delete`, `container-kill`, `pod-network-latency` are available under the Generic Chaos Chart. The generic chaos experiments such as `pod-delete`, `container-kill`, `pod-network-latency` are available under the Generic Chaos Chart.
This is the first chart you are recommended to install. This is the first chart you are recommended to install.
``` ```
oc apply -f https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/experiments.yaml -n nginx oc apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/experiments.yaml -n nginx
``` ```
Verify if the chaos experiments are installed. Verify if the chaos experiments are installed.
@ -145,6 +145,7 @@ rules:
"pods", "pods",
"deployments", "deployments",
"pods/log", "pods/log",
"pods/exec",
"events", "events",
"jobs", "jobs",
"chaosengines", "chaosengines",
@ -245,7 +246,7 @@ oc apply -f chaosengine.yaml
Describe the ChaosResult CR to know the status of each experiment. The `spec.verdict` is set to `Awaited` when the experiment is in progress, eventually changing to either `Pass` or `Fail`. Describe the ChaosResult CR to know the status of each experiment. The `spec.verdict` is set to `Awaited` when the experiment is in progress, eventually changing to either `Pass` or `Fail`.
<strong> NOTE:</strong> ChaosResult CR name will be {"`<chaos-engine-name>-<chaos-experiment-name>`"} <strong> NOTE:</strong> ChaosResult CR name will be `&lt;chaos-engine-name&gt;-&lt;chaos-experiment-name&gt;`
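That naming convention can be reproduced with plain shell interpolation; the values below are the hypothetical engine (`nginx-chaos`) and the `container-kill` experiment used on this page:

```shell
# ChaosResult CR name follows "<chaos-engine-name>-<chaos-experiment-name>"
CHAOS_ENGINE="nginx-chaos"        # example engine name
CHAOS_EXPERIMENT="container-kill" # experiment run by that engine
CHAOSRESULT_NAME="${CHAOS_ENGINE}-${CHAOS_EXPERIMENT}"
echo "$CHAOSRESULT_NAME"
```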
```console ```console
oc describe chaosresult nginx-chaos-container-kill -n nginx oc describe chaosresult nginx-chaos-container-kill -n nginx
@ -256,7 +257,7 @@ oc describe chaosresult nginx-chaos-container-kill -n nginx
You can uninstall Litmus by deleting the namespace. You can uninstall Litmus by deleting the namespace.
```console ```console
oc delete -f https://litmuschaos.github.io/litmus/litmus-operator-v1.9.0.yaml oc delete -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
``` ```
## More Chaos Experiments ## More Chaos Experiments


@ -1,7 +1,7 @@
--- ---
id: "plugins" id: plugins
title: "Using other chaos libraries as plugins" title: Using other chaos libraries as plugins
sidebar_label: "Plugins" sidebar_label: Plugins
--- ---
--- ---
@ -13,7 +13,7 @@ Litmus provides a way to use any chaos library or a tool to inject chaos. The ch
The `plugins` or `chaos-libraries` host the core logic to inject chaos. The `plugins` or `chaos-libraries` host the core logic to inject chaos.
These plugins are hosted at https://github.com/litmuschaos/litmus-ansible/tree/master/chaoslib These plugins are hosted at https://github.com/litmuschaos/litmus-go/tree/master/chaoslib
Litmus project has integration into the following chaos-libraries. Litmus project has integration into the following chaos-libraries.
@ -21,7 +21,7 @@ Litmus project has integration into the following chaos-libraries.
| ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| <a href="https://github.com/litmuschaos/litmus" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50" /> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog`, `network-chaos`, `disk-chaos`, `memory-hog` | | <a href="https://github.com/litmuschaos/litmus" target="_blank">Litmus</a> | <img src="https://camo.githubusercontent.com/953211f24c1c246f7017703f67b9779e4589bf76/68747470733a2f2f6c616e6473636170652e636e63662e696f2f6c6f676f732f6c69746d75732e737667" width="50" /> | Litmus native chaos libraries that encompasses the chaos capabilities for `pod-kill`, `container-kill`, `cpu-hog`, `network-chaos`, `disk-chaos`, `memory-hog` |
| <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"/> | Pumba provides chaos capabilities for `network-delay` | | <a href="https://github.com/alexei-led/pumba" target="_blank">Pumba</a> | <img src="https://github.com/alexei-led/pumba/raw/master/docs/img/pumba_logo.png" width="50"/> | Pumba provides chaos capabilities for `network-delay` |
| <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/bloomberg/powerfulseal/raw/master/media/powerful-seal.png" width="50"/> | PowerfulSeal provides chaos capabilities for `pod-kill` | | <a href="https://github.com/bloomberg/powerfulseal" target="_blank">PowerfulSeal</a> | <img src="https://github.com/powerfulseal/powerfulseal/raw/master/docs/media/powerful-seal.svg" width="50"/> | PowerfulSeal provides chaos capabilities for `pod-kill` |
| | | | | | | |
Usage of plugins is a configuration parameter inside the chaos experiment. Usage of plugins is a configuration parameter inside the chaos experiment.


@ -1,7 +1,7 @@
--- ---
id: "pod-autoscaler" id: pod-autoscaler
title: "Scale the application replicas and test the node autoscaling on cluster" title: Scale the application replicas and test the node autoscaling on cluster
sidebar_label: "Pod Autoscaler" sidebar_label: Pod Autoscaler
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Pod Autoscaler"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-autoscaler` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-autoscaler/experiment.yaml) - Ensure that the `pod-autoscaler` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-autoscaler/experiment.yaml)
## Entry Criteria ## Entry Criteria
@ -87,6 +87,7 @@ rules:
"events", "events",
"chaosengines", "chaosengines",
"pods/log", "pods/log",
"pods/exec",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
] ]
@ -112,6 +113,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -156,7 +159,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>


@ -1,7 +1,7 @@
--- ---
id: "pod-cpu-hog" id: pod-cpu-hog
title: "Pod CPU Hog Details" title: Pod CPU Hog Details
sidebar_label: "Pod CPU Hog" sidebar_label: Pod CPU Hog
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Pod CPU Hog"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-cpu-hog/experiment.yaml) - Ensure that the `pod-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-cpu-hog/experiment.yaml)
## Entry Criteria ## Entry Criteria
@ -111,6 +111,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -158,16 +160,16 @@ subjects:
<td> Defaults to <code>gaiaadm/pumba</code> </td> <td> Defaults to <code>gaiaadm/pumba</code> </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod cpu hog chaos</td> <td> Comma-separated list of application pod names subjected to pod cpu hog chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided, it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on the provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Default to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> CHAOS_INJECT_COMMAND </td> <td> CHAOS_INJECT_COMMAND </td>
@ -179,7 +181,7 @@ subjects:
<td> CHAOS_KILL_COMMAND </td> <td> CHAOS_KILL_COMMAND </td>
<td> The command to kill the chaos process</td> <td> The command to kill the chaos process</td>
<td> Optional </td> <td> Optional </td>
<td> Default to <code>kill {"$(find /proc -name exe -lname '*/md5sum' 2>&1 | grep -v 'Permission denied' | awk -F/ '{print $(NF-1)}' | head -n 1"}</code> </td> <td> Default to <code>kill $(find /proc -name exe -lname '*/md5sum' 2&gt;&amp;1 | grep -v 'Permission denied' | awk -F/ '{'{'}print $(NF-1){'}'}' | head -n 1)</code> </td>
</tr> </tr>
<tr> <tr>
<td> RAMP_TIME </td> <td> RAMP_TIME </td>
@ -197,7 +199,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name. </td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name. </td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -217,8 +219,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo: appinfo:
appns: "default" appns: "default"
applabel: "app=nginx" applabel: "app=nginx"
@ -244,6 +244,12 @@ spec:
- name: TOTAL_CHAOS_DURATION - name: TOTAL_CHAOS_DURATION
value: "60" # in seconds value: "60" # in seconds
- name: CHAOS_INJECT_COMMAND
value: "md5sum /dev/zero"
- name: CHAOS_KILL_COMMAND
value: "kill -9 $(ps afx | grep \"[md5sum] /dev/zero\" | awk '{print $1}' | tr '\n' ' ')"
``` ```
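The `CHAOS_INJECT_COMMAND`/`CHAOS_KILL_COMMAND` pair above relies on a common shell idiom: grep the process list with the first character of the target bracketed (so the grep process itself never matches), extract the PID column with awk, and hand the PIDs to `kill`. A minimal sketch of that idiom against a harmless stand-in process; `sleep` here is only an illustration, the experiment actually targets `md5sum /dev/zero`:

```shell
#!/bin/sh
# Stand-in for the injected stress process
sleep 300 &

# "[s]leep" matches the running "sleep 300" but not this grep's own
# command line, so only the real target PID(s) are selected.
pids=$(ps ax | grep "[s]leep 300" | awk '{print $1}' | tr '\n' ' ')

# Terminate every matched PID, as CHAOS_KILL_COMMAND does for md5sum
kill -9 $pids
```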
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource


@ -1,7 +1,7 @@
--- ---
id: "pod-delete" id: pod-delete
title: "Pod Delete Experiment Details" title: Pod Delete Experiment Details
sidebar_label: "Pod Delete" sidebar_label: Pod Delete
--- ---
--- ---
@ -17,14 +17,14 @@ sidebar_label: "Pod Delete"
<tr> <tr>
<td> Generic </td> <td> Generic </td>
<td> Fail the application pod </td> <td> Fail the application pod </td>
<td> GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, EKS, AKS </td> <td> GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, EKS, AKS, TKGi(VMware) </td>
</tr> </tr>
</table> </table>
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-delete/experiment.yaml) - Ensure that the `pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-delete/experiment.yaml)
## Entry Criteria ## Entry Criteria
@ -86,6 +86,7 @@ rules:
"pods", "pods",
"deployments", "deployments",
"pods/log", "pods/log",
"pods/exec",
"events", "events",
"jobs", "jobs",
"chaosengines", "chaosengines",
@ -168,6 +169,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (PSP) with the required permissions; the `chaosServiceAccount` can then be bound to it to work around the respective limitations. An example of a standard PSP that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -208,16 +211,16 @@ subjects:
<td> Default to `true`, With `terminationGracePeriodSeconds=0` </td> <td> Default to `true`, With `terminationGracePeriodSeconds=0` </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod delete chaos</td> <td> Comma separated list of application pod names subjected to pod delete chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> RAMP_TIME </td> <td> RAMP_TIME </td>
@ -235,7 +238,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -258,8 +261,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
chaosServiceAccount: pod-delete-sa chaosServiceAccount: pod-delete-sa
monitoring: false monitoring: false
# It can be delete/retain # It can be delete/retain
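Since the hunk above shows only a fragment of the manifest, a complete minimal pod-delete ChaosEngine, pieced together from the surrounding hunks, might look like the following; the name, namespace, label, and durations are illustrative values:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos # illustrative name
  namespace: default
spec:
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  appinfo:
    appns: "default" # assumed target namespace
    applabel: "app=nginx" # assumed target label
    appkind: "deployment"
  chaosServiceAccount: pod-delete-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30" # in seconds
            - name: CHAOS_INTERVAL
              value: "10"
            - name: FORCE
              value: "false"
```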


@ -1,7 +1,7 @@
--- ---
id: "pod-io-stress" id: pod-io-stress
title: "Pod IO Stress Details" title: Pod IO Stress Details
sidebar_label: "Pod IO Stress" sidebar_label: Pod IO Stress
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Pod IO Stress"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-io-stress` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-io-stress/experiment.yaml) - Ensure that the `pod-io-stress` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-io-stress/experiment.yaml)
- Cluster must run docker container runtime - Cluster must run docker container runtime
## Entry Criteria ## Entry Criteria
@ -110,6 +110,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (PSP) with the required permissions; the `chaosServiceAccount` can then be bound to it to work around the respective limitations. An example of a standard PSP that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -144,18 +146,18 @@ subjects:
<td> Optional </td> <td> Optional </td>
<td> Default to 4 </td> <td> Default to 4 </td>
</tr> </tr>
<tr>
<td> TARGET_POD </td>
<td> Name of the application pod subjected to IO stress chaos</td>
<td> Optional </td>
<td> If not provided it will select from the appLabel provided</td>
</tr>
<tr> <tr>
<td> TOTAL_CHAOS_DURATION </td> <td> TOTAL_CHAOS_DURATION </td>
<td> The time duration for chaos (seconds) </td> <td> The time duration for chaos (seconds) </td>
<td> Optional </td> <td> Optional </td>
<td> Default to 120s </td> <td> Default to 120s </td>
</tr> </tr>
<tr>
<td> VOLUME_MOUNT_PATH </td>
<td> Fill the given volume mount path</td>
<td> Optional </td>
<td> </td>
</tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
<td> The chaos lib used to inject the chaos </td> <td> The chaos lib used to inject the chaos </td>
@ -169,16 +171,16 @@ subjects:
<td> Default to <code>gaiaadm/pumba</code> </td> <td> Default to <code>gaiaadm/pumba</code> </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod io stress chaos</td> <td> Comma separated list of application pod names subjected to pod io stress chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Default to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> RAMP_TIME </td> <td> RAMP_TIME </td>
@ -196,7 +198,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name. </td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name. </td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -216,8 +218,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo: appinfo:
appns: "default" appns: "default"
applabel: "app=nginx" applabel: "app=nginx"


@ -1,7 +1,7 @@
--- ---
id: "pod-memory-hog" id: pod-memory-hog
title: "Pod Memory Hog Details" title: Pod Memory Hog Details
sidebar_label: "Pod Memory Hog" sidebar_label: Pod Memory Hog
--- ---
--- ---
@ -24,7 +24,7 @@ sidebar_label: "Pod Memory Hog"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-memory-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-memory-hog/experiment.yaml) - Ensure that the `pod-memory-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-memory-hog/experiment.yaml)
- Cluster must run docker container runtime - Cluster must run docker container runtime
## Entry Criteria ## Entry Criteria
@ -111,6 +111,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (PSP) with the required permissions; the `chaosServiceAccount` can then be bound to it to work around the respective limitations. An example of a standard PSP that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -158,22 +160,22 @@ subjects:
<td> Defaults to <code>gaiaadm/pumba</code> </td> <td> Defaults to <code>gaiaadm/pumba</code> </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod memory hog chaos</td> <td> Comma separated list of application pod names subjected to pod memory hog chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> CHAOS_KILL_COMMAND </td> <td> CHAOS_KILL_COMMAND </td>
<td> The command to kill the chaos process</td> <td> The command to kill the chaos process</td>
<td> Optional </td> <td> Optional </td>
<td> Default to <code>kill {"$(find /proc -name exe -lname '*/dd' 2>&1 | grep -v 'Permission denied' | awk -F/ '{print $(NF-1)}' | head -n 1"}</code> </td> <td> Default to <code>kill $(find /proc -name exe -lname '*/dd' 2&gt;&amp;1 | grep -v 'Permission denied' | awk -F/ '{'{'}print $(NF-1){'}'}' | head -n 1)</code></td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> RAMP_TIME </td> <td> RAMP_TIME </td>
@ -191,7 +193,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -211,8 +213,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
appinfo: appinfo:
appns: "default" appns: "default"
applabel: "app=nginx" applabel: "app=nginx"
@ -237,6 +237,9 @@ spec:
- name: TOTAL_CHAOS_DURATION - name: TOTAL_CHAOS_DURATION
value: "60" # in seconds value: "60" # in seconds
- name: CHAOS_KILL_COMMAND
value: "kill -9 $(ps afx | grep \"[dd] if /dev/zero\" | awk '{print $1}' | tr '\n' ' ')"
``` ```
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource


@ -1,7 +1,7 @@
--- ---
id: "pod-network-corruption" id: pod-network-corruption
title: "Pod Network Corruption Experiment Details" title: Pod Network Corruption Experiment Details
sidebar_label: "Pod Network Corruption" sidebar_label: Pod Network Corruption
--- ---
--- ---
@ -24,14 +24,9 @@ sidebar_label: "Pod Network Corruption"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-corruption` experiment resource is available in the cluster by executing the `kubectl get chaosexperiments` command. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-network-corruption/experiment.yaml) - Ensure that the `pod-network-corruption` experiment resource is available in the cluster by executing the `kubectl get chaosexperiments` command. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-corruption/experiment.yaml)
- Cluster must run docker container runtime - Cluster must run docker container runtime
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
</div>
## Entry Criteria ## Entry Criteria
- Application pods are healthy before chaos injection - Application pods are healthy before chaos injection
@ -88,6 +83,7 @@ rules:
"jobs", "jobs",
"events", "events",
"pods/log", "pods/log",
"pods/exec",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
@ -113,6 +109,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (PSP) with the required permissions; the `chaosServiceAccount` can then be bound to it to work around the respective limitations. An example of a standard PSP that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -150,13 +148,13 @@ subjects:
<td> CONTAINER_RUNTIME </td> <td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td> <td> container runtime interface for the cluster</td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to docker, supported values: docker, containerd, crio </td> <td> Defaults to docker; supported values: docker, containerd, and crio for the litmus LIB, and only docker for the pumba LIB </td>
</tr> </tr>
<tr> <tr>
<td> SOCKET_PATH </td> <td> SOCKET_PATH </td>
<td> Path of the containerd/crio socket file </td> <td> Path of the containerd/crio/docker socket file </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to `/run/containerd/containerd.sock` </td> <td> Defaults to `/var/run/docker.sock` </td>
</tr> </tr>
<tr> <tr>
<td> TOTAL_CHAOS_DURATION </td> <td> TOTAL_CHAOS_DURATION </td>
@ -165,34 +163,34 @@ subjects:
<td> Default (60s) </td> <td> Default (60s) </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod network corruption chaos</td> <td> Comma separated list of application pod names subjected to pod network corruption chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_IPs </td> <td> DESTINATION_IPS </td>
<td> Destination ips for network chaos </td> <td> IP addresses of the services or pods whose accessibility is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations</td> <td> if not provided, it will induce network chaos for all ips/destinations</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_HOSTS </td> <td> DESTINATION_HOSTS </td>
<td> Destination hosts for network chaos </td> <td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations or TARGET_IPs if already defined</td> <td> if not provided, it will induce network chaos for all ips/destinations or DESTINATION_IPS if already defined</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
<td> The chaos lib used to inject the chaos </td> <td> The chaos lib used to inject the chaos </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to litmus, only litmus supported </td> <td> Default value: litmus, supported values: pumba and litmus </td>
</tr> </tr>
<tr> <tr>
<td> TC_IMAGE </td> <td> TC_IMAGE </td>
@ -222,7 +220,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
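To scope the chaos to specific destinations, the renamed `DESTINATION_IPS`/`DESTINATION_HOSTS` variables described above go in the experiment's env section; the addresses below are placeholder values, not defaults:

```yaml
experiments:
  - name: pod-network-corruption
    spec:
      components:
        env:
          # Only traffic to these destinations is impacted (placeholder values)
          - name: DESTINATION_IPS
            value: "10.0.15.21,10.0.15.22"
          - name: DESTINATION_HOSTS
            value: "my-service.default.svc.cluster.local"
```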
@ -244,8 +242,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false monitoring: false
appinfo: appinfo:
appns: "default" appns: "default"
@ -258,13 +254,6 @@ spec:
spec: spec:
components: components:
env: env:
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: "nginx"
- name: LIB_IMAGE
value: "litmuschaos/go-runner:latest"
#Network interface inside target container #Network interface inside target container
- name: NETWORK_INTERFACE - name: NETWORK_INTERFACE
value: "eth0" value: "eth0"
@ -273,15 +262,14 @@ spec:
value: "60" # in seconds value: "60" # in seconds
# provide the name of container runtime # provide the name of container runtime
# it supports docker, containerd, crio # for litmus LIB, it supports docker, containerd, crio
# default to docker # for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME - name: CONTAINER_RUNTIME
value: "docker" value: "docker"
# provide the socket file path # provide the socket file path
# applicable only for containerd and crio runtime
- name: SOCKET_PATH - name: SOCKET_PATH
value: "/run/containerd/containerd.sock" value: "/var/run/docker.sock"
``` ```
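The sample above assumes the docker runtime. On a containerd cluster the same two variables change together, as the old side of this diff shows; the socket path below is the typical containerd default, so verify it on your nodes:

```yaml
# for litmus LIB, it supports docker, containerd, crio
- name: CONTAINER_RUNTIME
  value: "containerd"
# provide the socket file path for the chosen runtime
- name: SOCKET_PATH
  value: "/run/containerd/containerd.sock"
```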
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource


@ -1,7 +1,7 @@
--- ---
id: "pod-network-duplication" id: pod-network-duplication
title: "Pod Network Duplication Experiment Details" title: Pod Network Duplication Experiment Details
sidebar_label: "Pod Network Duplication" sidebar_label: Pod Network Duplication
--- ---
--- ---
@ -24,11 +24,7 @@ sidebar_label: "Pod Network Duplication"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-duplication` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-network-duplication/experiment.yaml) - Ensure that the `pod-network-duplication` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-duplication/experiment.yaml)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
</div>
## Entry Criteria ## Entry Criteria
@ -84,6 +80,7 @@ rules:
"jobs", "jobs",
"events", "events",
"pods/log", "pods/log",
"pods/exec",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
@ -109,6 +106,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy (PSP) with the required permissions; the `chaosServiceAccount` can then be bound to it to work around the respective limitations. An example of a standard PSP that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -152,43 +151,43 @@ subjects:
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> CONTAINER_RUNTIME </td> <td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td> <td> container runtime interface for the cluster</td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to docker, supported values: docker, containerd, crio </td> <td> Defaults to docker; supported values: docker, containerd, and crio for the litmus LIB, and only docker for the pumba LIB </td>
</tr> </tr>
<tr> <tr>
<td> SOCKET_PATH </td> <td> SOCKET_PATH </td>
<td> Path of the containerd/crio socket file </td> <td> Path of the containerd/crio/docker socket file </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to `/run/containerd/containerd.sock` </td> <td> Defaults to `/var/run/docker.sock` </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod network duplication chaos</td> <td> Comma separated list of application pod names subjected to pod network duplication chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_IPs </td> <td> DESTINATION_IPS </td>
<td> Destination ips for network chaos </td> <td> IP addresses of the services or pods whose accessibility is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations</td> <td> if not provided, it will induce network chaos for all ips/destinations</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_HOSTS </td> <td> DESTINATION_HOSTS </td>
<td> Destination hosts for network chaos </td> <td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations or TARGET_IPs if already defined</td> <td> if not provided, it will induce network chaos for all ips/destinations or DESTINATION_IPS if already defined</td>
</tr> </tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
<td> The chaos lib used to inject the chaos </td> <td> The chaos lib used to inject the chaos </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to litmus, only litmus supported </td> <td> Default value: litmus, supported values: pumba and litmus </td>
</tr> </tr>
<tr> <tr>
<td> TC_IMAGE </td> <td> TC_IMAGE </td>
@ -218,7 +217,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -241,8 +240,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false monitoring: false
appinfo: appinfo:
appns: "default" appns: "default"
@ -258,9 +255,6 @@ spec:
- name: TOTAL_CHAOS_DURATION - name: TOTAL_CHAOS_DURATION
value: "60" # in seconds value: "60" # in seconds
- name: LIB_IMAGE
value: "litmuschaos/go-runner:latest"
#Network interface inside target container #Network interface inside target container
- name: NETWORK_INTERFACE - name: NETWORK_INTERFACE
value: "eth0" value: "eth0"
@ -268,20 +262,15 @@ spec:
- name: NETWORK_PACKET_DUPLICATION_PERCENTAGE - name: NETWORK_PACKET_DUPLICATION_PERCENTAGE
value: "100" value: "100"
#If not provided it will take the first container of the target pod
- name: TARGET_CONTAINER
value: ""
# provide the name of container runtime # provide the name of container runtime
# it supports docker, containerd, crio # for litmus LIB, it supports docker, containerd, crio
# default to docker # for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME - name: CONTAINER_RUNTIME
value: "docker" value: "docker"
# provide the socket file path # provide the socket file path
# applicable only for containerd and crio runtime
- name: SOCKET_PATH - name: SOCKET_PATH
value: "/run/containerd/containerd.sock" value: "/var/run/docker.sock"
``` ```
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource


@ -1,7 +1,7 @@
--- ---
id: "pod-network-latency" id: pod-network-latency
title: "Pod Network Latency Experiment Details" title: Pod Network Latency Experiment Details
sidebar_label: "Pod Network Latency" sidebar_label: Pod Network Latency
--- ---
--- ---
@ -24,12 +24,7 @@ sidebar_label: "Pod Network Latency"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-latency` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-network-latency/experiment.yaml) - Ensure that the `pod-network-latency` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-latency/experiment.yaml)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
</div>
## Entry Criteria ## Entry Criteria
@ -87,6 +82,7 @@ rules:
"pods", "pods",
"jobs", "jobs",
"pods/log", "pods/log",
"pods/exec",
"events", "events",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
@ -113,6 +109,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -144,7 +142,7 @@ subjects:
<td> NETWORK_LATENCY </td> <td> NETWORK_LATENCY </td>
<td> The latency/delay in milliseconds </td> <td> The latency/delay in milliseconds </td>
<td> Optional </td> <td> Optional </td>
<td> Default (60000ms) </td> <td> Default 2000, provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> TOTAL_CHAOS_DURATION </td> <td> TOTAL_CHAOS_DURATION </td>
@ -153,46 +151,46 @@ subjects:
<td> Default (60s) </td> <td> Default (60s) </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod network latency chaos</td> <td> Comma separated list of application pod name subjected to pod network latency chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_IPs </td> <td> DESTINATION_IPS </td>
<td> Destination ips for network chaos </td> <td> IP addresses of the services or pods, the accessibility to which, is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations</td> <td> if not provided, it will induce network chaos for all ips/destinations</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_HOSTS </td> <td> DESTINATION_HOSTS </td>
<td> Destination hosts for network chaos </td> <td> DNS Names/FQDN names of the services, the accessibility to which, is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations or TARGET_IPs if already defined</td> <td> if not provided, it will induce network chaos for all ips/destinations or DESTINATION_IPS if already defined</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> CONTAINER_RUNTIME </td> <td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td> <td> container runtime interface for the cluster</td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to docker, supported values: docker, containerd, crio </td> <td> Defaults to docker, supported values: docker, containerd and crio for litmus and only docker for pumba LIB </td>
</tr> </tr>
<tr> <tr>
<td> SOCKET_PATH </td> <td> SOCKET_PATH </td>
<td> Path of the containerd/crio socket file </td> <td> Path of the containerd/crio/docker socket file </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to `/run/containerd/containerd.sock` </td> <td> Defaults to `/var/run/docker.sock` </td>
</tr> </tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
<td> The chaos lib used to inject the chaos </td> <td> The chaos lib used to inject the chaos </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to litmus, only litmus supported </td> <td> Default value: litmus, supported values: pumba and litmus </td>
</tr> </tr>
<tr> <tr>
<td> TC_IMAGE </td> <td> TC_IMAGE </td>
@ -221,7 +219,7 @@ subjects:
<tr> <tr>
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
<td> </td> <td> </td>
</tr> </tr>
@ -244,8 +242,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false monitoring: false
appinfo: appinfo:
appns: "default" appns: "default"
@ -258,33 +254,25 @@ spec:
spec: spec:
components: components:
env: env:
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: "nginx"
#Network interface inside target container #Network interface inside target container
- name: NETWORK_INTERFACE - name: NETWORK_INTERFACE
value: "eth0" value: "eth0"
- name: LIB_IMAGE
value: "litmuschaos/go-runner:latest"
- name: NETWORK_LATENCY - name: NETWORK_LATENCY
value: "60000" value: "2000"
- name: TOTAL_CHAOS_DURATION - name: TOTAL_CHAOS_DURATION
value: "60" # in seconds value: "60" # in seconds
# provide the name of container runtime # provide the name of container runtime
# it supports docker, containerd, crio # for litmus LIB, it supports docker, containerd, crio
# default to docker # for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME - name: CONTAINER_RUNTIME
value: "docker" value: "docker"
# provide the socket file path # provide the socket file path
# applicable only for containerd and crio runtime
- name: SOCKET_PATH - name: SOCKET_PATH
value: "/run/containerd/containerd.sock" value: "/var/run/docker.sock"
``` ```
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource

View File

@ -1,7 +1,7 @@
--- ---
id: "pod-network-loss" id: pod-network-loss
title: "Pod Network Loss Experiment Details" title: Pod Network Loss Experiment Details
sidebar_label: "Pod Network Loss" sidebar_label: Pod Network Loss
--- ---
--- ---
@ -24,11 +24,7 @@ sidebar_label: "Pod Network Loss"
## Prerequisites ## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus) - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/pod-network-loss/experiment.yaml) - Ensure that the `pod-network-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-loss/experiment.yaml)
<div class="danger">
<strong>NOTE</strong>:
Experiment is supported only on Docker Runtime. Support for containerd/CRIO runtimes will be added in subsequent releases.
</div>
## Entry Criteria ## Entry Criteria
@ -84,6 +80,7 @@ rules:
"jobs", "jobs",
"events", "events",
"pods/log", "pods/log",
"pods/exec",
"chaosengines", "chaosengines",
"chaosexperiments", "chaosexperiments",
"chaosresults", "chaosresults",
@ -109,6 +106,8 @@ subjects:
namespace: default namespace: default
``` ```
**_Note:_** In case of restricted systems/setup, create a PodSecurityPolicy(psp) with the required permissions. The `chaosServiceAccount` can subscribe to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
### Prepare ChaosEngine ### Prepare ChaosEngine
- Provide the application info in `spec.appinfo` - Provide the application info in `spec.appinfo`
@ -149,47 +148,46 @@ subjects:
<td> Default (60s) </td> <td> Default (60s) </td>
</tr> </tr>
<tr> <tr>
<td> TARGET_POD </td> <td> TARGET_PODS </td>
<td> Name of the application pod subjected to pod network loss chaos</td> <td> Comma separated list of application pod name subjected to pod network loss chaos</td>
<td> Optional </td> <td> Optional </td>
<td> If not provided it will select from the appLabel provided</td> <td> If not provided, it will select target pods randomly based on provided appLabels</td>
<td> If not provided it will select from the app label provided</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_IPs </td> <td> DESTINATION_IPS </td>
<td> Destination ips for network chaos </td> <td> IP addresses of the services or pods, the accessibility to which, is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations</td> <td> if not provided, it will induce network chaos for all ips/destinations</td>
</tr> </tr>
<tr> <tr>
<td> TARGET_HOSTS </td> <td> DESTINATION_HOSTS </td>
<td> Destination hosts for network chaos </td> <td> DNS Names/FQDN names of the services, the accessibility to which, is impacted </td>
<td> Optional </td> <td> Optional </td>
<td> if not provided, it will induce network chaos for all ips/destinations or TARGET_IPs if already defined</td> <td> if not provided, it will induce network chaos for all ips/destinations or DESTINATION_IPS if already defined</td>
</tr> </tr>
<tr> <tr>
<td> PODS_AFFECTED_PERC </td> <td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td> <td> The Percentage of total pods to target </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td> <td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr> </tr>
<tr> <tr>
<td> CONTAINER_RUNTIME </td> <td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td> <td> container runtime interface for the cluster</td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to docker, supported values: docker, containerd, crio </td> <td> Defaults to docker, supported values: docker, containerd and crio for litmus and only docker for pumba LIB </td>
</tr> </tr>
<tr> <tr>
<td> SOCKET_PATH </td> <td> SOCKET_PATH </td>
<td> Path of the containerd/crio socket file </td> <td> Path of the containerd/crio/docker socket file </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to `/run/containerd/containerd.sock` </td> <td> Defaults to `/var/run/docker.sock` </td>
</tr> </tr>
<tr> <tr>
<td> LIB </td> <td> LIB </td>
<td> The chaos lib used to inject the chaos </td> <td> The chaos lib used to inject the chaos </td>
<td> Optional </td> <td> Optional </td>
<td> Defaults to litmus, only litmus supported </td> <td> Default value: litmus, supported values: pumba and litmus </td>
</tr> </tr>
<tr> <tr>
<td> TC_IMAGE </td> <td> TC_IMAGE </td>
@ -219,7 +217,7 @@ subjects:
<td> INSTANCE_ID </td> <td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td> <td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
<td> Optional </td> <td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still {"<"} 64 characters </td> <td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr> </tr>
</table> </table>
@ -242,8 +240,6 @@ spec:
annotationCheck: "true" annotationCheck: "true"
# It can be active/stop # It can be active/stop
engineState: "active" engineState: "active"
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ""
monitoring: false monitoring: false
appinfo: appinfo:
appns: "default" appns: "default"
@ -256,13 +252,6 @@ spec:
spec: spec:
components: components:
env: env:
#Container name where chaos has to be injected
- name: TARGET_CONTAINER
value: "nginx"
- name: LIB_IMAGE
value: "litmuschaos/go-runner:latest"
#Network interface inside target container #Network interface inside target container
- name: NETWORK_INTERFACE - name: NETWORK_INTERFACE
value: "eth0" value: "eth0"
@ -274,15 +263,14 @@ spec:
value: "60" # in seconds value: "60" # in seconds
# provide the name of container runtime # provide the name of container runtime
# it supports docker, containerd, crio # for litmus LIB, it supports docker, containerd, crio
# default to docker # for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME - name: CONTAINER_RUNTIME
value: "docker" value: "docker"
# provide the socket file path # provide the socket file path
# applicable only for containerd and crio runtime
- name: SOCKET_PATH - name: SOCKET_PATH
value: "/run/containerd/containerd.sock" value: "/var/run/docker.sock"
``` ```
### Create the ChaosEngine Resource ### Create the ChaosEngine Resource

30
website/docs/portal.md Normal file
View File

@ -0,0 +1,30 @@
---
id: portal
title: Litmus Portal
sidebar_label: Litmus Portal (beta-0)
---
---
## What is the Litmus Portal
It is a centralized web portal for creating, scheduling, and monitoring [Chaos Workflows](https://docs.litmuschaos.io/docs/chaos-workflows/).
The Litmus Portal simplifies the chaos engineering experience for users by providing multiple features, some of which are listed below. It is
in the `beta-0` phase as of the 1.11.0 release & is undergoing active [development](https://github.com/litmuschaos/litmus/tree/master/litmus-portal).
- Ability to launch and manage chaos across Kubernetes clusters (connected as "targets" to the portal)
- Basic authentication with support for organization teaming to collaborate on experiments
- Wizard to construct workflows by selecting, tuning and ordering experiments from the public [ChaosHub](https://hub.litmuschaos.io) or an alternate
ChaosExperiment source (structured similarly, i.e., essentially a [fork](https://github.com/litmuschaos/chaos-charts) of the public source with custom experiments)
- Assignment of weights for chaos experiments in a workflow and derivation of Resilience Score for each workflow run
- Support for repeated execution via workflow schedule
- Chaos workflow visualization with on-demand log-lookup for individual chaos pods/resources
- Dashboards to view chaos workflow status & history
- Analytics to compare resilience scores across workflow runs based on custom timelines
The portal also allows execution of "predefined chaos workflows" that can be uploaded on demand to aid further customization, especially in cases where the workflows
involve other Kubernetes actions (such as load generation) apart from chaos experiments.
Refer to the Litmus Portal [User Guide](https://docs.google.com/document/d/1fiN25BrZpvqg0UkBCuqQBE7Mx8BwDGC8ss2j2oXkZNA/edit#) to get started with the installation and usage.
<img src={require("./assets/portal-arch.jpg").default} width="800" />

View File

@ -1,7 +1,7 @@
--- ---
id: "prerequisites" id: prerequisites
title: "Litmus Pre-Requisites" title: Litmus Pre-Requisites
sidebar_label: "Pre-Requisites" sidebar_label: Pre-Requisites
--- ---
--- ---

View File

@ -1,7 +1,7 @@
--- ---
id: "rancher-litmus" id: rancher-litmus
title: "Installation and Troubleshooting of LitmusChaos on Rancher" title: Installation and Troubleshooting of LitmusChaos on Rancher
sidebar_label: "Install and Troubleshoot Litmus" sidebar_label: Install and Troubleshoot Litmus
--- ---
--- ---
@ -69,7 +69,7 @@ The nginx default web site should be available now.
Followed the steps in the [Getting Started Guide](https://docs.litmuschaos.io/docs/getstarted/)\* to install litmus in a `nginx` namespace with an nginx application. Followed the steps in the [Getting Started Guide](https://docs.litmuschaos.io/docs/getstarted/)\* to install litmus in a `nginx` namespace with an nginx application.
Download `litmus-operator-v1.9.0.yaml` from https://litmuschaos.github.io/litmus/litmus-operator-v1.9.0.yaml. Download `litmus-operator-v1.10.0.yaml` from https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml.
Modify it to use the `nginx` namespace in three places (at lines 10, 41, and 47 approximately). Modify it to use the `nginx` namespace in three places (at lines 10, 41, and 47 approximately).
Install the litmus-operator in `nginx` application namespace using kubectl. Install the litmus-operator in `nginx` application namespace using kubectl.
@ -143,13 +143,13 @@ chaosresults litmuschaos.io tr
### Install Chaos Experiments ### Install Chaos Experiments
Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as Kubernetes Custom Resources (CRs). Chaos experiments contain the actual chaos details. These experiments are installed on your cluster as Kubernetes Custom Resources (CRs).
The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">ChaosHub</a>.. The Chaos Experiments are grouped as Chaos Charts and are published on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>.
The generic chaos experiments such as `pod-delete`, `container-kill`,` pod-network-latency` are available under Generic Chaos Chart. The generic chaos experiments such as `pod-delete`, `container-kill`,` pod-network-latency` are available under Generic Chaos Chart.
This is the first chart you are recommended to install. This is the first chart you are recommended to install.
``` ```
$ kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/experiments.yaml -n nginx $ kubectl apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/experiments.yaml -n nginx
``` ```
Expected output: Expected output:
@ -415,7 +415,7 @@ You can uninstall Litmus by deleting the namespace.
```console ```console
kubectl delete -f chaosengine.yaml -n nginx kubectl delete -f chaosengine.yaml -n nginx
kubectl delete -f rbac.yaml -n nginx kubectl delete -f rbac.yaml -n nginx
kubectl delete -f https://hub.litmuschaos.io/api/chaos/1.9.0?file=charts/generic/experiments.yaml -n nginx kubectl delete -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/experiments.yaml -n nginx
kubectl delete -f litmus-operator.yaml -n nginx kubectl delete -f litmus-operator.yaml -n nginx
``` ```

View File

@ -6,17 +6,32 @@ sidebar_label: Resources
--- ---
## Chaos Demos ## LitmusChaos Resources
### Getting Started ### LitmusChaos Objective & High Level Design
Use this video to learn how to get started with Litmus. You will learn how to install Litmus, how to inject a fault into your application using one of the experiments available at ChaosHub. This video introduces you to LitmusChaos! Your one-stop toolset to carry out Chaos Engineering for Kubernetes applications. One of the most important tools for Kubernetes Developers, SREs, and DevOps engineers to ensure resilient Kubernetes infrastructure.
<a href="https://asciinema.org/a/G9TcXpgikLuGTBY7btIUNSuWN" target="_blank"> Check the video <a href="https://www.youtube.com/watch?v=ep6yxp_23Bk&list=PLmM1fgu30seVGFyNIEyDgAq6KnzgW2p3m&index=2&t=13s">here</a>
<img src={require('./assets/getstarted.svg').default} width="700"/> <hr/>
</a> ### LitmusChaos Architecture 101
This video discusses the LitmusChaos Architecture! The CRDs required for practicing Chaos Engineering on Kubernetes are explained here along with:
- Anatomy of a chaos experiment
- The detailed workflow of a chaos engine
Check the video <a href="https://www.youtube.com/watch?v=L38gBn8eEHw&list=PLmM1fgu30seVGFyNIEyDgAq6KnzgW2p3m&index=3&t=6s">here</a>
<hr/>
### LitmusChaos Component Details
This video takes you around the components of LitmusChaos easing your walkthrough of Litmus as a Chaos Engineering toolset and helping you achieve immense knowledge of each minute component of Litmus to help deploy chaos to your Kubernetes application as easily as possible.
Check the video <a href="https://www.youtube.com/watch?v=yhWgzN90SME&list=PLmM1fgu30seVGFyNIEyDgAq6KnzgW2p3m&index=5&t=3674s">here</a>
<hr/> <hr/>

View File

@ -1,7 +1,7 @@
--- ---
id: "scheduling" id: scheduling
title: "Scheduler Usage" title: Scheduler Usage
sidebar_label: "Scheduling" sidebar_label: Scheduling
--- ---
--- ---

View File

@ -1,3 +1,4 @@
const versions = require("./versions.json");
const communities = [ const communities = [
{ {
label: "Slack", label: "Slack",
@ -41,10 +42,10 @@ module.exports = {
tagline: "A website for testing", tagline: "A website for testing",
url: "https://docs.litmuschaos.io", url: "https://docs.litmuschaos.io",
baseUrl: "/", baseUrl: "/",
onBrokenLinks: "throw", onBrokenLinks: "ignore",
favicon: "img/favicon.ico", favicon: "img/favicon.ico",
organizationName: "litmuschaos", // Usually your GitHub org/user name. organizationName: "litmuschaos",
projectName: "litmus", // Usually your repo name. projectName: "litmus",
themeConfig: { themeConfig: {
navbar: { navbar: {
title: "Litmus Docs", title: "Litmus Docs",
@ -54,6 +55,31 @@ module.exports = {
src: "img/litmus-light-icon.svg", src: "img/litmus-light-icon.svg",
}, },
items: [ items: [
{
type: "docsVersion",
position: "right",
},
{
activeBasePath: "Version",
label: "Versions",
position: "left",
items: [
// adding items will create a dropdown
{
label: versions[0],
to: "docs/",
activeBaseRegex: `docs/(?!${versions.join("|")}|next)`,
},
...versions.slice(1).map((version) => ({
label: version,
to: `docs/${version}/`,
})),
{
label: "Master/Unreleased",
to: "docs/next/",
},
],
},
{ {
href: "https://github.com/litmuschaos/litmus", href: "https://github.com/litmuschaos/litmus",
label: "GitHub", label: "GitHub",
@ -95,9 +121,10 @@ module.exports = {
"@docusaurus/preset-classic", "@docusaurus/preset-classic",
{ {
docs: { docs: {
routeBasePath: "docs",
sidebarPath: require.resolve("./sidebars.js"), sidebarPath: require.resolve("./sidebars.js"),
// Please change this to your repo. editUrl: "https://github.com/litmuschaos/litmus-docs-beta/edit/staging/",
editUrl: "https://github.com/litmuschaos/litmus", showLastUpdateTime: true,
}, },
theme: { theme: {
customCss: require.resolve("./src/css/custom.css"), customCss: require.resolve("./src/css/custom.css"),

View File

@ -6,17 +6,8 @@ module.exports = {
"plugins", "plugins",
"architecture", "architecture",
"resources", "resources",
"community",
"devguide", "devguide",
], {
"Litmus Demo": ["litmus-demo"],
Concepts: [
"chaosengine",
"chaosexperiment",
"chaosschedule",
"chaosresult",
"litmus-probe",
],
Platforms: [ Platforms: [
{ {
type: "category", type: "category",
@ -29,6 +20,36 @@ module.exports = {
items: ["rancher-litmus"], items: ["rancher-litmus"],
}, },
], ],
},
],
"Litmus Demo": ["litmus-demo"],
Concepts: [
{
type: "category",
label: "Custom Resources",
items: [
"chaosengine",
"chaosexperiment",
"chaosschedule",
"chaosresult",
],
},
{
type: "category",
label: "Hypothesis",
items: ["litmus-probe"],
},
{
type: "category",
label: "Operational Modes",
items: ["admin-mode", "namespaced-mode"],
},
{
type: "category",
label: "Security",
items: ["litmus-psp"],
},
],
Experiments: [ Experiments: [
{ {
type: "category", type: "category",
@ -53,15 +74,15 @@ module.exports = {
"pod-autoscaler", "pod-autoscaler",
"Kubernetes-Chaostoolkit-Application", "Kubernetes-Chaostoolkit-Application",
"Kubernetes-Chaostoolkit-Service", "Kubernetes-Chaostoolkit-Service",
"Kubernetes-Chaostoolkit-Cluster-Kiam",
"pod-io-stress", "pod-io-stress",
"node-io-stress", "node-io-stress",
"node-restart",
], ],
}, },
{ {
type: "category", type: "category",
label: "Kube-AWS", label: "Kube-AWS",
items: ["Kubernetes-Chaostoolkit-AWS"], items: ["Kubernetes-Chaostoolkit-AWS", "ebs-loss", "ec2-terminate"],
}, },
{ {
type: "category", type: "category",
@ -96,10 +117,25 @@ module.exports = {
label: "Cassandra", label: "Cassandra",
items: ["cassandra-pod-delete"], items: ["cassandra-pod-delete"],
}, },
{
type: "category",
label: "Kube-Components",
items: [
"Kubernetes-Chaostoolkit-Cluster-Kiam",
"Kubernetes-Chaostoolkit-Cluster-active-monitor-controller",
"Kubernetes-Chaostoolkit-Cluster-alb-ingress-controller",
"Kubernetes-Chaostoolkit-Cluster-kube-proxy",
"Kubernetes-Chaostoolkit-Cluster-prometheus-k8s-prometheus",
"Kubernetes-Chaostoolkit-Cluster-prometheus-pushgateway",
"Kubernetes-Chaostoolkit-Cluster-prometheus-operator",
"Kubernetes-Chaostoolkit-Cluster-Calico-Node",
"Kubernetes-Chaostoolkit-Cluster-Wavefront",
],
},
], ],
Scheduler: ["scheduling"], Scheduler: ["scheduling"],
"Chaos Workflow": ["chaos-workflows"], "Chaos Workflow": ["chaos-workflows"],
"Litmus FAQs": ["faq-general", "faq-troubleshooting"], FAQs: ["faq-general"],
Advanced: ["admin-mode", "namespaced-mode"], Troubleshooting: ["faq-troubleshooting"],
}, },
}; };

View File

@ -7,13 +7,13 @@
/* You can override the default Infima variables here. */ /* You can override the default Infima variables here. */
:root { :root {
--ifm-color-primary: #25c2a0; --ifm-color-primary: #3578e5;
--ifm-color-primary-dark: rgb(33, 175, 144); --ifm-color-primary-dark: #1d68e1;
--ifm-color-primary-darker: rgb(31, 165, 136); --ifm-color-primary-darker: #1b62d4;
--ifm-color-primary-darkest: rgb(26, 136, 112); --ifm-color-primary-darkest: #1751af;
--ifm-color-primary-light: rgb(70, 203, 174); --ifm-color-primary-light: #4e89e8;
--ifm-color-primary-lighter: rgb(102, 212, 189); --ifm-color-primary-lighter: #5a91ea;
--ifm-color-primary-lightest: rgb(146, 224, 208); --ifm-color-primary-lightest: #80aaef;
--ifm-code-font-size: 95%; --ifm-code-font-size: 95%;
} }
@ -23,3 +23,39 @@
margin: 0 calc(-1 * var(--ifm-pre-padding)); margin: 0 calc(-1 * var(--ifm-pre-padding));
padding: 0 var(--ifm-pre-padding); padding: 0 var(--ifm-pre-padding);
} }
/* your custom css */
@media only screen and (max-width: 735px) {
.nav-footer .sitemap > div {
padding-right: 32px;
padding-left: 0;
}
}
@media only screen and (min-width: 735px) {
.nav-footer .sitemap > div {
padding: 0 32px;
}
}
.fixedHeaderContainer {
background: #23232a;
}
/* .fixedHeaderContainer header .headerTitleWithLogo { */
/* font-size: 2em; */
/* font-weight: 100; */
/* } */
.footerText {
text-align: left;
color: white;
}
.nav-footer .sitemap div:first-child {
flex-grow: 4;
}
.nav-footer .sitemap div:last-child {
padding-left: 0;
}

View File

@ -1,7 +1,7 @@
import React from "react"; import React from "react";
import { Redirect } from "@docusaurus/router"; import { Redirect } from "@docusaurus/router";
function Home() { function Home() {
return <Redirect to="/docs/getstarted" />; return <Redirect to="/docs/" />;
} }
export default Home; export default Home;

View File

@ -35,3 +35,15 @@
height: 200px; height: 200px;
width: 200px; width: 200px;
} }
.main {
padding: 12px;
}
.heading {
font-weight: bold;
}
.contents {
color: #ccc;
}

View File

@ -0,0 +1,143 @@
---
id: admin-mode
title: Administrator Mode
sidebar_label: Administrator Mode
original_id: admin-mode
---
------
### What is Administrator Mode?
Admin mode is one of the ways chaos orchestration is set up in Litmus, wherein all chaos resources (i.e., install-time resources like the operator, chaosexperiment CRs, and chaosServiceAccount/rbac, as well as runtime resources like chaosengine, chaos-runner, experiment jobs & chaosresults) are set up in a single admin namespace (typically, `litmus`). In other words, centralized administration of chaos.
This feature is aimed at making the SRE/Cluster Admin's life easier by doing away with setting up chaos prerequisites on a per-namespace basis (which may be more relevant in an autonomous/self-service cluster-sharing model in dev environments).
This mode typically needs a "wider" & "stronger" ClusterRole, albeit one that is still just a superset of the individual experiment permissions. In this mode, the applications in their respective namespaces are subjected to chaos while the chaos job runs elsewhere, i.e., admin namespace.
### How to use Administrator Mode?
In order to use Admin Mode, you just have to create a ServiceAccount in the *admin* or so-called *chaos* namespace (`litmus` itself can be used), tied to a ClusterRole that has the permissions to perform operations on the Kubernetes resources involved in the selected experiments across namespaces.
Provide this ServiceAccount in ChaosEngine's .spec.chaosServiceAccount.
### Example
#### Prepare Chaos Experiment
- Select a Chaos Experiment from [hub.litmuschaos.io](https://hub.litmuschaos.io/) and click on the `INSTALL EXPERIMENT` button.
```bash
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.10.0?file=charts/generic/pod-delete/experiment.yaml -n litmus
```
#### Prepare RBAC Manifest
Here is an RBAC definition, which is in essence a superset of the individual experiments' RBAC, with the permissions to run all chaos experiments across different namespaces.
[embedmd]:# (https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml)
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-admin
namespace: litmus
labels:
name: litmus-admin
---
# Source: openebs/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-admin
labels:
name: litmus-admin
rules:
- apiGroups: ["","apps","batch","extensions","litmuschaos.io"]
resources: ["pods","pods/exec","pods/eviction","jobs","daemonsets","events","chaosresults","chaosengines"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
- apiGroups: ["","apps","litmuschaos.io","apps.openshift.io","argoproj.io"]
resources: ["configmaps","secrets","services","chaosexperiments","pods/log","replicasets","deployments","statefulsets","deploymentconfigs","rollouts","services"]
verbs: ["get","list","patch","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-admin
labels:
name: litmus-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-admin
subjects:
- kind: ServiceAccount
name: litmus-admin
namespace: litmus
```
#### Prepare ChaosEngine
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: litmus # Chaos Resources Namespace
spec:
  appinfo:
    appns: 'default' # Application Namespace
    applabel: 'app=nginx'
    appkind: 'deployment'
  # It can be true/false
  annotationCheck: 'true'
  # It can be active/stop
  engineState: 'active'
  # ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ''
  chaosServiceAccount: litmus-admin
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: 'delete'
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: '30'
            # set chaos interval (in sec) as desired
            - name: CHAOS_INTERVAL
              value: '10'
            # pod failures without '--force' & default terminationGracePeriodSeconds
            - name: FORCE
              value: 'false'
```
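With the values above, pod-delete terminates a pod roughly every `CHAOS_INTERVAL` seconds for `TOTAL_CHAOS_DURATION` seconds. The following is a back-of-envelope sketch of that cadence, not the experiment runner's exact loop, which may differ at the window boundaries:

```python
# Rough cadence implied by the ChaosEngine env vars above.
total_chaos_duration = 30  # TOTAL_CHAOS_DURATION, seconds
chaos_interval = 10        # CHAOS_INTERVAL, seconds

iterations = total_chaos_duration // chaos_interval
print(iterations)  # 3 pod deletions over the chaos window
```

Tune the two values together: a shorter interval within the same duration means more kill cycles, and therefore a harsher test of the application's recovery.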
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
### Watch Chaos Engine
- Describe the ChaosEngine to track the progress of the chaos steps.
`kubectl describe chaosengine nginx-chaos -n litmus`
### Watch Chaos progress
- View pod terminations & recovery by setting up a watch on the pods in the application namespace
`watch -n 1 kubectl get pods -n default`
### Check Chaos Experiment Result
- Once the experiment (job) has completed, check whether the application was resilient to the pod failure. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult nginx-chaos-pod-delete -n litmus`
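The name-derivation rule above can be sketched as:

```python
# ChaosResult name = <ChaosEngine-Name>-<ChaosExperiment-Name>
engine_name = "nginx-chaos"      # metadata.name of the ChaosEngine
experiment_name = "pod-delete"   # experiment name from .spec.experiments

chaosresult_name = f"{engine_name}-{experiment_name}"
print(chaosresult_name)  # nginx-chaos-pod-delete
```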

View File

@ -0,0 +1,45 @@
---
id: architecture
title: Litmus Architecture
sidebar_label: Architecture
---
<hr/>
<img src={require('./assets/litmus-schematic.png').default} width="800" />
**Chaos-Operator**
Chaos-Operator watches for the ChaosEngine CR and executes the Chaos-Experiments mentioned in the CR. Chaos-Operator is namespace-scoped; by default, it runs in the `litmus` namespace. Once an experiment completes, the Chaos-Operator invokes the Chaos-Exporter to export chaos metrics to a Prometheus database.
**Chaos-CRDs**
During installation, the following three CRDs are installed on the Kubernetes cluster.
`chaosengines.litmuschaos.io`
`chaosexperiments.litmuschaos.io`
`chaosresults.litmuschaos.io`
**Chaos-Experiments**
A Chaos Experiment is a CR, and experiments are available as YAML files on <a href="https://hub.litmuschaos.io" target="_blank">Chaos Hub</a>. For more details, visit the Chaos Hub [documentation](chaoshub.md).
**Chaos-Engine**
The ChaosEngine CR links an application to one or more experiments. The user creates the ChaosEngine YAML by specifying the application label and the desired experiments, then applies the CR. The Chaos-Operator watches the CR and executes the chaos experiments against the given application.
**Chaos-Exporter**
Optionally, metrics can be exported to a Prometheus database; the Chaos-Exporter implements the Prometheus metrics endpoint.
<br/>
<br/>
<hr/>
<br/>
<br/>

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

View File

@ -0,0 +1,429 @@
[SVG asset, not reproduced: an asciinema terminal recording rendered to SVG. The left pane builds and applies a ChaosEngine CR to unleash chaos (`vi chaosengine.yaml`); the right pane watches the application pods (`kubectl get po`, showing `hello-deploy-dd59b8956-hxcjv 1/1 Running`) and repeatedly probes the app, printing `Hello World is online HTTP/2 200 OK`.]
</tspan>
<tspan y="81.818%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="84.091%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="86.364%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="88.636%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="90.909%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan><tspan x="50.000%" class="br c-10">H</tspan><tspan x="50.625%" class="br c-10">e</tspan><tspan x="51.250%" class="br c-10">l</tspan><tspan x="51.875%" class="br c-10">l</tspan><tspan x="52.500%" class="br c-10">o</tspan><tspan x="53.750%" class="br c-10">W</tspan><tspan x="54.375%" class="br c-10">o</tspan><tspan x="55.000%" class="br c-10">r</tspan><tspan x="55.625%" class="br c-10">l</tspan><tspan x="56.250%" class="br c-10">d</tspan><tspan x="57.500%" class="br c-10">i</tspan><tspan x="58.125%" class="br c-10">s</tspan><tspan x="59.375%" class="br c-10">o</tspan><tspan x="60.000%" class="br c-10">n</tspan><tspan x="60.625%" class="br c-10">l</tspan><tspan x="61.250%" class="br c-10">i</tspan><tspan x="61.875%" class="br c-10">n</tspan><tspan x="62.500%" class="br c-10">e</tspan><tspan x="63.750%">H</tspan><tspan x="64.375%">T</tspan><tspan x="65.000%">T</tspan><tspan x="65.625%">P</tspan><tspan x="66.250%">/</tspan><tspan x="66.875%">2</tspan><tspan x="68.125%">2</tspan><tspan x="68.750%">0</tspan><tspan x="69.375%">0</tspan><tspan x="70.625%">O</tspan><tspan x="71.250%">K</tspan>
</tspan>
<tspan y="93.182%">
<tspan dy="1em" x="49.375%" class="c-2"></tspan>
</tspan>
<tspan y="95.455%">
<tspan dy="1em" x="0.000%" class="c-0">[</tspan><tspan x="0.625%" class="c-0">d</tspan><tspan x="1.250%" class="c-0">e</tspan><tspan x="1.875%" class="c-0">m</tspan><tspan x="2.500%" class="c-0">o</tspan><tspan x="3.125%" class="c-0">]</tspan><tspan x="4.375%" class="c-0">0</tspan><tspan x="5.000%" class="c-0">:</tspan><tspan x="5.625%" class="c-0">s</tspan><tspan x="6.250%" class="c-0">s</tspan><tspan x="6.875%" class="c-0">h</tspan><tspan x="7.500%" class="c-0">*</tspan><tspan x="75.625%" class="c-0">&quot;</tspan><tspan x="76.250%" class="c-0">r</tspan><tspan x="76.875%" class="c-0">a</tspan><tspan x="77.500%" class="c-0">h</tspan><tspan x="78.125%" class="c-0">u</tspan><tspan x="78.750%" class="c-0">l</tspan><tspan x="79.375%" class="c-0">-</tspan><tspan x="80.000%" class="c-0">T</tspan><tspan x="80.625%" class="c-0">h</tspan><tspan x="81.250%" class="c-0">i</tspan><tspan x="81.875%" class="c-0">n</tspan><tspan x="82.500%" class="c-0">k</tspan><tspan x="83.125%" class="c-0">P</tspan><tspan x="83.750%" class="c-0">a</tspan><tspan x="84.375%" class="c-0">d</tspan><tspan x="85.000%" class="c-0">-</tspan><tspan x="85.625%" class="c-0">E</tspan><tspan x="86.250%" class="c-0">4</tspan><tspan x="86.875%" class="c-0">9</tspan><tspan x="87.500%" class="c-0">0</tspan><tspan x="88.125%" class="c-0">&quot;</tspan><tspan x="89.375%" class="c-0">0</tspan><tspan x="90.000%" class="c-0">1</tspan><tspan x="90.625%" class="c-0">:</tspan><tspan x="91.250%" class="c-0">0</tspan><tspan x="91.875%" class="c-0">2</tspan><tspan x="93.125%" class="c-0">0</tspan><tspan x="93.750%" class="c-0">5</tspan><tspan x="94.375%" class="c-0">-</tspan><tspan x="95.000%" class="c-0">O</tspan><tspan x="95.625%" class="c-0">c</tspan><tspan x="96.250%" class="c-0">t</tspan><tspan x="96.875%" class="c-0">-</tspan><tspan x="97.500%" class="c-0">1</tspan><tspan x="98.125%" class="c-0">9</tspan>
</tspan>
</text>
<g transform="translate(-50 -50)">
<svg x="50%" y="50%" width="100" height="100">
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 866.0254037844387 866.0254037844387">
<defs>
<mask id="small-triangle-mask">
<rect width="100%" height="100%" fill="white"/>
<polygon points="508.01270189221935 433.01270189221935, 208.0127018922194 259.8076211353316, 208.01270189221927 606.217782649107" fill="black"></polygon>
</mask>
</defs>
<polygon points="808.0127018922194 433.01270189221935, 58.01270189221947 -1.1368683772161603e-13, 58.01270189221913 866.0254037844386" mask="url(#small-triangle-mask)" fill="white"></polygon>
<polyline points="481.2177826491071 333.0127018922194, 134.80762113533166 533.0127018922194" stroke="white" stroke-width="90"></polyline>
</svg>
</svg>
</g>
</svg>
</svg>

View File

@ -0,0 +1,312 @@
---
id: cassandra-pod-delete
title: Cassandra Pod Delete Experiment Details
sidebar_label: Cassandra Pod Delete
original_id: cassandra-pod-delete
---
## Experiment Metadata
<table>
<tr>
<th> Type </th>
<th> Description </th>
<th> Tested K8s Platform </th>
</tr>
<tr>
<td> Cassandra </td>
<td> Fail the Cassandra statefulset pod</td>
<td> GKE, Konvoy(AWS), Packet(Kubeadm), Minikube, EKS </td>
</tr>
</table>
## Prerequisites
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `cassandra-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.10.0?file=charts/cassandra/cassandra-pod-delete/experiment.yaml)
## Entry Criteria
- Cassandra pods are healthy before chaos injection
- The load should be distributed across the replicas.
## Exit Criteria
- Cassandra pods are healthy post chaos injection
- The load should be distributed across the replicas.
## Details
- Causes (forced/graceful) pod failure of specific/random replicas of a Cassandra statefulset
- Tests Cassandra sanity (replica availability & uninterrupted service) and the recovery workflow of the Cassandra statefulset.
## Integrations
- Pod failures can be injected by setting the `LIB` env to `litmus`
## Steps to Execute the Chaos Experiment
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
### Prepare chaosServiceAccount
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
#### Sample RBAC Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.10.x/charts/cassandra/cassandra-pod-delete/rbac.yaml yaml"
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cassandra-pod-delete-sa
  namespace: default
  labels:
    name: cassandra-pod-delete-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cassandra-pod-delete-sa
  namespace: default
  labels:
    name: cassandra-pod-delete-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
    resources:
      [
        "pods",
        "deployments",
        "statefulsets",
        "services",
        "pods/log",
        "pods/exec",
        "events",
        "jobs",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs:
      ["create", "list", "get", "patch", "update", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cassandra-pod-delete-sa
  namespace: default
  labels:
    name: cassandra-pod-delete-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cassandra-pod-delete-sa
subjects:
  - kind: ServiceAccount
    name: cassandra-pod-delete-sa
    namespace: default
```
### Prepare ChaosEngine
- Provide the application info in `spec.appinfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)
#### Supported Experiment Tunables
<table>
<tr>
<th> Variables </th>
<th> Description </th>
<th> Specify In ChaosEngine </th>
<th> Notes </th>
</tr>
<tr>
<td> CASSANDRA_SVC_NAME </td>
<td> Cassandra Service Name </td>
<td> Mandatory </td>
<td> Default value: cassandra </td>
</tr>
<tr>
<td> KEYSPACE_REPLICATION_FACTOR </td>
<td> Replication factor for the keyspace created by the Cassandra liveness deployment</td>
<td> Mandatory </td>
<td> A keyspace with this replication factor is created while checking the liveness of Cassandra</td>
</tr>
<tr>
<td> CASSANDRA_PORT </td>
<td> Port of the cassandra statefulset </td>
<td> Mandatory </td>
<td> Default value: 9042 </td>
</tr>
<tr>
<td> CASSANDRA_LIVENESS_CHECK </td>
<td> Enables a liveness check on the Cassandra statefulset </td>
<td> Optional </td>
<td> It can be `enabled` or `disabled` </td>
</tr>
<tr>
<td> CASSANDRA_LIVENESS_IMAGE </td>
<td> Image of the cassandra liveness deployment </td>
<td> Optional </td>
<td> Default value: litmuschaos/cassandra-client:latest </td>
</tr>
<tr>
<td> SEQUENCE </td>
<td> Defines the sequence of chaos execution for multiple target pods </td>
<td> Optional </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr>
<tr>
<td> TOTAL_CHAOS_DURATION </td>
<td> The time duration for chaos insertion (seconds) </td>
<td> Optional </td>
<td> Defaults to 15s </td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Optional </td>
<td> Defaults to 0% (corresponds to 1 replica) </td>
</tr>
<tr>
<td> CHAOS_INTERVAL </td>
<td> Time interval between two successive pod failures (sec) </td>
<td> Optional </td>
<td> Defaults to 5s </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
<td> Optional </td>
<td> Defaults to <code>litmus</code>. Supported <code>litmus</code> only </td>
</tr>
<tr>
<td> FORCE </td>
<td> Application pod deletion mode. `false` indicates graceful deletion with the default termination period of 30s. `true` indicates an immediate forceful deletion with a 0s grace period </td>
<td> Optional </td>
<td> Defaults to `true`, with `terminationGracePeriodSeconds=0` </td>
</tr>
<tr>
<td> RAMP_TIME </td>
<td> Period to wait before injection of chaos in sec </td>
<td> Optional </td>
<td> </td>
</tr>
<tr>
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix in the chaosresult CR name.</td>
<td> Optional </td>
<td> Ensure that the overall length of the chaosresult CR is still &lt; 64 characters </td>
</tr>
</table>
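As an illustration of how the optional tunables above are supplied, the following fragment (the values here are illustrative choices, not defaults) overrides a few of them under `experiments.spec.components.env`:

```yaml
# Illustrative overrides for optional tunables, placed under
# experiments.spec.components.env in the ChaosEngine
- name: PODS_AFFECTED_PERC
  value: "50" # target 50% of the replicas instead of the default single replica
- name: SEQUENCE
  value: "serial" # delete targeted pods one after another instead of in parallel
- name: TOTAL_CHAOS_DURATION
  value: "60" # run chaos for 60s instead of the default 15s
```

A full ChaosEngine carrying such overrides follows the same shape as the sample manifest in this document.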
#### Sample ChaosEngine Manifest
[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.10.x/charts/cassandra/cassandra-pod-delete/engine.yaml yaml"
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: cassandra-chaos
  namespace: default
spec:
  appinfo:
    appns: "default"
    applabel: "app=cassandra"
    appkind: "statefulset"
  # It can be true/false
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  chaosServiceAccount: cassandra-pod-delete-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: cassandra-pod-delete
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: "15"
            # set chaos interval (in sec) as desired
            - name: CHAOS_INTERVAL
              value: "15"
            # pod failures without '--force' & default terminationGracePeriodSeconds
            - name: FORCE
              value: "false"
            # provide cassandra service name
            # default service: cassandra
            - name: CASSANDRA_SVC_NAME
              value: "cassandra"
            # provide the keyspace replication factor
            - name: KEYSPACE_REPLICATION_FACTOR
              value: "3"
            # provide cassandra port
            # default port: 9042
            - name: CASSANDRA_PORT
              value: "9042"
            # SET THE CASSANDRA_LIVENESS_CHECK
            # IT CAN BE `enabled` OR `disabled`
            - name: CASSANDRA_LIVENESS_CHECK
              value: ""
```
### Create the ChaosEngine Resource
- Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.
`kubectl apply -f chaosengine.yml`
- If the chaos experiment is not executed, refer to the [troubleshooting](https://docs.litmuschaos.io/docs/faq-troubleshooting/)
section to identify the root cause and fix the issues.
### Watch Chaos progress
- View pod terminations & recovery by setting up a watch on the pods in the application namespace
`watch -n 1 kubectl get pods -n <application-namespace>`
### Abort/Restart the Chaos Experiment
- To stop the pod-delete experiment immediately, either delete the ChaosEngine resource or execute the following command:
`kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'`
- To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:
`kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'`
### Check Chaos Experiment Result
- Check whether the Cassandra statefulset is resilient to the pod failure once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.
`kubectl describe chaosresult cassandra-chaos-cassandra-pod-delete -n <cassandra-namespace>`
## Cassandra Pod Failure Demo
- It will be added soon.

View File

@ -0,0 +1,123 @@
---
id: chaos-workflows
title: Chaos Workflows with Argo and LitmusChaos
sidebar_label: Chaos Workflows
original_id: chaos-workflows
---
When simulating real-world failures via chaos injection on development/staging environments as part of a left-shifted,
continuous validation strategy, it is preferable to construct a potential failure sequence rather than execute standalone chaos
injection actions. Often, this translates into injecting failures under a certain workload condition (say, at a percentage of peak load),
or into multiple (parallel) failures of dependent & independent services, etc.
Chaos Workflows are a set of actions strung together to achieve a desired chaos impact on a Kubernetes cluster.
They are an effective mechanism to simulate real-world conditions & gauge application behaviour.
This document specifies the procedure to set up and execute a simple chaos workflow that executes pod-kill chaos on
an nginx deployment while a benchmark run is in progress.
## Install Argo Workflow Infrastructure
The Argo workflow infrastructure consists of the Argo workflow CRDs, Workflow Controller, associated RBAC & Argo CLI.
The steps below install Argo in the standard cluster-wide mode, where the workflow controller operates on all
namespaces. Ensure that you have the required permissions to create these resources.
- Create argo namespace
```
kubectl create ns argo
```
- Create the CRDs, workflow controller deployment with associated RBAC
```
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml -n argo
```
- Install the argo CLI on the test harness machine (where the kubeconfig is available)
```
curl -sLO https://github.com/argoproj/argo/releases/download/v2.8.0/argo-linux-amd64
```
```
chmod +x argo-linux-amd64
```
```
mv ./argo-linux-amd64 /usr/local/bin/argo
```
## Install a Sample Application: Nginx
- Install a simple multi-replica stateless Nginx deployment with service exposed over nodeport
```
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/chaos-workflows/master/App/nginx.yaml
```
```
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/chaos-workflows/master/App/service.yaml
```
## Install Litmus Infrastructure
- Apply the LitmusChaos Operator manifest:
```
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.10.0.yaml
```
- Install the litmus-admin service account to be used by the chaos-operator while executing the experiment (this example
uses the [admin-mode](https://docs.litmuschaos.io/docs/next/admin-mode/) of chaos execution)
### Install the RBAC & experiment CR for litmus
```
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml
```
- Install the pod-delete chaos experiment
```
kubectl apply -f https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-delete/experiment.yaml -n litmus
```
- **Note**: If you are interested in using chaostoolkit to perform the pod-delete, instead of the native litmus lib, you can apply
this [rbac](https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/k8-pod-delete/Cluster/rbac-admin.yaml)
& [experiment](https://hub.litmuschaos.io/api/chaos/1.10.0?file=charts/generic/k8-pod-delete/experiment.yaml) manifests instead
of the ones described above.
- Create the service account and associated RBAC, which will be used by the Argo workflow controller to execute the
actions specified in the workflow. In our case, this corresponds to the launch of the Nginx benchmark job and creating
the chaosengine to trigger the pod-delete chaos action. In our example, we place it in the namespace where the litmus
chaos resources reside, i.e., litmus.
```
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/chaos-workflows/master/Argo/argo-access.yaml -n litmus
```
## Create the Chaos Workflow
- Applying the workflow manifest performs the following actions in parallel:
- Starts an Nginx benchmark job for the specified duration (300s)
- Triggers a random pod-kill of the Nginx replica by creating the chaosengine CR.
- Cleans up after chaos.
```
argo submit https://raw.githubusercontent.com/litmuschaos/chaos-workflows/master/Argo/argowf-native-pod-delete.yaml -n litmus
```
- **Note**: If you are using the chaostoolkit experiment, submit [this](https://raw.githubusercontent.com/litmuschaos/chaos-workflows/master/Argo/argowf-chaos-admin.yaml) workflow manifest instead.
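Under the hood, the submitted manifest is an Argo `Workflow` resource. The skeleton below is an illustrative sketch (not the actual manifest; the template names, images, and service account are assumptions) of how a benchmark step and a chaos step can be declared to run in parallel:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: nginx-chaos- # a generated suffix is appended per run
spec:
  entrypoint: pod-delete
  serviceAccountName: argo-chaos # hypothetical SA with chaos permissions
  templates:
    - name: pod-delete
      steps:
        # steps in the same inner list are executed in parallel by Argo
        - - name: run-benchmark
            template: nginx-benchmark
          - name: run-chaos
            template: trigger-pod-delete
    - name: nginx-benchmark
      container:
        image: alpine # placeholder; the real workflow uses a benchmark image
        command: ["sh", "-c", "echo simulating a 300s benchmark run"]
    - name: trigger-pod-delete
      container:
        image: lachlanevenson/k8s-kubectl # placeholder kubectl image
        command: ["sh", "-c", "kubectl apply -f /tmp/chaosengine.yaml"]
```

The double-dash nesting under `steps` is what expresses parallelism: each inner list is a group whose members start together.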
### Visualize the Chaos Workflow
You can visualize the progress of the chaos workflow via the Argo UI. Convert the argo-server service to type NodePort & view the dashboard at `https://<node-ip>:<nodeport>`
```
kubectl patch svc argo-server -n argo -p '{"spec": {"type": "NodePort"}}'
```

View File

@ -0,0 +1,867 @@
---
id: chaosengine
title: Constructing the ChaosEngine
sidebar_label: ChaosEngine
original_id: chaosengine
---
The ChaosEngine is the main user-facing chaos custom resource with a namespace scope and is designed to hold information
around how the chaos experiments are executed. It connects an application instance with one or more chaos experiments,
while allowing the users to specify run-level details (override experiment defaults, provide new environment variables and
volumes, options to delete or retain experiment pods, etc.). This CR is also updated/patched with the status of the chaos
experiments, making it the single source of truth with respect to the chaos.
This section describes the fields in the ChaosEngine spec and the possible values that can be set against the same.
## State Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.engineState</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to control the state of the chaosengine</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><code>active</code>, <code>stop</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>active</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>engineState</code> in the spec is a user defined flag to trigger chaos. Setting it to <code>active</code> ensures successful execution of chaos. Patching it with <code>stop</code> aborts ongoing experiments. It has a corresponding flag in the chaosengine status field, called <code>engineStatus</code> which is updated by the controller based on actual state of the ChaosEngine.</td>
</tr>
</table>
## Application Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.appinfo.appns</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify namespace of application under test</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>appns</code> in the spec specifies the namespace of the AUT. Usually provided as a quoted string.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.appinfo.applabel</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify unique label of application under test</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)(pattern: "label_key=label_value")</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>applabel</code> in the spec specifies a unique label of the AUT. Usually provided as a quoted string of pattern key=value. Note that if multiple applications share the same label within a given namespace, the AUT is filtered based on the presence of the chaos annotation <code>litmuschaos.io/chaos: "true"</code>. If, however, the <code>annotationCheck</code> is disabled, then a random application (pod) sharing the specified label is selected for chaos.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.appinfo.appkind</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify resource kind of application under test</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><code>deployment</code>, <code>statefulset</code>, <code>daemonset</code>, <code>deploymentconfig</code>, <code>rollout</code></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i> (depends on app type)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>appkind</code> in the spec specifies the Kubernetes resource type of the app deployment. The Litmus ChaosOperator supports chaos on deployments, statefulsets and daemonsets. Application health check routines are dependent on the resource types, in case of some experiments.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.auxiliaryAppInfo</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify one or more app namespace-label pairs whose health is also monitored as part of the chaos experiment, in addition to a primary application specified in the <code>.spec.appInfo</code>. <b>NOTE</b>: If the auxiliary applications are deployed in namespaces other than the AUT, ensure that the chaosServiceAccount is bound to a cluster role and has adequate permissions to list pods on other namespaces. </td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)(pattern: "namespace:label_key=label_value").</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>auxiliaryAppInfo</code> in the spec specifies a (comma-separated) list of namespace-label pairs for downstream (dependent) apps of the primary app specified in <code>.spec.appInfo</code> in case of pod-level chaos experiments. In case of infra-level chaos experiments, this flag specifies those apps that may be directly impacted by chaos and upon which health checks are necessary.</td>
</tr>
</table>
**Note**: Irrespective of the nature of the chaos experiment, i.e., pod-level (single-app impact/lesser blast radius) or infra-level (multi-app impact/higher blast radius), the `.spec.appinfo` must be filled so that the experiment points to at least one primary app whose health is measured as an indicator of the resiliency/success of the chaos experiment.
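To make this concrete, here is a hedged sketch (the label/namespace values are illustrative, reusing the `namespace:label_key=label_value` pattern documented above) of an `appinfo` block paired with `auxiliaryAppInfo`:

```yaml
spec:
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  # comma-separated namespace:label pairs of dependent apps to health-check
  auxiliaryAppInfo: "ns1:name=percona,ns2:run=nginx"
```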
## RBAC Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.chaosServiceAccount</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify serviceaccount used for chaos experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>chaosServiceAccount</code> in the spec specifies the name of the serviceaccount mapped to a role/clusterRole with enough permissions to execute the desired chaos experiment. The minimum permissions needed for any given experiment is provided in the <code>.spec.definition.permissions</code> field of the respective <b>chaosexperiment</b> CR.</td>
</tr>
</table>
## Runtime Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.annotationCheck</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to control annotationChecks on applications as prerequisites for chaos</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>true</code>, <code>false</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>true</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>annotationCheck</code> in the spec controls whether or not the operator checks for the annotation "litmuschaos.io/chaos" to be set against the application under test (AUT). Setting it to <code>true</code> ensures the check is performed, with chaos being skipped if the app is not annotated, while setting it to <code>false</code> suppresses this check and proceeds with chaos injection.</td>
</tr>
</table>
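When `annotationCheck` is `true`, the AUT must carry the chaos annotation; a minimal sketch (the deployment name is illustrative) of opting a workload in:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx # illustrative application under test
  annotations:
    litmuschaos.io/chaos: "true" # required when annotationCheck is "true"
spec:
  # remainder of the deployment spec is unchanged
```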
<table>
<tr>
<th>Field</th>
<td><code>.spec.monitoring</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to enable collection of simple chaos metrics</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>true</code>, <code>false</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>false</code></td>
</tr>
<tr>
<th>Notes</th>
<td><code>monitoring</code> in the spec enables or disables collection of chaos metrics with an exporter pod. Metrics include count of experiments in a chaosengine & individual experiment status. It is recommended to keep this disabled.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.jobCleanupPolicy</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to control cleanup of chaos experiment job post execution of chaos</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>delete</code>, <code>retain</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>delete</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>jobCleanupPolicy</code> controls whether or not the experiment pods are removed once execution completes. Set to <code>retain</code> for debug purposes (in the absence of standard logging mechanisms).</td>
</tr>
</table>
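The runtime flags above can be combined in the ChaosEngine spec as shown in this illustrative snippet:

```yaml
spec:
  # skip the annotation check and inject chaos unconditionally
  annotationCheck: "false"
  # chaos metrics collection is disabled (the recommended setting)
  monitoring: false
  # retain the experiment job pods for post-run debugging
  jobCleanupPolicy: retain
```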
## Component Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.image</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify image of ChaosRunner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i> (refer <i>Notes</i>)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.image</code> allows developers to specify their own debug runner images. Defaults for the runner image can be enforced via the operator env <b>CHAOS_RUNNER_IMAGE</b></td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.imagePullPolicy</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify imagePullPolicy for the ChaosRunner</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>Always</code>, <code>IfNotPresent</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>IfNotPresent</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.imagePullPolicy</code> allows developers to specify the pull policy for chaos-runner. Set to <code>Always</code> during debug/test.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.imagePullSecrets</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify imagePullSecrets for the ChaosRunner</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.LocalObjectReference)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.imagePullSecrets</code> allows developers to specify the <code>imagePullSecret</code> name for ChaosRunner. </td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.runnerannotation</code></td>
</tr>
<tr>
<th>Description</th>
<td>Annotations that need to be provided to the pod that will be created (runner pod)</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td> <i>user-defined</i> (type: map[string]string) </td>
</tr>
<tr>
<th>Default</th>
<td> n/a </td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.runnerannotation</code> allows developers to specify the custom annotations for the runner pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.args</code></td>
</tr>
<tr>
<th>Description</th>
<td>Specify the args for the ChaosRunner Pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.args</code> allows developers to specify their own debug runner args.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.command</code></td>
</tr>
<tr>
<th>Description</th>
<td>Specify the commands for the ChaosRunner Pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.components.runner.command</code> allows developers to specify their own debug runner commands.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.configMaps</code></td>
</tr>
<tr>
<th>Description</th>
<td>Configmaps passed to the chaos runner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.configMaps</code> provides for a means to insert config information into the runner pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.secrets</code></td>
</tr>
<tr>
<th>Description</th>
<td>Kubernetes secrets passed to the chaos runner pod.</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.secrets</code> provides for a means to push secrets (typically project ids, access credentials etc.,) into the chaos runner pod. These are especially useful in case of platform-level/infra-level chaos experiments. </td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.nodeSelector</code></td>
</tr>
<tr>
<th>Description</th>
<td>Node selectors for the runner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td>Labels in the form of label key=value</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.nodeSelector</code> contains the labels of the node on which the runner pod should be scheduled. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.components.runner.tolerations</code></td>
</tr>
<tr>
<th>Description</th>
<td>Toleration for the runner pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.Toleration)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.components.runner.tolerations</code> provides tolerations for the runner pod so that it can be scheduled on the respective tainted node. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
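An illustrative ChaosRunner component section combining the fields above (the registry secret, annotation, and scheduling constraints are placeholders):

```yaml
spec:
  components:
    runner:
      image: litmuschaos/chaos-runner:latest
      imagePullPolicy: Always
      imagePullSecrets:
        - name: regcred                        # placeholder registry secret
      runnerannotation:
        iam.amazonaws.com/role: chaos-role     # example custom annotation
      nodeSelector:
        kubernetes.io/hostname: worker-1       # pin runner to a specific node
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "chaos"
          effect: "NoSchedule"
```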
## Experiment Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].name</code></td>
</tr>
<tr>
<th>Description</th>
<td>Name of the chaos experiment CR</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].name</code> specifies the chaos experiment to be executed by the ChaosOperator.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.env</code></td>
</tr>
<tr>
<th>Description</th>
<td>Environment variables passed to the chaos experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: {'{'}name: string, value: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.env</code> specifies the array of tunables passed to the experiment pods. Though the field is optional from a chaosengine definition viewpoint, it is almost always necessary to provide experiment tunables via this definition. Some of the env variables override the defaults in the experiment CR, while others are mandatory additions filling in for placeholders/empty values in the experiment CR. For a list of "mandatory" & "optional" env for an experiment, refer to the respective experiment documentation.</td>
</tr>
</table>
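For example, tunables such as the chaos duration can be passed to an experiment via `env` (the variable names shown are typical of the generic experiments; always check the respective experiment documentation):

```yaml
spec:
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # overrides the default duration set in the chaosexperiment CR
            - name: TOTAL_CHAOS_DURATION
              value: "30"
            - name: CHAOS_INTERVAL
              value: "10"
```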
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.configMaps</code></td>
</tr>
<tr>
<th>Description</th>
<td>Configmaps passed to the chaos experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td> <i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.configMaps</code> provides for a means to insert config information into the experiment. The configmaps definition is validated for correctness and those specified are checked for availability (in the cluster/namespace) before being mounted into the experiment pods.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.secrets</code></td>
</tr>
<tr>
<th>Description</th>
<td>Kubernetes secrets passed to the chaos experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: {'{'}name: string, mountPath: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.secrets</code> provides for a means to push secrets (typically project ids, access credentials etc.,) into the experiment pods. These are especially useful in case of platform-level/infra-level chaos experiments. The secrets definition is validated for correctness and those specified are checked for availability (in the cluster/namespace) before being mounted into the experiment pods.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.experimentImage</code></td>
</tr>
<tr>
<th>Description</th>
<td>Override the image of the chaos experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i> string </i></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.experimentImage</code> overrides the experiment image for the chaosexperiment.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.experimentImagePullSecrets</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify imagePullSecrets for the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.LocalObjectReference)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.experimentImagePullSecrets</code> allows developers to specify the <code>imagePullSecret</code> name for the ChaosExperiment. </td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.nodeSelector</code></td>
</tr>
<tr>
<th>Description</th>
<td>Provide the node selector for the experiment pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i> Labels in the form of label key=value</i></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.nodeSelector</code> contains the labels of the node on which the experiment pod should be scheduled. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.statusCheckTimeouts</code></td>
</tr>
<tr>
<th>Description</th>
<td>Provides the timeout and retry values for the status checks. Defaults to 180s & 90 retries (2s per retry)</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i> It contains values in the form {'{'}delay: int, timeout: int{'}'} </i></td>
</tr>
<tr>
<th>Default</th>
<td><i>delay: 2s and timeout: 180s</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.statusCheckTimeouts</code> overrides the status check timeouts defined inside the chaosexperiments. It contains the timeout & delay in seconds.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.resources</code></td>
</tr>
<tr>
<th>Description</th>
<td>Specify the resource requirements for the ChaosExperiment pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: corev1.ResourceRequirements)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.resources</code> contains the resource requirements for the ChaosExperiment Pod, where we can provide resource requests and limits for the pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.experimentannotation</code></td>
</tr>
<tr>
<th>Description</th>
<td>Annotations that need to be provided to the pod that will be created (experiment pod)</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td> <i>user-defined</i> (type: label key=value) </td>
</tr>
<tr>
<th>Default</th>
<td> n/a </td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.experimentannotation</code> allows developers to specify custom annotations for the experiment pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.components.tolerations</code></td>
</tr>
<tr>
<th>Description</th>
<td>Toleration for the experiment pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: []corev1.Toleration)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>experiment[].spec.components.tolerations</code> provides tolerations for the experiment pod so that it can be scheduled on the respective tainted node. Typically used in case of infra/node level chaos.</td>
</tr>
</table>
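A sketch of the scheduling- and resource-related experiment component fields described above (label values and resource figures are illustrative):

```yaml
spec:
  experiments:
    - name: node-cpu-hog
      spec:
        components:
          nodeSelector:
            kubernetes.io/hostname: worker-1   # schedule on a specific node
          tolerations:
            - key: "dedicated"
              operator: "Equal"
              value: "chaos"
              effect: "NoSchedule"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          statusCheckTimeouts:
            delay: 2      # seconds between retries
            timeout: 180  # overall status check timeout in seconds
```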
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiments[].spec.probe</code></td>
</tr>
<tr>
<th>Description</th>
<td> Declarative way to define the chaos hypothesis</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td> <i>user-defined</i> </td>
</tr>
<tr>
<th>Default</th>
<td> n/a </td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.probe</code> allows developers to specify the chaos hypothesis. It supports three types: <code>cmdProbe</code>, <code>k8sProbe</code>, <code>httpProbe</code>. For more details <a href="https://docs.litmuschaos.io/docs/litmus-probe/">refer</a></td>
</tr>
</table>
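As an illustration, an `httpProbe` validating that an application endpoint keeps returning 200 during chaos could be declared as below (the URL and run properties are placeholders; refer to the probe documentation linked above for the authoritative schema):

```yaml
spec:
  experiments:
    - name: pod-delete
      spec:
        probe:
          - name: check-frontend-access-url
            type: httpProbe
            httpProbe/inputs:
              url: http://frontend-service.default.svc:80
              expectedResponseCode: "200"
            mode: Continuous
            runProperties:
              probeTimeout: 5
              interval: 2
              retry: 1
```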
---
id: chaosexperiment
title: Constructing the ChaosExperiment
sidebar_label: ChaosExperiment
---
---
The ChaosExperiment CR is the heart of Litmus and contains the low-level execution information. ChaosExperiments serve as off-the-shelf templates that one needs to "pull"
(install on the cluster) before including them as part of a chaos run against any target applications (the binding being defined in the [ChaosEngine](https://docs.litmuschaos.io/docs/chaosengine/)). The experiments are installed on the cluster as Kubernetes custom resources and are designed to hold granular
details of the experiment such as the image, library, necessary permissions, and chaos parameters (set to their default values). Most of the ChaosExperiment parameters are essentially tunables that can be overridden from the ChaosEngine resource. The ChaosExperiment CRs are the primary artifacts hosted on the [ChaosHub](https://hub.litmuschaos.io).
This section describes the fields in the ChaosExperiment spec and the possible values that can be set against the same.
## Scope Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.scope</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the scope of the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>Namespaced</code>, <code>Cluster</code></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i> (depends on experiment type)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.scope</code> specifies the scope of the experiment. It can be <code>Namespaced</code> scope for pod level experiments and <code>Cluster</code> for the experiments having a cluster wide impact.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.permissions</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the minimum permission to run the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: list)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.permissions</code> specify the minimum permission that is required to run the ChaosExperiment. It also helps to estimate the blast radius for the ChaosExperiment.</td>
</tr>
</table>
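For example, a namespaced pod-level experiment might declare its scope and minimum permissions like this (the verbs/resources shown are indicative of a pod-delete style experiment):

```yaml
spec:
  definition:
    scope: Namespaced
    permissions:
      # permissions to target application pods and record chaos events
      - apiGroups: [""]
        resources: ["pods", "events"]
        verbs: ["create", "list", "get", "patch", "update", "delete"]
      # permissions to manage the experiment job itself
      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["create", "list", "get", "delete"]
```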
## Component Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.image</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the image to run the ChaosExperiment </td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i> (refer Notes)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.image</code> allows the developers to specify their experiment images. Typically set to the Litmus <code>go-runner</code> or the <code>ansible-runner</code>. This feature of the experiment enables BYOC (BringYourOwnChaos), where developers can implement their own variants of a standard chaos experiment</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.imagePullPolicy</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag that helps the developers to specify imagePullPolicy for the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><code>IfNotPresent</code>, <code>Always</code> (type: string)</td>
</tr>
<tr>
<th>Default</th>
<td><code>Always</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.imagePullPolicy</code> allows developers to specify the pull policy for ChaosExperiment image. Set to <code>Always</code> during debug/test</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.args</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the entrypoint for the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type:list of string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.args</code> specifies the entrypoint for the ChaosExperiment. It depends on the language used in the experiment. For litmus-go, the image contains a single binary covering all experiments, and the experiment to run is selected via the <code>-name</code> flag (<code>-name (exp-name)</code>).</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.command</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the shell on which the ChaosExperiment will execute</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: list of string).</td>
</tr>
<tr>
<th>Default</th>
<td><code>/bin/bash</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.command</code> specifies the shell used to run the experiment. <code>/bin/bash</code> is the most commonly used shell.</td>
</tr>
</table>
## Experiment Tunables Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.env</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify env used for ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: {'{'}name: string, value: string{'}'})</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.env</code> specifies the array of tunables passed to the experiment pods as environment variables. It is used to manage the experiment execution. We can set the default values for all the variables (tunable) here which can be overridden by ChaosEngine from <code>.spec.experiments[].spec.components.env</code> if required. To know about the variables that need to be overridden check the list of "mandatory" & "optional" env for an experiment as provided within the respective experiment documentation.</td>
</tr>
</table>
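Combining the component and tunables fields, the executable portion of a litmus-go based ChaosExperiment CR typically looks like this sketch (the image tag and env defaults are illustrative):

```yaml
spec:
  definition:
    image: "litmuschaos/go-runner:latest"
    imagePullPolicy: Always
    args:
      - -c
      - ./experiments -name pod-delete
    command:
      - /bin/bash
    env:
      # defaults that a ChaosEngine may override via
      # .spec.experiments[].spec.components.env
      - name: TOTAL_CHAOS_DURATION
        value: "15"
      - name: CHAOS_INTERVAL
        value: "5"
```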
## Configuration Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.securityContext.containerSecurityContext.privileged</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the security context for the ChaosExperiment pod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>true, false</i> (type:bool)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.definition.securityContext.containerSecurityContext.privileged</code> specifies the securityContext params for the experiment container.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.labels</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the label for the ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type:map[string]string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.labels</code> allow developers to specify the ChaosPod label for an experiment. </td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.securityContext.podSecurityContext</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify security context for ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type:corev1.PodSecurityContext)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.securityContext.podSecurityContext</code> allows the developers to specify the security context for the ChaosPod which applies to all containers inside the Pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.configmaps</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the configmap for ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.configmaps</code> allows the developers to mount the ConfigMap volume into the experiment pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.secrets</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the secrets for ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.secrets</code> specifies the secret data to be passed to the ChaosPod. The secrets typically contain confidential information like credentials.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.experimentannotations</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the custom annotation to the ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type:map[string]string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.experimentannotations</code> allows the developer to specify the Custom annotation for the chaos pod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.hostFileVolumes</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the host file volumes to the ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type:map[string]string)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.hostFileVolumes</code> allows the developer to specify the host file volumes to the ChaosPod.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.definition.hostPID</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the host PID for the ChaosPod</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>true, false</i> (type:bool)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>.spec.definition.hostPID</code> allows the developer to specify the host PID for the ChaosPod. </td>
</tr>
</table>
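An illustrative configuration section for an experiment that needs privileged access and the host PID namespace, as some container-runtime level experiments do (the volume name and socket path are placeholders):

```yaml
spec:
  definition:
    labels:
      name: container-kill
    securityContext:
      podSecurityContext: {}
      containerSecurityContext:
        privileged: true
    hostPID: true
    hostFileVolumes:
      # placeholder host volume exposing the container runtime socket
      - name: socket-path
        mountPath: /run/containerd/containerd.sock
        nodePath: /run/containerd/containerd.sock
```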
---
id: chaoshub
title: Using and contributing to ChaosHub
sidebar_label: ChaosHub
---
---
**Important links**
Chaos Hub is maintained at https://hub.litmuschaos.io
To contribute new ChaosCharts visit: https://github.com/litmuschaos/chaos-charts
**Introduction**
Litmus chaos hub is a place where the Chaos Engineering community members publish their chaos experiments. A set of related chaos experiments are bundled into a `Chaos Chart`. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. The following chaos experiments are supported under the Generic Chaos Chart:
| Experiment name | Description | User guide link |
| ----------------------- | ------------------------------------------------------------------------ | ----------------------------------------------------- |
| Container Kill | Kills the container in the application pod | [container-kill](container-kill.md) |
| Pod Delete | Deletes the application pod | [pod-delete](pod-delete.md) |
| Pod Network Latency | Injects network latency into the pod | [pod-network-latency](pod-network-latency.md) |
| Pod Network Loss | Injects network loss into the pod | [pod-network-loss](pod-network-loss.md) |
| Node CPU Hog | Exhaust CPU resources on the Kubernetes Node | [node-cpu-hog](node-cpu-hog.md) |
| Node Memory Hog | Exhaust Memory resources on the Kubernetes Node | [node-memory-hog](node-memory-hog.md) |
| Disk Fill               | Fills up the ephemeral storage of a resource                             | [disk-fill](disk-fill.md)                             |
| Disk Loss | External disk loss from the node | [disk-loss](disk-loss.md) |
| Node Drain | Drains the node where application pod is scheduled | [node-drain](node-drain.md) |
| Pod CPU Hog | Consumes CPU resources on the application container | [pod-cpu-hog](pod-cpu-hog.md) |
| Pod Memory Hog | Consumes Memory resources on the application container | [pod-memory-hog](pod-memory-hog.md) |
| Pod Network Corruption | Injects Network Packet Corruption into Application Pod | [pod-network-corruption](pod-network-corruption.md) |
| Kubelet Service Kill | Kills the kubelet service on the application node | [kubelet-service-kill](kubelet-service-kill.md) |
| Docker Service Kill | Kills the docker service on the application node | [docker-service-kill](docker-service-kill.md) |
| Node Taint | Taints the node where application pod is scheduled | [node-taint](node-taint.md) |
| Pod Autoscaler          | Scales the application replicas and tests node autoscaling on the cluster | [pod-autoscaler](pod-autoscaler.md)                  |
| Pod Network Duplication | Injects Network Packet Duplication into Application Pod | [pod-network-duplication](pod-network-duplication.md) |
| Pod IO Stress | Injects IO stress resources on the application container | [pod-io-stress](pod-io-stress.md) |
| Node IO stress | Injects IO stress resources on the application node | [node-io-stress](node-io-stress.md) |
### Application Chaos
While chaos experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude whether the induced chaos found a weakness in a given application. The application-specific chaos experiments are built with checks on _pre-conditions_ and expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the actual outcome with the expected outcome.
<div class="danger">
<strong>NOTE:</strong> If the result of the chaos experiment is `pass`, it means that the application is resilient to that chaos.
</div>
#### Benefits of contributing an application chaos experiment
Application developers write negative tests in their CI pipelines to test the resiliency of their applications. These negative tests can be converted into Litmus Chaos Experiments and contributed to ChaosHub, so that users of the application can run them in staging/pre-production/production environments to check its resilience. Application environments vary considerably from where they are tested (CI pipelines) to where they are deployed (production). Hence, running the same chaos tests in the user's environment helps determine the weaknesses of the deployment, and fixing such weaknesses leads to increased resilience.
The following Application Chaos experiments are available on ChaosHub:
| Application | Description | Chaos Experiments |
| ----------- | ------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OpenEBS | Container Attached Storage for Kubernetes | [openebs-pool-pod-failure](openebs-pool-pod-failure.md)<br/>[openebs-pool-container-failure](openebs-pool-container-failure.md)<br/>[openebs-target-pod-failure](openebs-target-pod-failure.md)<br/>[openebs-target-container-failure](openebs-target-container-failure.md)<br/>[openebs-target-network-delay](openebs-target-network-delay.md)<br/>[openebs-target-network-loss](openebs-target-network-loss.md) <br/>[openebs-control-plane-chaos](openebs-control-plane-chaos.md) <br/>[openebs-nfs-provisioner-kill](openebs-nfs-provisioner-kill.md) <br/>[openebs-target-network-loss](openebs-target-network-loss.md) <br/>[openebs-pool-disk-loss](openebs-pool-disk-loss.md) <br/>[openebs-pool-network-loss](openebs-pool-network-loss.md) <br/>[openebs-pool-network-delay](openebs-pool-network-delay.md) |
| Kafka | Open-source stream processing software | [kafka-broker-pod-failure](kafka-broker-pod-failure.md)<br/>[kafka-broker-disk-failure](kafka-broker-disk-failure.md)<br/> |
| CoreDns | CoreDNS is a fast and flexible DNS server that chains plugins | [coredns-pod-delete](coredns-pod-delete.md) |
| Cassandra | Cassandra is an opensource distributed database | [cassandra-pod-delete](cassandra-pod-delete.md) |
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Since the management of platform resources varies significantly across providers, Chaos Charts may be maintained separately for each platform (for example, AWS, GCP, Azure, etc.)
The following Platform Chaos experiments are available on ChaosHub:
| Platform | Description | Chaos Experiments |
| -------- | ------------------------------------------- | --------------------------------------------------------------------------- |
| AWS | Amazon Web Services platform. Includes EKS. | [ec2-terminate](chaostoolkit-aws-ec2-terminate.md), [ebs-loss](ebs-loss.md) |

---
id: chaosresult
title: Constructing the ChaosResult
sidebar_label: ChaosResult
---
The ChaosResult resource holds the results of a ChaosExperiment, with a namespace scope. It is created or updated at runtime by the experiment itself, and holds important information such as the ChaosEngine reference, the experiment state, the verdict of the experiment (on completion), and salient application/result attributes. It also serves as a source for metrics collection. It is updated/patched with the status of the experiment run, and is not removed as part of the default cleanup procedures, to allow for extended reference.
This section describes the fields/details provided by the ChaosResult spec.
## Component Details
<table>
<tr>
<th>Field</th>
<td><code>.spec.engine</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to hold the ChaosEngine name for the experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td>n/a (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.engine</code> holds the engine name for the current course of the experiment.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.experiment</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to hold the ChaosExperiment name which induces chaos.</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td>n/a (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.spec.experiment</code> holds the ChaosExperiment name for the current course of the experiment.</td>
</tr>
</table>
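Taken together, the component fields above appear in a ChaosResult manifest along these lines (a minimal sketch; the engine and experiment names are placeholders for illustration):

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosResult
metadata:
  # a ChaosResult is typically named after its engine and experiment
  name: engine-nginx-pod-delete
  namespace: default
spec:
  engine: engine-nginx # ChaosEngine driving this run (placeholder name)
  experiment: pod-delete # ChaosExperiment that induces the chaos
```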
## Status Details
<table>
<tr>
<th>Field</th>
<td><code>.status.experimentstatus.failstep</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the failure step of the ChaosExperiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>n/a</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.experimentstatus.failstep</code> shows the step at which the experiment failed, which helps in faster debugging of failures in the experiment execution.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.phase</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the current phase of the experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>Awaited,Running,Completed,Aborted</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.phase</code> shows the current phase of the experiment and is updated as the experiment proceeds. If the experiment is aborted, the status will be <code>Aborted</code>.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.probesuccesspercentage</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the probe success percentage</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>1 to 100</i> (type: int)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probesuccesspercentage</code> shows the probe success percentage, which is the ratio of successful checks to total probes.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.verdict</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the verdict of the experiment.</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>Awaited,Pass,Fail,Stopped</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.verdict</code> shows the verdict of the experiment. It is <code>Awaited</code> while the experiment is in progress and ends up as <code>Pass</code> or <code>Fail</code> according to the experiment result.</td>
</tr>
</table>
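As a sketch mirroring the field paths documented above, the status of a completed run might be patched along these lines (all values are illustrative):

```yaml
status:
  experimentstatus:
    failstep: N/A # no step failed in this run
  phase: Completed # Awaited -> Running -> Completed (or Aborted)
  probesuccesspercentage: "100"
  verdict: Pass # Awaited while in progress; Pass/Fail/Stopped on completion
```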
## Probe Details
<table>
<tr>
<th>Field</th>
<td><code>.status.probestatus.name</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the name of probe used in the experiment</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>n/a</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probestatus.name</code> shows the name of the probe used in the experiment.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.probestatus.status.continuous</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the result of probe in continuous mode</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>Awaited,Passed,Better Luck Next Time</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probestatus.status.continuous</code> helps to get the result of the probe in the continuous mode. The httpProbe is better used in the Continuous mode.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.probestatus.status.postchaos</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the probe result post chaos</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>Awaited,Passed,Better Luck Next Time</i> (type:map[string]string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probestatus.status.postchaos</code> shows the result of probe setup in EOT mode executed at the End of Test as a post-chaos check. </td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.probestatus.status.prechaos</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the probe result pre chaos</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><i>Awaited,Passed,Better Luck Next Time</i> (type:string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probestatus.status.prechaos</code> shows the result of probe setup in SOT mode executed at the Start of Test as a pre-chaos check.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.status.probestatus.type</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to show the type of probe used</td>
</tr>
<tr>
<th>Range</th>
<td>
<i>HTTPProbe, K8sProbe, CmdProbe</i> (type: string)</td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>.status.probestatus.type</code> shows the type of probe used.</td>
</tr>
</table>
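Mirroring the probe field paths above, a probe outcome might be recorded along these lines (a sketch; the probe name is a placeholder, and the exact nesting should be verified against the installed CRD):

```yaml
status:
  probestatus:
    name: check-frontend-access # placeholder probe name
    type: HTTPProbe
    status:
      prechaos: Passed # SOT (pre-chaos) check
      continuous: Passed # checks run throughout the chaos window
      postchaos: Passed # EOT (post-chaos) check
```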

---
id: chaosschedule
title: Constructing the ChaosSchedule
sidebar_label: ChaosSchedule (alpha)
---
The ChaosSchedule is a user-facing chaos custom resource with a namespace scope. It is designed to hold information
about how the ChaosEngines are to be scheduled according to the specified template, and it schedules ChaosEngine instances accordingly.
This section describes the fields in the ChaosSchedule spec and the possible values that can be set against the same.
<font style={{fontFamily:"verdana",color:"yellow"}}>Note</font> - This is the alpha version of the ChaosScheduler. An enhanced version may be released in the future based on user reviews.
## Schedule Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.now</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to control the type of scheduling</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><code>true</code>, <code>false</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>n/a</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>now</code> in <code>.spec.schedule</code> ensures immediate creation of the ChaosEngine, i.e., immediate injection of chaos.</td>
</tr>
</table>
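For instance, a schedule that injects chaos immediately can be sketched as follows (the resource name is a placeholder, and the engine template is elided):

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosSchedule
metadata:
  name: schedule-nginx # placeholder name
spec:
  schedule:
    now: true # create the ChaosEngine immediately
  engineTemplateSpec:
    # the ChaosEngine spec to instantiate (elided; see Engine Specification)
```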
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.once.executionTime</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify execution timestamp at which chaos is injected, when the policy is <code>once</code>. The chaosengine is created exactly at this timestamp.</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: UTC Timeformat)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td><code>.spec.schedule.once</code> refers to a single-instance execution of chaos at a particular timestamp specified by <code>.spec.schedule.once.executionTime</code></td>
</tr>
</table>
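The `once` policy can be sketched as follows (the timestamp is illustrative):

```yaml
spec:
  schedule:
    once:
      # UTC timestamp at which the single ChaosEngine is created
      executionTime: "2021-01-15T18:40:00Z"
```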
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.repeat.timeRange.startTime</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify start timestamp of the range within which chaos is injected, when the policy is <code>repeat</code>. The chaosengine is not created before this timestamp.</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: UTC Timeformat)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>When <code>startTime</code> is specified against the policy <code>repeat</code>, ChaosEngine will not be formed before this time, no matter when it was created.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.repeat.timeRange.endTime</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify end timestamp of the range within which chaos is injected, when the policy is <code>repeat</code>. The chaosengine is not created after this timestamp.</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: UTC Timeformat)</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>When <code>endTime</code> is specified against the policy <code>repeat</code>, ChaosEngine will not be formed after this time.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.repeat.properties.minChaosInterval</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the minimum interval between two chaosengines to be formed. </td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)(pattern: "{'{'}number{'}'}m", "{'{'}number{'}'}h").</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>minChaosInterval</code> in the spec specifies the minimum interval that must elapse between the creation of two successive ChaosEngines while the schedule repeats.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.repeat.workDays.includedDays</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the days at which chaos is allowed to take place</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>user-defined</i> (type: string)(pattern: [{'{'}day_name{'}'},{'{'}day_name{'}'}...]).</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td> The <code>includedDays</code> in the spec specifies a comma-separated list
of days of the week on which chaos is allowed to take place. {'{'}day_name{'}'} is to
be specified with the first 3 letters of the name of the day, such as
<code>Mon</code>, <code>Tue</code>, etc.</td>
</tr>
</table>
<table>
<tr>
<th>Field</th>
<td><code>.spec.schedule.repeat.workHours.includedHours</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the hours at which chaos is allowed to take place</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>{'{'}hour_number{'}'} will range from 0 to 23</i> (type: string)(pattern: {'{'}hour_number{'}'}-{'{'}hour_number{'}'}).</td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>includedHours</code> in the spec specifies a range of hours of the day during which chaos is allowed to take place. The 24-hour format is followed.</td>
</tr>
</table>
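Putting the `repeat` fields together, a schedule that injects chaos at a minimum interval, on selected days and hours within a time range, can be sketched as follows (all values are illustrative):

```yaml
spec:
  schedule:
    repeat:
      timeRange:
        startTime: "2021-01-10T05:00:00Z" # no ChaosEngine before this
        endTime: "2021-01-17T05:00:00Z" # no ChaosEngine after this
      properties:
        minChaosInterval: "2h" # at least 2 hours between ChaosEngines
      workHours:
        includedHours: 0-12 # 24-hour format
      workDays:
        includedDays: "Mon,Tue,Fri"
```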
## Engine Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.engineTemplateSpec</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to specify the spec of the ChaosEngine to be formed</td>
</tr>
<tr>
<th>Type</th>
<td>Mandatory</td>
</tr>
<tr>
<th>Range</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Default</th>
<td><i>n/a</i></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>engineTemplateSpec</code> is the ChaosEngineSpec of ChaosEngine that is to be formed.</td>
</tr>
</table>
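The `engineTemplateSpec` carries a regular ChaosEngine spec. A minimal sketch, assuming an nginx deployment and a pod-delete experiment (the label and ServiceAccount names are placeholders):

```yaml
spec:
  engineTemplateSpec:
    appinfo:
      appns: default
      applabel: app=nginx # placeholder label
      appkind: deployment
    chaosServiceAccount: pod-delete-sa # placeholder ServiceAccount
    experiments:
      - name: pod-delete
```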
## State Specification
<table>
<tr>
<th>Field</th>
<td><code>.spec.scheduleState</code></td>
</tr>
<tr>
<th>Description</th>
<td>Flag to control the ChaosSchedule state</td>
</tr>
<tr>
<th>Type</th>
<td>Optional</td>
</tr>
<tr>
<th>Range</th>
<td><code>active</code>, <code>halt</code>, <code>complete</code></td>
</tr>
<tr>
<th>Default</th>
<td><code>active</code></td>
</tr>
<tr>
<th>Notes</th>
<td>The <code>scheduleState</code> is the current state of the ChaosSchedule. If the schedule is running, its state will be <code>active</code>; if the schedule is halted, its state will be <code>halt</code>; and if the schedule is completed, its state will be <code>complete</code>.</td>
</tr>
</table>
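A running schedule can be paused or resumed by updating this field, e.g., to halt it, the spec can be patched as in this sketch:

```yaml
spec:
  scheduleState: halt # set back to active to resume the schedule
```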
