Compare commits

...

43 Commits

Author SHA1 Message Date
litmusbot f4dfe58d78 16935505557: version upgraded for chaos-charts 2025-08-13 11:12:10 +00:00
Neelanjan Manna 92e2120b9a
Merge pull request #651 from litmuschaos/CHAOS-9406
feat: adds pod-network-rate-limit fault chart
2025-08-13 16:41:45 +05:30
neelanjan00 caeb1a389d
feat: adds pod-network-rate-limit fault
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2025-07-28 16:16:21 +05:30
litmusbot 55299cbc7a 14635851373: version upgraded for chaos-charts 2025-04-24 07:25:06 +00:00
Jongwoo Han 44ddb75e96
feat: Add a rds-instance-stop chaos fault (#635)
* feat: Add a rds-instance-stop chaos fault

Signed-off-by: Jongwoo Han <jongwooo.han@gmail.com>

---------

Signed-off-by: Jongwoo Han <jongwooo.han@gmail.com>
2025-04-24 12:54:44 +05:30
litmusbot 160f11c322 10297656644: version upgraded for chaos-charts 2024-08-08 07:16:01 +00:00
Namkyu Park 682949e5c2
fix: update k6 engine (#633)
Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>
2024-08-08 12:45:34 +05:30
litmusbot b93ed83cf0 8873345791: version upgraded for chaos-charts 2024-04-29 04:32:33 +00:00
Namkyu Park 13119089c7
feat: Add a k6-loadgen chaos fault (#622)
* feat: add load dir

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* fix: chore

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* feat: add secret

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* fix: hostpath to secret

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* fix: fixed scope

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* fix: implement feedbacks

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* fix: chore

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* Update faults/load/k6-loadgen/engine.yaml

Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* Update faults/load/k6-loadgen/fault.yaml

Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

* Update faults/load/k6-loadgen/engine.yaml

Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>

---------

Signed-off-by: namkyu1999 <lak9348@konkuk.ac.kr>
Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
2024-04-29 10:02:08 +05:30
litmusbot a26b30e112 8685238135: version upgraded for chaos-charts 2024-04-15 07:37:48 +00:00
Shubham Chaudhary fe30b73860
fix(disk-fill): Add the container runtime and socketPath env (#628)
Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
2024-04-15 13:07:27 +05:30
litmusbot 08d2a6e574 7529396473: version upgraded for chaos-charts 2024-01-15 13:15:35 +00:00
Shubham Chaudhary 625df46807
Integrate the litmus images with scarf gateway (#620)
Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
2024-01-15 18:45:11 +05:30
litmusbot 6d0fb711a2 6403752352: version upgraded for chaos-charts 2023-10-04 08:54:25 +00:00
Neelanjan Manna 19a05d9515
fix: Updates AWS CSVs (#614)
* updates aws csvs

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

* updates experiment labels and chaosengine appns

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

* updates chaosengine namespaces

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

---------

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-10-04 14:23:55 +05:30
Neelanjan Manna d482aa76af
chore: Fix experiments charts for 3.0.0 (#613)
* adds metadata.annotation in chaosengine and replaces install-chaos-faults step to use artifacts.data.raw manifests

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

* fixes annotation -> annotations

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

* updates labels

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>

---------

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-10-04 10:37:55 +05:30
Neelanjan Manna c7f3d2683c
chore: Fix ChaosHub for Litmus 3.0 (#612)
* fixes broken links, adds missing icons and updates readme

Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-10-03 13:45:04 +05:30
litmusbot e846d97ea3 6354102712: version upgraded for chaos-charts 2023-09-29 16:06:05 +00:00
Neelanjan Manna aca9eadce0
replaces go-runner tags to litmuschaos/go-runner:latest (#611)
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-09-29 21:35:31 +05:30
Vedant Shrotria f993e37cdc
Merge pull request #609 from neelanjan00/add-maintainers-in-csv
chore: Adds maintainers in fault csv
2023-09-28 11:08:09 +05:30
neelanjan00 468dd4c429
adds maintainers in fault csv
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-09-27 20:20:27 +05:30
litmusbot 40891c5517 6273914149: version upgraded for chaos-charts 2023-09-22 11:56:11 +00:00
Saranya Jena 840fa6861b
Merge pull request #608 from litmuschaos/refactor-hub-3.0.0
Merge refactor hub 3.0.0 to master
2023-09-22 17:25:43 +05:30
Saranya-jena 71bc8e9848
updated github actions
Signed-off-by: Saranya-jena <saranya.jena@harness.io>
2023-09-22 16:38:23 +05:30
Saranya-jena 26c87550d7
updated the tags to latest
Signed-off-by: Saranya-jena <saranya.jena@harness.io>
2023-09-22 13:58:05 +05:30
Amit Kumar Das 123e7450ef
Add files via upload 2023-09-21 16:22:21 +08:00
Amit Kumar Das 3f2c0b0aeb
Add files via upload 2023-09-21 16:21:05 +08:00
Amit Kumar Das fb5c66d451
Add files via upload 2023-09-21 16:20:25 +08:00
Amit Kumar Das 5dc83f77fb
Add files via upload 2023-09-21 16:20:00 +08:00
Amit Kumar Das 3a56d69410
Add files via upload 2023-09-21 16:18:30 +08:00
Amit Kumar Das 0488947ce6
Add files via upload 2023-09-21 16:17:04 +08:00
Vedant Shrotria bd3e240e30
Merge pull request #606 from neelanjan00/upgrade-go-runner-tag
chore: Upgrade go-runner image tag to 3.0.0-beta10
2023-09-01 10:28:29 +05:30
neelanjan00 ff78e5b029
updates go-runner image tags to 3.0.0-beta10
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-08-31 14:35:52 +05:30
Vedant Shrotria baa27bbad2
Merge pull request #603 from neelanjan00/update-image-tags
chore: Update go-runner image tags to 3.0.0-beta3
2023-07-25 16:39:24 +05:30
neelanjan00 2cb4a10e69
updates go-runner image tags to 3.0.0-beta3
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-07-20 12:25:20 +05:30
neelanjan00 ec90d6e953
updates generateName to name in metadata
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-07-07 17:09:20 +05:30
neelanjan00 c01dfc2748
updates cleanup-chaos step name
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-07-07 17:03:40 +05:30
neelanjan00 294375af2b
updates experiment step name
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-07-07 17:00:41 +05:30
Vedant Shrotria 48a0779d8a
Merge pull request #601 from neelanjan00/refactor-hub
chore: Refactor hub for 3.0.0
2023-06-22 12:43:35 +05:30
neelanjan00 3e1cdecab0
resolves PR comments
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-06-22 12:32:25 +05:30
neelanjan00 a7f7d0fa19
removes charts and workflows dir
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-06-21 18:11:23 +05:30
neelanjan00 dff7caa36a
merges master
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-06-20 12:31:59 +05:30
neelanjan00 ae8467237a
refactors directory and file structure
Signed-off-by: neelanjan00 <neelanjan.manna@harness.io>
2023-06-05 13:15:56 +05:30
868 changed files with 20561 additions and 45899 deletions

.gitignore — new file (vendored), 215 lines

@ -0,0 +1,215 @@
# Created by https://www.toptal.com/developers/gitignore/api/git,visualstudiocode,goland+all,jetbrains+all,macos
# Edit at https://www.toptal.com/developers/gitignore?templates=git,visualstudiocode,goland+all,jetbrains+all,macos
### Git ###
# Created by git for backups. To disable backups in Git:
# $ git config --global mergetool.keepBackup false
*.orig
# Created by git when using merge tools for conflicts
*.BACKUP.*
*.BASE.*
*.LOCAL.*
*.REMOTE.*
*_BACKUP_*.txt
*_BASE_*.txt
*_LOCAL_*.txt
*_REMOTE_*.txt
### GoLand+all ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# AWS User-specific
.idea/**/aws.xml
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
# Gradle
.idea/**/gradle.xml
.idea/**/libraries
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
cmake-build-*/
# Mongo Explorer plugin
.idea/**/mongoSettings.xml
# File-based project format
*.iws
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# SonarLint plugin
.idea/sonarlint/
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
# Editor-based Rest Client
.idea/httpRequests
# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
### GoLand+all Patch ###
# Ignore everything but code style settings and run configurations
# that are supposed to be shared within teams.
.idea/*
!.idea/codeStyles
!.idea/runConfigurations
### JetBrains+all ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
# AWS User-specific
# Generated files
# Sensitive or high-churn files
# Gradle
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
# Mongo Explorer plugin
# File-based project format
# IntelliJ
# mpeltonen/sbt-idea plugin
# JIRA plugin
# Cursive Clojure plugin
# SonarLint plugin
# Crashlytics plugin (for Android Studio and IntelliJ)
# Editor-based Rest Client
# Android studio 3.1+ serialized cache file
### JetBrains+all Patch ###
# Ignore everything but code style settings and run configurations
# that are supposed to be shared within teams.
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### macOS Patch ###
# iCloud generated files
*.icloud
### VisualStudioCode ###
.vscode/
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
# Local History for Visual Studio Code
.history/
# Built Visual Studio Code Extensions
*.vsix
### VisualStudioCode Patch ###
# Ignore all local history of files
.history
.ionide
# End of https://www.toptal.com/developers/gitignore/api/git,visualstudiocode,goland+all,jetbrains+all,macos
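As a quick, optional sanity check (not part of the chart change itself), `git check-ignore -v` reports which of the patterns above matches a given path; the example paths below are purely illustrative:

```bash
# Print the matching .gitignore rule (source:line:pattern) for each path, if any.
git check-ignore -v .idea/workspace.xml .DS_Store out/report.txt
```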

README.md — 107 changed lines

@ -7,213 +7,212 @@
[![YouTube Channel](https://img.shields.io/badge/YouTube-Subscribe-red)](https://www.youtube.com/channel/UCa57PMqmz_j0wnteRa9nCaw)
<br><br>
This repository hosts the Litmus Chaos Charts. A set of related chaos experiments are bundled into a Chaos Chart. Chaos Charts are classified into the following categories.
This repository hosts the Litmus Chaos Charts. A set of related chaos faults are bundled into a Chaos Chart. Chaos Charts are classified into the following categories.
- [Generic Chaos](#generic-chaos)
- [Kubernetes Chaos](#kubernetes-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
### Generic Chaos
### Kubernetes Chaos
Chaos actions that apply to generic Kubernetes resources are classified into this category. Following chaos experiments are supported under Generic Chaos Chart
Chaos faults that apply to Kubernetes resources are classified in this category. Following chaos faults are supported for Kubernetes:
<table>
<tr>
<th> Experiment Name </th>
<th> Fault Name </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> Container Kill </td>
<td> Kill one container in the application pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/container-kill"> container-kill </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/container-kill"> container-kill </a></td>
<tr>
<tr>
<td> Disk Fill </td>
<td> Fill the Ephemeral Storage of the Pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/disk-fill"> disk-fill </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/disk-fill"> disk-fill </a></td>
<tr>
<tr>
<td> Docker Service Kill </td>
<td> Kill docker service of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/docker-service-kill"> docker-service-kill </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/docker-service-kill"> docker-service-kill </a></td>
<tr>
<tr>
<td> Kubelet Service Kill </td>
<td> Kill kubelet service of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/kubelet-service-kill"> kubelet-service-kill </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/kubelet-service-kill"> kubelet-service-kill </a></td>
<tr>
<tr>
<td> Node CPU Hog </td>
<td> Stress the cpu of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-cpu-hog"> node-cpu-hog </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-cpu-hog"> node-cpu-hog </a></td>
<tr>
<tr>
<td> Node Drain </td>
<td> Drain the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-drain"> node-drain </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-drain"> node-drain </a></td>
<tr>
<tr>
<td> Node IO Stress </td>
<td> Stress the IO of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-io-stress"> node-io-stress </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-io-stress"> node-io-stress </a></td>
<tr>
<tr>
<td> Node Memory Hog </td>
<td> Stress the memory of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-memory-hog"> node-memory-hog </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-memory-hog"> node-memory-hog </a></td>
<tr>
<tr>
<td> Node Restart </td>
<td> Restart the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-restart"> node-restart </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-restart"> node-restart </a></td>
<tr>
<tr>
<td> Node Taint </td>
<td> Taint the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/node-taint"> node-taint </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-taint"> node-taint </a></td>
<tr>
<tr>
<td> Pod Autoscaler </td>
<td> Scale the replicas of the target application </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-autoscaler"> pod-autoscaler </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-autoscaler"> pod-autoscaler </a></td>
<tr>
<tr>
<td> Pod CPU Hog </td>
<td> Stress the CPU of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-cpu-hog"> pod-cpu-hog </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-cpu-hog"> pod-cpu-hog </a></td>
<tr>
<tr>
<td> Pod Delete </td>
<td> Delete the target pods </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-delete"> pod-delete </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-delete"> pod-delete </a></td>
<tr>
<tr>
<td> Pod DNS Spoof </td>
<td> Spoof dns requests to desired target hostnames </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-dns-spoof"> pod-dns-spoof </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-dns-spoof"> pod-dns-spoof </a></td>
<tr>
<tr>
<td> Pod DNS Error </td>
<td> Error the dns requests of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-dns-error"> pod-dns-error </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-dns-error"> pod-dns-error </a></td>
<tr>
<tr>
<td> Pod IO Stress </td>
<td> Stress the IO of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-io-stress"> pod-io-stress </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-io-stress"> pod-io-stress </a></td>
<tr>
<tr>
<td> Pod Memory Hog </td>
<td> Stress the memory of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-memory-hog"> pod-memory-hog </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-memory-hog"> pod-memory-hog </a></td>
<tr>
<tr>
<td> Pod Network Latency </td>
<td> Induce the network latency in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-network-latency"> pod-network-latency </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-latency"> pod-network-latency </a></td>
<tr>
<tr>
<td> Pod Network Corruption </td>
<td> Induce the network packet corruption in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-network-corruption"> pod-network-corruption </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-corruption"> pod-network-corruption </a></td>
<tr>
<tr>
<td> Pod Network Duplication </td>
<td> Induce the network packet duplication in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-network-duplication"> pod-network-duplication </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-duplication"> pod-network-duplication </a></td>
<tr>
<tr>
<td> Pod Network Loss </td>
<td> Induce the network loss in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-network-loss"> pod-network-loss </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-loss"> pod-network-loss </a></td>
<tr>
<tr>
<td> Pod Network Partition </td>
<td> Disrupt network connectivity to kubernetes pods </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/pod-network-partition"> pod-network-partition </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-partition"> pod-network-partition </a></td>
<tr>
</table>
### Application Chaos
While Chaos Experiments under the Generic category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the chaos induced found a weakness in a given application. The application specific chaos experiments are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos experiment is determined by matching the outcome with the expected outcome.
While chaos faults under the Kubernetes category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the induced chaos found a weakness in a given application. The application specific chaos faults are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos faults is determined by matching the outcome with the expected outcome.
<table>
<tr>
<th> Experiment Name </th>
<th> Fault Category </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> OpenEBS Experiments </td>
<td> Injects faults in OpenEBS tool </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/openebs"> OpenEBS experiments</a></td>
<td> Spring Boot Faults </td>
<td> Injects faults in Spring Boot applications </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/spring-boot"> Spring Boot Faults</a></td>
<tr>
</table>
### Platform Chaos
Chaos experiments that inject chaos into the platform resources of Kubernetes are classified into this category. Management of platform resources vary significantly from each other, Chaos Charts may be maintained separately for each platform (For example, AWS, GCP, Azure, VMWare etc)
Chaos faults that inject chaos into the platform and infrastructure resources are classified into this category. Management of platform resources vary significantly from each other, Chaos Charts may be maintained separately for each platform (For example: AWS, GCP, Azure, VMWare etc.)
Following Platform Chaos experiments are available on ChaosHub
Following chaos faults are classified in this category:
<table>
<tr>
<th> Experiment Name </th>
<th> Fault Category </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> AWS Experiments </td>
<td> AWS Faults </td>
<td> AWS Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/kube-aws"> AWS Experiments </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/aws"> AWS Faults </a></td>
<tr>
<tr>
<td> Azure Experiments </td>
<td> Azure Faults </td>
<td> Azure Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/azure"> Azure Experiments </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/azure"> Azure Faults </a></td>
<tr>
<tr>
<td> GCP Experiments </td>
<td> GCP Faults </td>
<td> GCP Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/gcp"> GCP Experiments </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/gcp"> GCP Faults </a></td>
<tr>
<tr>
<td> VMWare Experiments </td>
<td> VMWare Faults </td>
<td> VMWare Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/vmware"> VMWare Experiments </a></td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/vmware"> VMWare Faults </a></td>
<tr>
</table>
## Installation Steps for Chart Releases
## Installation Steps for Chart Releases
*Note: Supported from release 1.1.0*
*Note: Supported from release 3.0.0*
- To install the chaos experiments from a specific chart for a given release, execute the following commands
- To install the chaos faults from a specific chart for a given release, execute the following commands
with the desired `<release_version>`, `<chart_name>` & `<namespace>`
```bash
## downloads and unzips the released source
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/<release_version>.tar.gz)
## installs the chaosexperiment resources
## installs the chaosexperiment resources
find chaos-charts-<release_version> -name experiments.yaml | grep <chart-name> | xargs kubectl apply -n <namespace> -f
```
- For example, to install the *generic* experiment chart bundle for release *1.1.0*, in the *sock-shop* namespace, run:
```
- For example, to install the *Kubernetes* fault chart bundle for release *3.0.0*, in the *sock-shop* namespace, run:
```bash
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/1.1.0.tar.gz)
find chaos-charts-1.1.0 -name experiments.yaml | grep generic | xargs kubectl apply -n sock-shop -f
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/3.0.0.tar.gz)
find chaos-charts-3.0.0 -name experiments.yaml | grep kubernetes | xargs kubectl apply -n sock-shop -f
```
- If you would like to install a specific experiment, replace the `experiments.yaml` in the above command with the relative
path of the experiment manifest within the parent chart. For example, to install only the *pod-delete* experiment, run:
- If you would like to install a specific fault, replace the `experiments.yaml` in the above command with the relative path of the fault manifest within the parent chart. For example, to install only the *pod-delete* fault, run:
```bash
find chaos-charts-1.1.0 -name experiment.yaml | grep 'generic/pod-delete' | xargs kubectl apply -n sock-shop -f
find chaos-charts-3.0.0 -name fault.yaml | grep 'kubernetes/pod-delete' | xargs kubectl apply -n sock-shop -f
```
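As an aside, once a chart bundle has been applied you can confirm that the ChaosExperiment resources exist; a minimal check, assuming kubectl points at the target cluster and the sock-shop namespace used in the example above:

```bash
# List the ChaosExperiment custom resources installed from the chart bundle.
kubectl get chaosexperiments -n sock-shop

# Inspect a single fault definition, e.g. pod-delete.
kubectl describe chaosexperiment pod-delete -n sock-shop
```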


@ -1,43 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-10T10:28:08Z
name: aws-ssm-chaos-by-id
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: aws-ssm-chaos-by-id
categoryDescription: |
AWS SSM Chaos By ID contains chaos to disrupt the state of infra resources. The experiment can induce chaos on AWS resources using Amazon SSM Run Command. This is carried out by using SSM Docs, which define the actions performed by Systems Manager on your managed instances (having the SSM agent installed) and let us perform chaos experiments on those resources.
- Causes chaos on AWS EC2 instances with the given instance ID(s) using SSM docs for the total chaos duration with the specified chaos interval.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the target application pod (if provided).
keywords:
- SSM
- AWS
- EC2
platforms:
- AWS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm/aws-ssm-chaos-by-id
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/aws-ssm/aws-ssm-chaos-by-id/
- name: Video
url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/aws-ssm-chaos-by-id/experiment.yaml


@ -1,62 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
engineState: 'active'
chaosServiceAccount: aws-ssm-chaos-by-id-sa
experiments:
- name: aws-ssm-chaos-by-id
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '60'
# set chaos duration (in sec) as desired
- name: CHAOS_INTERVAL
value: '60'
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma separated values ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
# provide the region name of the target instances
- name: REGION
value: ''
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
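This ChaosEngine selects the aws-ssm-chaos-by-id fault and tunes it through the env entries above. A minimal sketch of running it, assuming the manifest is saved locally as engine.yaml (the file name is illustrative) and the RBAC and experiment CRs are already installed in the default namespace:

```bash
# Apply the engine and watch the run; resource names come from the manifest above.
kubectl apply -f engine.yaml
kubectl get chaosengine nginx-chaos -n default
kubectl get chaosresults -n default
```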


@ -1,124 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance IDs
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-id
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-id
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma separated values ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
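The experiment mounts a secret named cloud-secret at /tmp/ and reads AWS credentials from the AWS_SHARED_CREDENTIALS_FILE path /tmp/cloud_config.yml. A minimal sketch of creating that secret, assuming the credentials file has been prepared locally as cloud_config.yml:

```bash
# Create the cloud-secret expected by the experiment; the local file name is an assumption.
kubectl create secret generic cloud-secret \
  --from-file=cloud_config.yml=./cloud_config.yml \
  -n default
```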


@ -1,62 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-ssm-chaos-by-id-sa
namespace: default
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aws-ssm-chaos-by-id-sa
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aws-ssm-chaos-by-id-sa
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aws-ssm-chaos-by-id-sa
subjects:
- kind: ServiceAccount
name: aws-ssm-chaos-by-id-sa
namespace: default
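A quick way to spot-check the RBAC above, assuming it has been applied as written (service account aws-ssm-chaos-by-id-sa in the default namespace):

```bash
# Verify the service account can create pods and ChaosEngines, as granted by the ClusterRole.
kubectl auth can-i create pods \
  --as=system:serviceaccount:default:aws-ssm-chaos-by-id-sa
kubectl auth can-i create chaosengines.litmuschaos.io \
  --as=system:serviceaccount:default:aws-ssm-chaos-by-id-sa
```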


@ -1,43 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-10T10:28:08Z
name: aws-ssm-chaos-by-tag
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: aws-ssm-chaos-by-tag
categoryDescription: |
AWS SSM Chaos By Tag contains chaos to disrupt the state of infra resources. The experiment can induce chaos on AWS resources using Amazon SSM Run Command. This is carried out by using SSM Docs, which define the actions performed by Systems Manager on your managed instances (having the SSM agent installed) and let us perform chaos experiments on those resources.
- Causes chaos on AWS EC2 instances with the given instance tag using SSM docs for the total chaos duration with the specified chaos interval.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the target application pod (if provided).
keywords:
- SSM
- AWS
- EC2
platforms:
- AWS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm/aws-ssm-chaos-by-tag
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/aws-ssm/aws-ssm-chaos-by-tag/
- name: Video
url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/aws-ssm-chaos-by-tag/experiment.yaml


@ -1,62 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
engineState: 'active'
chaosServiceAccount: aws-ssm-chaos-by-tag-sa
experiments:
- name: aws-ssm-chaos-by-tag
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '60'
# set chaos duration (in sec) as desired
- name: CHAOS_INTERVAL
value: '60'
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
# provide the region name of the target instances
- name: REGION
value: ''
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'


@ -1,128 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance Tag
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-tag
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-tag
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# percentage of total instance to target
- name: INSTANCE_AFFECTED_PERC
value: ''
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/


@ -1,62 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-ssm-chaos-by-tag-sa
namespace: default
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aws-ssm-chaos-by-tag-sa
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aws-ssm-chaos-by-tag-sa
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aws-ssm-chaos-by-tag-sa
subjects:
- kind: ServiceAccount
name: aws-ssm-chaos-by-tag-sa
namespace: default


@ -1,36 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-11T10:28:08Z
name: aws-ssm
version: 0.1.0
annotations:
categories: Kubernetes
chartDescription: Injects aws ssm chaos
spec:
displayName: AWS SSM
categoryDescription: >
aws ssm contains chaos to disrupt the state of aws resources using litmus aws ssm docs
experiments:
- aws-ssm-chaos-by-id
- aws-ssm-chaos-by-tag
keywords:
- AWS
- SSM
- EC2
maintainers:
- name: ksatchit
email: karthik@chaosnative.com
provider:
name: ChaosNative
links:
- name: Kubernetes Website
url: https://kubernetes.io
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm
- name: Kubernetes Slack
url: https://slack.kubernetes.io/
icon:
- url: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/icons/aws-ssm.png
mediatype: image/png
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/experiments.yaml


@ -1,8 +0,0 @@
packageName: aws-ssm
experiments:
- name: aws-ssm-chaos-by-id
CSV: aws-ssm-chaos-by-id.chartserviceversion.yaml
desc: "aws-ssm-chaos-by-id"
- name: aws-ssm-chaos-by-tag
CSV: aws-ssm-chaos-by-tag.chartserviceversion.yaml
desc: "aws-ssm-chaos-by-tag"


@ -1,256 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance IDs
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-id
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-id
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma separated values ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance Tag
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-tag
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-tag
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# percentage of total instance to target
- name: INSTANCE_AFFECTED_PERC
value: ''
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the number of CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---

Two binary image files (icons, 3.1 KiB each) were removed and are not shown.


@ -1,42 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: azure-disk-loss
version: 0.1.0
annotations:
categories: Azure
vendor: ChaosNative
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: azure-disk-loss
categoryDescription: |
This experiment causes the detachment of the disk from the VM for a certain chaos duration
- Causes detachment of the disk from the VM and then reattachment of the disk to the VM
- It helps to check the performance of the application on the instance.
keywords:
- Azure
- Disk
- AKS
platforms:
- Azure
maturity: alpha
maintainers:
- name: avaakash
email: akash@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/azure/disk-loss/experiment
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/azure/azure-disk-loss/
# - name: Video
# url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/azure/azure-disk-loss/experiment.yaml


@ -1,32 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
spec:
# It can be active/stop
engineState: 'active'
chaosServiceAccount: azure-disk-loss-sa
experiments:
- name: azure-disk-loss
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# provide the virtual disk names (comma separated if multiple)
- name: VIRTUAL_DISK_NAMES
value: ''


@ -1,92 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Detaches disk from the VM and then re-attaches disk to the VM
kind: ChaosExperiment
metadata:
name: azure-disk-loss
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-disk-loss
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# provide the virtual disk names (comma separated if multiple)
- name: VIRTUAL_DISK_NAMES
value: ''
# provide the sequence type for the run. Options: serial/parallel
- name: SEQUENCE
value: 'parallel'
# provide the path to aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
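Like the AWS SSM faults, this experiment mounts cloud-secret at /tmp/, here expecting the Azure auth file at the AZURE_AUTH_LOCATION path /tmp/azure.auth. A minimal sketch of creating that secret, assuming the auth file is available locally as azure.auth:

```bash
# Create the cloud-secret with the Azure auth file under the key the experiment expects.
kubectl create secret generic cloud-secret \
  --from-file=azure.auth=./azure.auth \
  -n default
```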


@ -1,64 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: azure-disk-loss-sa
subjects:
- kind: ServiceAccount
name: azure-disk-loss-sa
namespace: default
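The azure-disk-loss experiment above mounts a Secret named cloud-secret at /tmp/ and reads the Azure credentials from /tmp/azure.auth (see AZURE_AUTH_LOCATION). A hedged sketch of such a Secret is shown below; the key name mirrors the expected file name, and the value is a placeholder for the contents of your Azure auth file.

apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
  namespace: default
type: Opaque
stringData:
  # the key name must match the file expected at AZURE_AUTH_LOCATION (/tmp/azure.auth)
  azure.auth: '<contents of the Azure auth file>'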


@ -1,44 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-02-20T10:28:08Z
name: azure-instance-stop
version: 0.1.0
annotations:
categories: Azure
vendor: ChaosNative
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: azure-instance-stop
categoryDescription: |
This experiment powers off an Azure instance for a certain chaos duration.
- Causes termination of an Azure instance before bringing it back to the running state after the specified chaos duration.
- It helps to check the performance of the application running on the instance.
keywords:
- Azure
- Scaleset
- AKS
platforms:
- Azure
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: Chaos Native
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/azure/instance-stop/experiment
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/azure/azure-instance-stop/
# - name: Video
# url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/azure/azure-instance-stop/experiment.yaml


@ -1,91 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Terminating azure VM instance
kind: ChaosExperiment
metadata:
name: azure-instance-stop
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-instance-stop
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the target instance name(s) (comma separated if multiple)
- name: AZURE_INSTANCE_NAMES
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# Provide the path of aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
- name: SEQUENCE
value: 'parallel'
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
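A minimal ChaosEngine sketch for this fault is given below; the engine name, instance name, and resource group are placeholders, and azure-instance-stop-sa is the service account defined in the RBAC manifest that follows.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: azure-instance-chaos       # hypothetical engine name
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: azure-instance-stop-sa
  experiments:
    - name: azure-instance-stop
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '30'
            - name: AZURE_INSTANCE_NAMES
              value: 'instance-01'          # placeholder instance name
            - name: RESOURCE_GROUP
              value: 'my-resource-group'    # placeholder resource group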


@ -1,62 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: azure-instance-stop-sa
namespace: default
labels:
name: azure-instance-stop-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: azure-instance-stop-sa
labels:
name: azure-instance-stop-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for creating and managing to execute comands inside target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitor the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: azure-instance-stop-sa
labels:
name: azure-instance-stop-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: azure-instance-stop-sa
subjects:
- kind: ServiceAccount
name: azure-instance-stop-sa
namespace: default


@ -1,40 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-02-20T10:28:08Z
name: azure
version: 0.1.0
annotations:
categories: Kubernetes
chartDescription: Injects chaos on Azure services
spec:
displayName: Azure
categoryDescription: >
The Azure category of chaos experiments disrupts Azure services for a certain chaos duration.
experiments:
- azure-instance-stop
- azure-disk-loss
keywords:
- Azure
- Instance
- AKS
- Scaleset
maintainers:
- name: Udit Gaurav
email: udit.gaurav@mayadata.io
provider:
name: Chaos Native
links:
- name: Kubernetes Website
url: https://kubernetes.io
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/azure/experiments/azure
- name: Kubernetes Slack
url: https://slack.kubernetes.io/
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/contents/#cloud-infrastructure
icon:
- url: https://raw.githubusercontent.com/litmuschaos/charthub.litmuschaos.io/master/public/litmus.ico
mediatype: image/png
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/azure/experiments.yaml


@ -1,187 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Detaches a disk from the VM and then re-attaches it to the VM
kind: ChaosExperiment
metadata:
name: azure-disk-loss
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-disk-loss
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# provide the virtual disk names (comma separated if multiple)
- name: VIRTUAL_DISK_NAMES
value: ''
# provide the sequence type for the run. Options: serial/parallel
- name: SEQUENCE
value: 'parallel'
# provide the path to aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Terminating azure VM instance
kind: ChaosExperiment
metadata:
name: azure-instance-stop
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-instance-stop
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the target instance name(s) (comma separated if multiple)
- name: AZURE_INSTANCE_NAMES
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# Provide the path of aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
- name: SEQUENCE
value: 'parallel'
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---


@ -1,366 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Stops GCP VM instances and GKE nodes filtered by a label for a specified duration and later restarts them
kind: ChaosExperiment
metadata:
name: gcp-vm-instance-stop-by-label
labels:
name: gcp-vm-instance-stop-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-instance-stop-by-label
command:
- /bin/bash
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
- name: SEQUENCE
value: 'parallel'
# GCP project ID to which the vm instances belong
- name: GCP_PROJECT_ID
value: ''
# Label of the target vm instance(s)
- name: INSTANCE_LABEL
value: ''
# Zone in which the target vm instance(s) filtered by the label exist
# all the instances should lie in a single zone
- name: ZONES
value: ''
# enable it if the target instance is a part of a managed instance group
- name: MANAGED_INSTANCE_GROUP
value: 'disable'
# set the percentage value of the instances with the given label
# which should be targeted as part of the chaos injection
- name: INSTANCE_AFFECTED_PERC
value: ''
labels:
name: gcp-vm-instance-stop-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Stops GCP VM instances and GKE nodes for a specified duration and later restarts them
kind: ChaosExperiment
metadata:
name: gcp-vm-instance-stop
labels:
name: gcp-vm-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-instance-stop
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# parallel or serial; determines how the VM instances are terminated, all at once or one at a time
- name: SEQUENCE
value: 'parallel'
# period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# enable or disable; shall be set to enable if the target instances are a part of a managed instance group
- name: MANAGED_INSTANCE_GROUP
value: 'disable'
# Instance name of the target vm instance(s)
# Multiple instance names can be provided as comma separated values ex: instance1,instance2
- name: VM_INSTANCE_NAMES
value: ''
# GCP project ID to which the vm instances belong
- name: GCP_PROJECT_ID
value: ''
# Instance zone(s) of the target vm instance(s)
# If more than one instance is targeted, provide a zone for each in the order of their
# respective instance names in VM_INSTANCE_NAMES as comma separated values ex: zone1,zone2
- name: ZONES
value: ''
labels:
name: gcp-vm-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Causes loss of a non-boot storage persistent disk from a GCP VM instance filtered by a label for a specified duration before attaching them back
kind: ChaosExperiment
metadata:
name: gcp-vm-disk-loss-by-label
labels:
name: gcp-vm-disk-loss-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-disk-loss-by-label
command:
- /bin/bash
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
# set the GCP project id
- name: GCP_PROJECT_ID
value: ''
# set the zone in which all the disks are created
# all the disks must exist in the same zone
- name: ZONES
value: ''
# set the label of the target disk volumes
- name: DISK_VOLUME_LABEL
value: ''
# set the percentage value of the disks with the given label
# which should be targeted as part of the chaos injection
- name: DISK_AFFECTED_PERC
value: ''
labels:
name: gcp-vm-disk-loss-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Causes loss of a non-boot storage persistent disk from a GCP VM instance for a specified duration before attaching them back
kind: ChaosExperiment
metadata:
name: gcp-vm-disk-loss
labels:
name: gcp-vm-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-disk-loss
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# parallel or serial; determines how chaos is injected
- name: SEQUENCE
value: 'parallel'
# set the GCP project id
- name: GCP_PROJECT_ID
value: ''
# set the disk volume name(s) as comma separated values
# eg. volume1,volume2,...
- name: DISK_VOLUME_NAMES
value: ''
# set the disk zone(s) as comma separated values in the corresponding
# order of DISK_VOLUME_NAMES
# eg. zone1,zone2,...
- name: ZONES
value: ''
labels:
name: gcp-vm-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
---
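To illustrate how one of the GCP faults above could be wired up, here is a minimal ChaosEngine sketch for gcp-vm-instance-stop-by-label. The project ID, label, and zone are placeholders (the label is assumed to use the key:value form), and gcp-vm-instance-stop-by-label-sa refers to the service account defined later in this chart.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gcp-vm-label-chaos         # hypothetical engine name
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
  experiments:
    - name: gcp-vm-instance-stop-by-label
      spec:
        components:
          env:
            - name: GCP_PROJECT_ID
              value: 'my-project'          # placeholder project ID
            - name: INSTANCE_LABEL
              value: 'role:chaos-target'   # placeholder instance label
            - name: ZONES
              value: 'us-central1-a'       # placeholder zone
            - name: INSTANCE_AFFECTED_PERC
              value: '100'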


@ -1,83 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Causes loss of a non-boot storage persistent disk from a GCP VM instance filtered by a label for a specified duration before attaching them back
kind: ChaosExperiment
metadata:
name: gcp-vm-disk-loss-by-label
labels:
name: gcp-vm-disk-loss-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-disk-loss-by-label
command:
- /bin/bash
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
# set the GCP project id
- name: GCP_PROJECT_ID
value: ''
# set the zone in which all the disks are created
# all the disks must exist in the same zone
- name: ZONES
value: ''
# set the label of the target disk volumes
- name: DISK_VOLUME_LABEL
value: ''
# set the percentage value of the disks with the given label
# which should be targeted as part of the chaos injection
- name: DISK_AFFECTED_PERC
value: ''
labels:
name: gcp-vm-disk-loss-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
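A minimal ChaosEngine sketch for gcp-vm-disk-loss-by-label; the project ID, zone, and disk label are placeholders, and gcp-vm-disk-loss-by-label-sa refers to the service account from this fault's rbac.yaml further below.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gcp-disk-label-chaos       # hypothetical engine name
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: gcp-vm-disk-loss-by-label-sa
  experiments:
    - name: gcp-vm-disk-loss-by-label
      spec:
        components:
          env:
            - name: GCP_PROJECT_ID
              value: 'my-project'          # placeholder project ID
            - name: ZONES
              value: 'us-central1-a'       # placeholder zone
            - name: DISK_VOLUME_LABEL
              value: 'env:chaos'           # placeholder disk label
            - name: DISK_AFFECTED_PERC
              value: '100'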


@ -1,33 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: gcp-vm-disk-loss-by-label
version: 0.1.0
annotations:
categories: gcp
spec:
displayName: gcp-vm-disk-loss-by-label
categoryDescription: >
Causes loss of a non-boot storage persistent disk from a GCP VM instance filtered by a label for a specified duration before attaching them back
keywords:
- "Disk"
- "GCP"
platforms:
- "Minikube"
maturity: alpha
maintainers:
- name: Neelanjan Manna
email: neelanjan.manna@harness.io
minKubeVersion: 1.12.0
provider:
name: Harness
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-disk-loss-by-label/
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-disk-loss-by-label/experiment.yaml


@ -1,62 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gcp-vm-disk-loss-by-label-sa
namespace: default
labels:
name: gcp-vm-disk-loss-by-label-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gcp-vm-disk-loss-by-label-sa
labels:
name: gcp-vm-disk-loss-by-label-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gcp-vm-disk-loss-by-label-sa
labels:
name: gcp-vm-disk-loss-by-label-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gcp-vm-disk-loss-by-label-sa
subjects:
- kind: ServiceAccount
name: gcp-vm-disk-loss-by-label-sa
namespace: default


@ -1,86 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Causes loss of a non-boot storage persistent disk from a GCP VM instance for a specified duration before attaching them back
kind: ChaosExperiment
metadata:
name: gcp-vm-disk-loss
labels:
name: gcp-vm-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-disk-loss
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# parallel or serial; determines how chaos is injected
- name: SEQUENCE
value: 'parallel'
# set the GCP project id
- name: GCP_PROJECT_ID
value: ''
# set the disk volume name(s) as comma separated values
# eg. volume1,volume2,...
- name: DISK_VOLUME_NAMES
value: ''
# set the disk zone(s) as comma separated values in the corresponding
# order of DISK_VOLUME_NAMES
# eg. zone1,zone2,...
- name: ZONES
value: ''
labels:
name: gcp-vm-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
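A minimal ChaosEngine sketch for gcp-vm-disk-loss; the project ID, disk name, and zone are placeholders, and gcp-vm-disk-loss-sa refers to the service account from this fault's rbac.yaml further below.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gcp-disk-chaos             # hypothetical engine name
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: gcp-vm-disk-loss-sa
  experiments:
    - name: gcp-vm-disk-loss
      spec:
        components:
          env:
            - name: GCP_PROJECT_ID
              value: 'my-project'          # placeholder project ID
            - name: DISK_VOLUME_NAMES
              value: 'disk-01'             # placeholder disk volume name
            - name: ZONES
              value: 'us-central1-a'       # placeholder zone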


@ -1,33 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: gcp-vm-disk-loss
version: 0.1.0
annotations:
categories: gcp
spec:
displayName: gcp-vm-disk-loss
categoryDescription: >
Causes loss of a non-boot storage persistent disk from a GCP VM instance for a specified duration before attaching them back
keywords:
- "Disk"
- "GCP"
platforms:
- "GCP"
maturity: alpha
maintainers:
- name: Neelanjan Manna
email: neelanjan@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-disk-loss/
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-disk-loss/experiment.yaml


@ -1,62 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gcp-vm-disk-loss-sa
namespace: default
labels:
name: gcp-vm-disk-loss-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gcp-vm-disk-loss-sa
labels:
name: gcp-vm-disk-loss-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gcp-vm-disk-loss-sa
labels:
name: gcp-vm-disk-loss-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gcp-vm-disk-loss-sa
subjects:
- kind: ServiceAccount
name: gcp-vm-disk-loss-sa
namespace: default


@ -1,95 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Stops GCP VM instances and GKE nodes filtered by a label for a specified duration and later restarts them
kind: ChaosExperiment
metadata:
name: gcp-vm-instance-stop-by-label
labels:
name: gcp-vm-instance-stop-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-instance-stop-by-label
command:
- /bin/bash
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
- name: SEQUENCE
value: 'parallel'
# GCP project ID to which the vm instances belong
- name: GCP_PROJECT_ID
value: ''
# Label of the target vm instance(s)
- name: INSTANCE_LABEL
value: ''
# Zone in which the target vm instance(s) filtered by the label exist
# all the instances should lie in a single zone
- name: ZONES
value: ''
# enable it if the target instance is a part of a managed instance group
- name: MANAGED_INSTANCE_GROUP
value: 'disable'
# set the percentage value of the instances with the given label
# which should be targeted as part of the chaos injection
- name: INSTANCE_AFFECTED_PERC
value: ''
labels:
name: gcp-vm-instance-stop-by-label
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/


@ -1,33 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: gcp-vm-instance-stop-by-label
version: 0.1.0
annotations:
categories: gcp
spec:
displayName: gcp-vm-instance-stop-by-label
categoryDescription: >
Stops GCP VM instances and GKE nodes filtered by a label for a specified duration and later restarts them
keywords:
- "VM"
- "GCP"
platforms:
- "Minikube"
maturity: alpha
maintainers:
- name: Neelanjan Manna
email: neelanjan.manna@harness.io
minKubeVersion: 1.12.0
provider:
name: Harness
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-instance-stop-by-label/
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-instance-stop-by-label/experiment.yaml


@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gcp-vm-instance-stop-by-label-sa
namespace: default
labels:
name: gcp-vm-instance-stop-by-label-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gcp-vm-instance-stop-by-label-sa
labels:
name: gcp-vm-instance-stop-by-label-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gcp-vm-instance-stop-by-label-sa
labels:
name: gcp-vm-instance-stop-by-label-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gcp-vm-instance-stop-by-label-sa
subjects:
- kind: ServiceAccount
name: gcp-vm-instance-stop-by-label-sa
namespace: default


@ -1,94 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Stops GCP VM instances and GKE nodes for a specified duration and later restarts them
kind: ChaosExperiment
metadata:
name: gcp-vm-instance-stop
labels:
name: gcp-vm-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name gcp-vm-instance-stop
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# parallel or serial; determines how the VM instances are terminated, all at once or one at a time
- name: SEQUENCE
value: 'parallel'
# period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# enable or disable; shall be set to enable if the target instances are a part of a managed instance group
- name: MANAGED_INSTANCE_GROUP
value: 'disable'
# Instance name of the target vm instance(s)
# Multiple instance names can be provided as comma separated values ex: instance1,instance2
- name: VM_INSTANCE_NAMES
value: ''
# GCP project ID to which the vm instances belong
- name: GCP_PROJECT_ID
value: ''
# Instance zone(s) of the target vm instance(s)
# If more than one instance is targeted, provide a zone for each in the order of their
# respective instance names in VM_INSTANCE_NAMES as comma separated values ex: zone1,zone2
- name: ZONES
value: ''
labels:
name: gcp-vm-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
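A minimal ChaosEngine sketch for gcp-vm-instance-stop targeting instances by name; the project ID, instance names, and zones are placeholders, and gcp-vm-instance-stop-sa refers to the service account from this fault's rbac.yaml further below.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gcp-vm-chaos               # hypothetical engine name
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: gcp-vm-instance-stop-sa
  experiments:
    - name: gcp-vm-instance-stop
      spec:
        components:
          env:
            - name: GCP_PROJECT_ID
              value: 'my-project'                   # placeholder project ID
            - name: VM_INSTANCE_NAMES
              value: 'instance-01,instance-02'      # placeholder instance names
            - name: ZONES
              value: 'us-central1-a,us-central1-b'  # placeholder zones, one per instance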


@ -1,33 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: gcp-vm-instance-stop
version: 0.1.0
annotations:
categories: gcp
spec:
displayName: gcp-vm-instance-stop
categoryDescription: >
Stops GCP VM instances and GKE nodes for a specified duration and later restarts them
keywords:
- "VM"
- "GCP"
platforms:
- "GCP"
maturity: alpha
maintainers:
- name: Neelanjan Manna
email: neelanjan@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/gcp/gcp-vm-instance-stop/
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-instance-stop/experiment.yaml


@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gcp-vm-instance-stop-sa
namespace: default
labels:
name: gcp-vm-instance-stop-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gcp-vm-instance-stop-sa
labels:
name: gcp-vm-instance-stop-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["secrets","configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gcp-vm-instance-stop-sa
labels:
name: gcp-vm-instance-stop-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gcp-vm-instance-stop-sa
subjects:
- kind: ServiceAccount
name: gcp-vm-instance-stop-sa
namespace: default


@ -1,38 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: gcp
version: 0.1.0
annotations:
categories: gcp
spec:
displayName: gcp chaos
categoryDescription: >
GCP contains chaos experiments to disrupt the state of GCP resources running as part of GCP services
experiments:
- gcp-vm-instance-stop
- gcp-vm-disk-loss
- gcp-vm-instance-stop-by-label
- gcp-vm-disk-loss-by-label
keywords:
- "VM"
- "Disk"
- "GCP"
- "Infra"
maintainers:
- name: Neelanjan Manna
email: neelanjan.manna@harness.io
minKubeVersion: 1.12.0
provider:
name: Harness
links:
- name: GCP Website
url: https://cloud.google.com/
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/contents/#cloud-infrastructure
- name: Community Slack
url: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/experiments.yaml


@ -1,48 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-10-15T10:28:08Z
name: container-kill
version: 0.1.13
annotations:
categories: "Kubernetes"
vendor: "CNCF"
support: https://slack.openebs.io/
spec:
displayName: container-kill
categoryDescription: |
Container kill contains chaos to disrupt the state of Kubernetes resources. Experiments can inject random container-kill failures against a specified application.
- Executes SIGKILL on containers of random replicas of an application deployment.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the application pod.
keywords:
- Kubernetes
- K8S
- Pod
- Container
platforms:
- GKE
- Minikube
- Packet(Kubeadm)
- EKS
- AKS
maturity: alpha
maintainers:
- name: ksatchit
email: karthik.s@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/container-kill
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/pods/container-kill/
- name: Video
url: https://www.youtube.com/watch?v=XKyMNdVsKMo
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/container-kill/experiment.yaml


@ -1,42 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
chaosServiceAccount: container-kill-sa
experiments:
- name: container-kill
spec:
components:
env:
# provide the total chaos duration
- name: TOTAL_CHAOS_DURATION
value: '20'
# provide the chaos interval
- name: CHAOS_INTERVAL
value: '10'
# provide the name of container runtime
# for litmus LIB, it supports docker, containerd, crio
# for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'
- name: PODS_AFFECTED_PERC
value: ''
- name: TARGET_CONTAINER
value: ''
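The CONTAINER_RUNTIME and SOCKET_PATH values must match the runtime of the target cluster. The engine above assumes containerd; the commented alternatives below are a sketch of the env entries one would swap in for docker or CRI-O, using their conventional default socket paths, which may differ on your nodes.

# containerd (as configured in the engine above)
- name: CONTAINER_RUNTIME
  value: 'containerd'
- name: SOCKET_PATH
  value: '/run/containerd/containerd.sock'
# docker (assumed default socket path)
# - name: CONTAINER_RUNTIME
#   value: 'docker'
# - name: SOCKET_PATH
#   value: '/var/run/docker.sock'
# CRI-O (assumed default socket path)
# - name: CONTAINER_RUNTIME
#   value: 'crio'
# - name: SOCKET_PATH
#   value: '/var/run/crio/crio.sock'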


@ -1,46 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: nginx
spec:
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
appinfo:
appns: 'nginx'
applabel: 'app=nginx'
appkind: 'deployment'
chaosServiceAccount: container-kill-sa
# It can be delete/retain
jobCleanUpPolicy: 'delete'
experiments:
- name: container-kill
spec:
components:
env:
# provide the total chaos duration
- name: TOTAL_CHAOS_DURATION
value: '20'
# provide the chaos interval
- name: CHAOS_INTERVAL
value: '10'
# provide the name of container runtime
# for litmus LIB, it supports docker, containerd, crio
# for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'
- name: PODS_AFFECTED_PERC
value: ''
- name: TARGET_CONTAINER
value: ''


@ -1,121 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: "Kills a container belonging to an application pod \n"
kind: ChaosExperiment
metadata:
name: container-kill
labels:
name: container-kill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Namespaced
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod(if parent is any of {deployment, statefulset, daemonsets})
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name container-kill
command:
- /bin/bash
env:
- name: TARGET_CONTAINER
value: ''
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
- name: TARGET_PODS
value: ''
# provide the chaos interval
- name: CHAOS_INTERVAL
value: '10'
- name: SIGNAL
value: 'SIGKILL'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'
# provide the name of container runtime
# for litmus LIB, it supports docker, containerd, crio
# for pumba LIB, it supports docker only
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the total chaos duration
- name: TOTAL_CHAOS_DURATION
value: '20'
## percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''
# To select pods on specific node(s)
- name: NODE_LABEL
value: ''
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
## it defines the sequence of chaos execution for multiple target pods
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: container-kill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/runtime-api-usage: "true"
app.kubernetes.io/version: latest


@ -1,86 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod(if parent is any of {deployment, statefulset, daemonsets})
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: container-kill-sa
subjects:
- kind: ServiceAccount
name: container-kill-sa
namespace: default


@ -1,81 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod(if parent is any of {deployment, statefulset, daemonsets})
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: container-kill-sa
namespace: default
labels:
name: container-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: container-kill-sa
subjects:
- kind: ServiceAccount
name: container-kill-sa
namespace: default

@@ -1,78 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: container-kill-sa
namespace: nginx
labels:
name: container-kill-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: container-kill-sa
namespace: nginx
labels:
name: container-kill-sa
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod via its replicationcontroller (if the parent is a deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: container-kill-sa
namespace: nginx
labels:
name: container-kill-sa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: container-kill-sa
subjects:
- kind: ServiceAccount
name: container-kill-sa
namespace: nginx

@@ -1,123 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Fill up Ephemeral Storage of a Resource
kind: ChaosExperiment
metadata:
name: disk-fill
labels:
name: disk-fill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Namespaced
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod via its replicationcontroller (if the parent is a deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name disk-fill
command:
- /bin/bash
env:
- name: TARGET_CONTAINER
value: ''
- name: FILL_PERCENTAGE
value: '80'
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the data block size
# supported unit is KB
- name: DATA_BLOCK_SIZE
value: '256'
- name: TARGET_PODS
value: ''
- name: EPHEMERAL_STORAGE_MEBIBYTES
value: ''
# To select pods on specific node(s)
- name: NODE_LABEL
value: ''
## percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
# provide the name of the container runtime; supported values: docker, containerd, crio
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'
## it defines the sequence of chaos execution for multiple target pods
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: disk-fill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/host-path-usage: "true"
app.kubernetes.io/version: latest
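
As a usage sketch (not part of the chart), a ChaosEngine that exercises the disk-fill experiment above might look like the following; disk-fill-sa matches the accompanying RBAC manifest, while the nginx appinfo selector and values are assumed examples.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  engineState: 'active'
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: disk-fill-sa
  experiments:
    - name: disk-fill
      spec:
        components:
          env:
            # percentage of the ephemeral-storage limit to fill up
            - name: FILL_PERCENTAGE
              value: '80'
            # chaos duration in seconds
            - name: TOTAL_CHAOS_DURATION
              value: '60'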

@@ -1,85 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod via its replicationcontroller (if the parent is a deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: disk-fill-sa
subjects:
- kind: ServiceAccount
name: disk-fill-sa
namespace: default

@@ -1,80 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod(if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod via its replicationcontroller (if the parent is a deploymentConfig)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod(if parent is argo-rollouts)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: disk-fill-sa
namespace: default
labels:
name: disk-fill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: disk-fill-sa
subjects:
- kind: ServiceAccount
name: disk-fill-sa
namespace: default

@@ -1,47 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-07-14T10:28:08Z
name: docker-service-kill
version: 0.1.1
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: docker-service-kill
categoryDescription: |
docker-service-kill gracefully kills the docker service for a defined chaos duration.
- Replicas may be evicted or become unreachable as the affected node turns unschedulable (NotReady) while the docker service is down.
- The application node should become healthy, and its services accessible again, once the chaos is stopped.
keywords:
- Kubernetes
- K8S
- Node
- Service
- Docker
platforms:
- GKE
- AKS
maturity: alpha
maintainers:
- name: Ankur Ghosh
email: ankur.ghosh3@wipro.com
minKubeVersion: 1.12.0
provider:
name: Wipro
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/docker-service-kill
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/docker-service-kill/
- name: Video
url:
icon:
- base64data: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/docker-service-kill/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: docker-service-kill-sa
namespace: default
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: docker-service-kill-sa
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: docker-service-kill-sa
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: docker-service-kill-sa
subjects:
- kind: ServiceAccount
name: docker-service-kill-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: docker-service-kill-sa
namespace: default
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: docker-service-kill-sa
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: docker-service-kill-sa
labels:
name: docker-service-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: docker-service-kill-sa
subjects:
- kind: ServiceAccount
name: docker-service-kill-sa
namespace: default
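
A minimal ChaosEngine sketch showing how the docker-service-kill-sa account above would typically be referenced; the engine name, duration, and empty target node are illustrative assumptions to be filled in per cluster.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: docker-service-kill-sa
  experiments:
    - name: docker-service-kill
      spec:
        components:
          # nodeSelector:
          #   kubernetes.io/hostname: 'node02'
          env:
            # chaos duration in seconds
            - name: TOTAL_CHAOS_DURATION
              value: '90'
            # name of the node whose docker service is to be killed
            - name: TARGET_NODE
              value: ''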

File diff suppressed because it is too large

@@ -1,80 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-09-26T10:28:08Z
name: generic
version: 0.1.16
annotations:
categories: Kubernetes
chartDescription: Injects generic kubernetes chaos
spec:
displayName: Generic Chaos
categoryDescription: >
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easier management and discovery. This chart installs all the experiments that can be used to inject chaos into containerized applications.
experiments:
- pod-delete
- container-kill
- pod-cpu-hog
- pod-network-loss
- pod-network-latency
- pod-network-corruption
- node-drain
- node-cpu-hog
- disk-fill
- node-memory-hog
- pod-memory-hog
- kubelet-service-kill
- pod-network-duplication
- node-taint
- docker-service-kill
- pod-autoscaler
- node-io-stress
- pod-io-stress
- node-restart
- pod-dns-error
- pod-dns-spoof
- pod-cpu-hog-exec
- pod-memory-hog-exec
- pod-network-partition
- pod-http-latency
- pod-http-status-code
- pod-http-modify-header
- pod-http-modify-body
- pod-http-reset-peer
keywords:
- Kubernetes
- K8S
- Container
- Node
- Pod
- Disk
- IO
- Filesystem
- Network
- CPU
- Memory
- Stress
- Service
- DNS
- Scale
- Http
maintainers:
- name: ksatchit
email: karthik.s@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
links:
- name: Kubernetes Website
url: https://kubernetes.io
- name: Source Code
url: https://github.com/kubernetes/kubernetes
- name: Kubernetes Slack
url: https://slack.kubernetes.io/
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/contents/#generic
icon:
- url: https://raw.githubusercontent.com/litmuschaos/charthub.litmuschaos.io/master/public/litmus.ico
mediatype: image/png
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/experiments.yaml

@@ -1,89 +0,0 @@
packageName: generic
experiments:
- name: pod-delete
CSV: pod-delete.chartserviceversion.yaml
desc: "pod-delete"
- name: container-kill
CSV: container-kill.chartserviceversion.yaml
desc: "container-kill"
- name: pod-network-loss
CSV: pod-network-loss.chartserviceversion.yaml
desc: "Pod-network-loss"
- name: pod-network-latency
CSV: pod-network-latency.chartserviceversion.yaml
desc: "pod-network-latency"
- name: pod-cpu-hog
CSV: pod-cpu-hog.chartserviceversion.yaml
desc: "pod-cpu-hog"
- name: node-cpu-hog
CSV: node-cpu-hog.chartserviceversion.yaml
desc: "node-cpu-hog"
- name: disk-fill
CSV: disk-fill.chartserviceversion.yaml
desc: "disk-fill"
- name: node-drain
CSV: node-drain.chartserviceversion.yaml
desc: "node-drain"
- name: pod-network-corruption
CSV: pod-network-corruption.chartserviceversion.yaml
desc: "pod-network-corruption"
- name: node-memory-hog
CSV: node-memory-hog.chartserviceversion.yaml
desc: "node-memory-hog"
- name: pod-memory-hog
CSV: pod-memory-hog.chartserviceversion.yaml
desc: "pod-memory-hog"
- name: kubelet-service-kill
CSV: kubelet-service-kill.chartserviceversion.yaml
desc: "kubelet-service-kill"
- name: pod-network-duplication
CSV: pod-network-duplication.chartserviceversion.yaml
desc: "pod-network-duplication"
- name: node-taint
CSV: node-taint.chartserviceversion.yaml
desc: "node-taint"
- name: docker-service-kill
CSV: docker-service-kill.chartserviceversion.yaml
desc: "docker-service-kill"
- name: pod-autoscaler
CSV: pod-autoscaler.chartserviceversion.yaml
desc: "pod-autoscaler"
- name: node-io-stress
CSV: node-io-stress.chartserviceversion.yaml
desc: "node-io-stress"
- name: pod-io-stress
CSV: pod-io-stress.chartserviceversion.yaml
desc: "pod-io-stress"
- name: node-restart
CSV: node-restart.chartserviceversion.yaml
desc: "node-restart"
- name: pod-dns-error
CSV: pod-dns-error.chartserviceversion.yaml
desc: "pod-dns-error"
- name: pod-dns-spoof
CSV: pod-dns-spoof.chartserviceversion.yaml
desc: "pod-dns-spoof"
- name: pod-cpu-hog-exec
CSV: pod-cpu-hog-exec.chartserviceversion.yaml
desc: "pod-cpu-hog-exec"
- name: pod-memory-hog-exec
CSV: pod-memory-hog-exec.chartserviceversion.yaml
desc: "pod-memory-hog-exec"
- name: pod-network-partition
CSV: pod-network-partition.chartserviceversion.yaml
desc: "pod-network-partition"
- name: pod-http-latency
CSV: pod-http-latency.chartserviceversion.yaml
desc: "pod-http-latency"
- name: pod-http-status-code
CSV: pod-http-status-code.chartserviceversion.yaml
desc: "pod-http-status-code"
- name: pod-http-modify-header
CSV: pod-http-modify-header.chartserviceversion.yaml
desc: "pod-http-modify-header"
- name: pod-http-modify-body
CSV: pod-http-modify-body.chartserviceversion.yaml
desc: "pod-http-modify-body"
- name: pod-http-reset-peer
CSV: pod-http-reset-peer.chartserviceversion.yaml
desc: "pod-http-reset-peer"

@@ -1,81 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Kills the kubelet service on the application node to check the resiliency.
kind: ChaosExperiment
metadata:
name: kubelet-service-kill
labels:
name: kubelet-service-kill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name kubelet-service-kill
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60' # in seconds
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
- name: NODE_LABEL
value: ''
# provide lib image
- name: LIB_IMAGE
value: 'ubuntu:16.04'
# provide the target node name
- name: TARGET_NODE
value: ''
labels:
name: kubelet-service-kill
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/service-kill: "true"
app.kubernetes.io/version: latest
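
For reference, a ChaosEngine sketch that would run the kubelet-service-kill experiment above, assuming the kubelet-service-kill-sa service account from the accompanying RBAC manifests; the target node is left empty and would be supplied per cluster.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: kubelet-service-kill-sa
  experiments:
    - name: kubelet-service-kill
      spec:
        components:
          env:
            # chaos duration in seconds
            - name: TOTAL_CHAOS_DURATION
              value: '60'
            # name of the node whose kubelet service is to be killed
            - name: TARGET_NODE
              value: ''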

@@ -1,50 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-06-13T10:28:08Z
name: kubelet-service-kill
version: 0.1.3
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: kubelet-service-kill
categoryDescription: |
kubelet-service-kill gracefully kills the kubelet service for a defined chaos duration.
- Replicas may be evicted or become unreachable as the affected node turns unschedulable (NotReady) while the kubelet service is down.
- The application node should become healthy, and its services accessible again, once the chaos is stopped.
keywords:
- Kubernetes
- K8S
- Kubelet
- Node
- Service
platforms:
- GKE
- Packet(Kubeadm)
- Minikube
- EKS
- AKS
maturity: alpha
maintainers:
- name: Udit Gaurav
email: udit.gaurav@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/kubelet-service-kill
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/kubelet-service-kill/
- name: Video
url:
icon:
- base64data: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/kubelet-service-kill/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubelet-service-kill-sa
namespace: default
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubelet-service-kill-sa
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-service-kill-sa
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubelet-service-kill-sa
subjects:
- kind: ServiceAccount
name: kubelet-service-kill-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubelet-service-kill-sa
namespace: default
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubelet-service-kill-sa
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-service-kill-sa
labels:
name: kubelet-service-kill-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubelet-service-kill-sa
subjects:
- kind: ServiceAccount
name: kubelet-service-kill-sa
namespace: default

@@ -1,99 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Injects a CPU spike on a node belonging to a deployment
kind: ChaosExperiment
metadata:
name: node-cpu-hog
labels:
name: node-cpu-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-cpu-hog
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
## ENTER THE NUMBER OF CPU CORES TO HOG
## OPTIONAL; IF LEFT EMPTY, THE FULL CPU CAPACITY OF THE NODE IS USED
- name: NODE_CPU_CORE
value: ''
## LOAD CPU WITH GIVEN PERCENT LOADING FOR THE CPU STRESS WORKERS.
## 0 IS EFFECTIVELY A SLEEP (NO LOAD) AND 100 IS FULL LOADING
- name: CPU_LOAD
value: '100'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value: ''
- name: NODE_LABEL
value: ''
# provide lib image
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
## percentage of total nodes to target
- name: NODES_AFFECTED_PERC
value: ''
## it defines the sequence of chaos execution for multiple target nodes
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: node-cpu-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
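
A usage sketch for the node-cpu-hog experiment above, assuming the node-cpu-hog-sa service account from the accompanying RBAC manifests; the env values simply mirror the defaults declared in the experiment and are illustrative.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: node-cpu-hog-sa
  experiments:
    - name: node-cpu-hog
      spec:
        components:
          env:
            # chaos duration in seconds
            - name: TOTAL_CHAOS_DURATION
              value: '60'
            # number of CPU cores to hog; empty uses the node's full capacity
            - name: NODE_CPU_CORE
              value: ''
            # comma-separated list of target node names
            - name: TARGET_NODES
              value: ''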

@@ -1,50 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-01-28T10:28:08Z
name: node-cpu-hog
version: 0.0.15
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: node-cpu-hog
categoryDescription: |
Node CPU hog disrupts the state of Kubernetes resources by injecting a CPU spike on the node where the application pod is scheduled.
- Hogs CPU on the particular node where the application deployment is running.
- After the test, the application pod and the node may need manual recovery if they are not left in a healthy state.
keywords:
- Kubernetes
- K8S
- CPU
- Node
platforms:
- GKE
- EKS
- AKS
- Kind
- Rancher
- OpenShift(OKD)
maturity: alpha
chaosType: infra
maintainers:
- name: ksatchit
email: karthik.s@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-cpu-hog
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-cpu-hog/
- name: Video
url: https://www.youtube.com/watch?v=jpJttftsZqA
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-cpu-hog/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-cpu-hog-sa
namespace: default
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-cpu-hog-sa
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-cpu-hog-sa
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-cpu-hog-sa
subjects:
- kind: ServiceAccount
name: node-cpu-hog-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-cpu-hog-sa
namespace: default
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-cpu-hog-sa
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-cpu-hog-sa
labels:
name: node-cpu-hog-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-cpu-hog-sa
subjects:
- kind: ServiceAccount
name: node-cpu-hog-sa
namespace: default

@@ -1,26 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
chaosServiceAccount: node-drain-sa
experiments:
- name: node-drain
spec:
components:
# nodeSelector:
# # provide the node labels
# kubernetes.io/hostname: 'node02'
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
# enter the target node name
- name: TARGET_NODE
value: ''

@@ -1,80 +0,0 @@
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Drain the node where the application pod is scheduled
kind: ChaosExperiment
metadata:
name: node-drain
labels:
name: node-drain
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-drain
command:
- /bin/bash
env:
- name: TARGET_NODE
value: ''
- name: NODE_LABEL
value: ''
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
labels:
name: node-drain
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest

@@ -1,49 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-01-13T10:28:08Z
name: node-drain
version: 0.1.12
annotations:
categories: Kubernetes
vendor: Mayadata
repository: https://github.com/litmuschaos/chaos-charts
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: node-drain
categoryDescription: >
Drain the node where the application pod is scheduled
keywords:
- Kubernetes
- K8S
- Node
- Drain
platforms:
- GKE
- AWS(KOPS)
- Packet(Kubeadm)
- Konvoy
- EKS
- AKS
maturity: alpha
chaosType: infra
maintainers:
- name: shubham chaudhary
email: shubham.chaudhary@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-drain
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-drain/
- name: Video
url: https://www.youtube.com/watch?v=LQVCZUQ4-ok
icon:
- url: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-drain/experiment.yaml

@@ -1,75 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-drain-sa
namespace: default
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-drain-sa
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-drain-sa
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-drain-sa
subjects:
- kind: ServiceAccount
name: node-drain-sa
namespace: default

@@ -1,70 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-drain-sa
namespace: default
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-drain-sa
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-drain-sa
labels:
name: node-drain-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-drain-sa
subjects:
- kind: ServiceAccount
name: node-drain-sa
namespace: default

@@ -1,111 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Injects disk I/O stress on a node belonging to a deployment
kind: ChaosExperiment
metadata:
name: node-io-stress
labels:
name: node-io-stress
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-io-stress
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '120'
## specify the size as percentage of free space on the file system
## default value 90 (in percentage)
- name: FILESYSTEM_UTILIZATION_PERCENTAGE
value: '10'
## alternatively, the size can be specified in gigabytes (GB) instead of a percentage of free space
## NOTE: for selecting this option FILESYSTEM_UTILIZATION_PERCENTAGE should be empty
- name: FILESYSTEM_UTILIZATION_BYTES
value: ''
## Number of core of CPU
- name: CPU
value: '1'
## Total number of workers default value is 4
- name: NUMBER_OF_WORKERS
value: '4'
## Total number of vm workers
- name: VM_WORKERS
value: '1'
## enter the comma separated target nodes name
- name: TARGET_NODES
value: ''
- name: NODE_LABEL
value: ''
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide lib image
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
## percentage of total nodes to target
- name: NODES_AFFECTED_PERC
value: ''
## it defines the sequence of chaos execution for multiple target nodes
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: node-io-stress
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
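
A usage sketch for the node-io-stress experiment above, assuming the node-io-stress-sa service account from the accompanying RBAC manifests; the values mirror the defaults declared in the experiment and are illustrative.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: node-io-stress-sa
  experiments:
    - name: node-io-stress
      spec:
        components:
          env:
            # chaos duration in seconds
            - name: TOTAL_CHAOS_DURATION
              value: '120'
            # stress size as a percentage of free filesystem space
            - name: FILESYSTEM_UTILIZATION_PERCENTAGE
              value: '10'
            # comma-separated list of target node names
            - name: TARGET_NODES
              value: ''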

@@ -1,49 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-09-12T10:28:08Z
name: node-io-stress
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: node-io-stress
categoryDescription: |
This experiment causes disk stress on a Kubernetes node. It aims to verify the resiliency of applications that share the disk for ephemeral or persistent storage.
- Stresses the filesystem of the particular node where the application deployment is running.
- The amount of disk stress can be specified either as a percentage of the total free space on the filesystem or as an absolute size in gigabytes (GB).
keywords:
- Kubernetes
- K8S
- Disk
- IO
- Filesystem
- Node
platforms:
- GKE
- EKS
- AKS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit.gaurav@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-io-stress
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-io-stress/
- name: Video
url:
icon:
- url: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-io-stress/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-io-stress-sa
namespace: default
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-io-stress-sa
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-io-stress-sa
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-io-stress-sa
subjects:
- kind: ServiceAccount
name: node-io-stress-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-io-stress-sa
namespace: default
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-io-stress-sa
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-io-stress-sa
labels:
name: node-io-stress-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-io-stress-sa
subjects:
- kind: ServiceAccount
name: node-io-stress-sa
namespace: default

@@ -1,102 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Hogs memory on a node belonging to a deployment
kind: ChaosExperiment
metadata:
name: node-memory-hog
labels:
name: node-memory-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmaps details and mount it to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job created by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-memory-hog
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '120'
## Specify the size as a percentage of total node capacity, e.g. '30'
## NOTE: to use this option, keep MEMORY_CONSUMPTION_MEBIBYTES empty
- name: MEMORY_CONSUMPTION_PERCENTAGE
value: ''
## Specify the amount of memory to be consumed in mebibytes
## NOTE: to use this option, keep MEMORY_CONSUMPTION_PERCENTAGE empty
- name: MEMORY_CONSUMPTION_MEBIBYTES
value: ''
- name: NUMBER_OF_WORKERS
value: '1'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value: ''
- name: NODE_LABEL
value: ''
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide lib image
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
## percentage of total nodes to target
- name: NODES_AFFECTED_PERC
value: ''
## it defines the sequence of chaos execution for multiple target nodes
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: node-memory-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
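As the comments above spell out, MEMORY_CONSUMPTION_PERCENTAGE and MEMORY_CONSUMPTION_MEBIBYTES are mutually exclusive. A sketch of the corresponding env excerpt in a ChaosEngine spec, assuming the percentage option is chosen (values are illustrative):

experiments:
  - name: node-memory-hog
    spec:
      components:
        env:
          # hog 30% of the node's memory capacity; keep the MiB option empty
          - name: MEMORY_CONSUMPTION_PERCENTAGE
            value: '30'
          - name: MEMORY_CONSUMPTION_MEBIBYTES
            value: ''
          - name: TOTAL_CHAOS_DURATION
            value: '120'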

@@ -1,50 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-03-28T10:28:08Z
name: node-memory-hog
version: 0.1.5
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: node-memory-hog
categoryDescription: |
Kubernetes Node memory hog contains chaos to disrupt the state of Kubernetes resources. Experiments can inject a memory spike on a node where the application pod is scheduled.
- Memory hog on a particular node where the application deployment is available.
- After the test, the recovery should be manual for the application pod and node in case they are not in an appropriate state.
keywords:
- Kubernetes
- K8S
- Memory
- Node
platforms:
- GKE
- EKS
- AKS
- Kind
- Rancher
- OpenShift(OKD)
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit.gaurav@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-memory-hog
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-memory-hog/
- name: Video
url: https://www.youtube.com/watch?v=ECxlWgQ8F5w
icon:
- url: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-memory-hog/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-memory-hog-sa
namespace: default
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-memory-hog-sa
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-memory-hog-sa
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-memory-hog-sa
subjects:
- kind: ServiceAccount
name: node-memory-hog-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-memory-hog-sa
namespace: default
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-memory-hog-sa
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-memory-hog-sa
labels:
name: node-memory-hog-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-memory-hog-sa
subjects:
- kind: ServiceAccount
name: node-memory-hog-sa
namespace: default

@@ -1,33 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
chaosServiceAccount: node-poweroff-sa
experiments:
- name: node-poweroff
spec:
components:
# nodeSelector:
# # provide the node labels
# kubernetes.io/hostname: 'node02'
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
# ENTER THE TARGET NODE NAME
- name: TARGET_NODE
value: ''
# ENTER THE TARGET NODE IP
- name: TARGET_NODE_IP
value: ''
# ENTER THE USER TO BE USED FOR SSH AUTH
- name: SSH_USER
value: 'root'
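A sketch of the same env block with illustrative values filled in; the node name and IP are placeholders, and the SSH private key itself is supplied through a mounted Secret, as the node-restart experiment further down shows:

env:
  - name: TOTAL_CHAOS_DURATION
    value: '60'
  - name: TARGET_NODE
    value: 'node02'        # placeholder node name
  - name: TARGET_NODE_IP
    value: '10.0.0.12'     # placeholder IP reachable from the helper pod
  - name: SSH_USER
    value: 'root'          # user permitted to power off the node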

@@ -1,39 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-01-28T10:28:08Z
name: node-poweroff
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: node-poweroff
categoryDescription: |
Node poweroff contains a chaos experiment to power off a node via SSH.
keywords:
- Kubernetes
- K8S
- Poweroff
- Node
platforms:
- KVM/LibVirt based K8s
- EKS
maturity: alpha
chaosType: infra
maintainers:
- name: jordigilh
email: jordi.gil@gmail.com
minKubeVersion: 1.12.0
provider:
name: Mayadata
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-restart
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-restart
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-poweroff/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-poweroff-sa
namespace: default
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-poweroff-sa
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap & secret details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps","secrets"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-poweroff-sa
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-poweroff-sa
subjects:
- kind: ServiceAccount
name: node-poweroff-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-poweroff-sa
namespace: default
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-poweroff-sa
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap & secret details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps","secrets"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-poweroff-sa
labels:
name: node-poweroff-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-poweroff-sa
subjects:
- kind: ServiceAccount
name: node-poweroff-sa
namespace: default

@@ -1,33 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
#ex. values: ns1:name=percona,ns2:run=nginx
auxiliaryAppInfo: ''
chaosServiceAccount: node-restart-sa
experiments:
- name: node-restart
spec:
components:
# nodeSelector:
# # provide the node labels
# kubernetes.io/hostname: 'node02'
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
# ENTER THE TARGET NODE NAME
- name: TARGET_NODE
value: ''
# ENTER THE TARGET NODE IP
- name: TARGET_NODE_IP
value: ''
# ENTER THE USER TO BE USED FOR SSH AUTH
- name: SSH_USER
value: 'root'

@@ -1,89 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Restart node
kind: ChaosExperiment
metadata:
name: node-restart
labels:
name: node-restart
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap & secret details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps","secrets"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-restart
command:
- /bin/bash
env:
- name: SSH_USER
value: 'root'
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide lib image
- name: LIB_IMAGE
value: "litmuschaos/go-runner:latest"
# ENTER THE TARGET NODE NAME
- name: TARGET_NODE
value: ''
- name: NODE_LABEL
value: ''
# ENTER THE TARGET NODE IP
- name: TARGET_NODE_IP
value: ''
labels:
name: node-restart
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: id-rsa
mountPath: /mnt/
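The experiment above mounts a Secret named id-rsa at /mnt/, which must carry the private key used to SSH into the target node. A minimal sketch of that Secret; the key name ssh-privatekey follows the Litmus node-restart documentation, but treat the exact key name and namespace as assumptions to verify:

apiVersion: v1
kind: Secret
metadata:
  name: id-rsa
  namespace: default
type: Opaque
stringData:
  # private key that can reach TARGET_NODE_IP as SSH_USER
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <key material omitted>
    -----END OPENSSH PRIVATE KEY-----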

@@ -1,39 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-01-28T10:28:08Z
name: node-restart
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: node-restart
categoryDescription: |
Node restart contains chaos to restart the node via SSH.
keywords:
- Kubernetes
- K8S
- Restart
- Node
platforms:
- KVM/LibVirt based K8s
- EKS
maturity: alpha
chaosType: infra
maintainers:
- name: machacekondra
email: machacek.ondra@gmail.com
minKubeVersion: 1.12.0
provider:
name: Mayadata
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-restart
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-restart
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-restart/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-restart-sa
namespace: default
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap & secret details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps","secrets"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-restart-sa
subjects:
- kind: ServiceAccount
name: node-restart-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-restart-sa
namespace: default
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap & secret details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps","secrets"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-restart-sa
labels:
name: node-restart-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-restart-sa
subjects:
- kind: ServiceAccount
name: node-restart-sa
namespace: default

@@ -1,85 +0,0 @@
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Taint the node where the application pod is scheduled
kind: ChaosExperiment
metadata:
name: node-taint
labels:
name: node-taint
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container and evicting pods during node drain
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch","update"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name node-taint
command:
- /bin/bash
env:
- name: TARGET_NODE
value: ''
- name: NODE_LABEL
value: ''
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# set taint label & effect
# key=value:effect or key:effect
- name: TAINTS
value: ''
labels:
name: node-taint
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
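TAINTS takes the standard Kubernetes taint syntax noted above (key=value:effect or key:effect). A sketch of the corresponding ChaosEngine env excerpt; the taint key and values are illustrative assumptions:

env:
  # pods without a matching toleration are evicted from the tainted node
  - name: TAINTS
    value: 'litmus/chaos=true:NoExecute'
  - name: TARGET_NODE
    value: ''          # or select nodes via NODE_LABEL
  - name: TOTAL_CHAOS_DURATION
    value: '60'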

@@ -1,49 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2019-01-13T10:28:08Z
name: node-taint
version: 0.1.2
annotations:
categories: Kubernetes
vendor: Mayadata
repository: https://github.com/litmuschaos/chaos-charts
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: node-taint
categoryDescription: >
Taint the node where the application pod is scheduled
keywords:
- Kubernetes
- K8S
- Node
- Taint
platforms:
- GKE
- AWS(KOPS)
- Packet(Kubeadm)
- Konvoy
- EKS
- AKS
maturity: alpha
chaosType: infra
maintainers:
- name: shubham chaudhary
email: shubham.chaudhary@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/node-taint
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/nodes/node-taint/
- name: Video
url:
icon:
- url: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/node-taint/experiment.yaml

@@ -1,75 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-taint-sa
namespace: default
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-taint-sa
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container and evicting pods during node drain
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch","update"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-taint-sa
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-taint-sa
subjects:
- kind: ServiceAccount
name: node-taint-sa
namespace: default

@@ -1,70 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: node-taint-sa
namespace: default
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-taint-sa
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container and evicting pods during node drain
- apiGroups: [""]
resources: ["pods/exec","pods/eviction"]
verbs: ["get","list","create"]
# ignore daemonsets while draining the node
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# for experiment to perform node status checks
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-taint-sa
labels:
name: node-taint-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-taint-sa
subjects:
- kind: ServiceAccount
name: node-taint-sa
namespace: default

@@ -1,73 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Scale the application replicas and test node autoscaling on the cluster
kind: ChaosExperiment
metadata:
name: pod-autoscaler
labels:
name: pod-autoscaler
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# performs CRUD operations on the deployments and statefulsets
- apiGroups: ["apps"]
resources: ["deployments","statefulsets"]
verbs: ["list","get","patch","update"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name pod-autoscaler
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# Number of replicas to scale
- name: REPLICA_COUNT
value: '5'
labels:
name: pod-autoscaler
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
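No sample engine for pod-autoscaler appears in this diff, so purely as a sketch under the same conventions as the other engine.yaml files here (the app selector, namespace, and values are assumptions): the fault scales the selected workload to REPLICA_COUNT for the chaos duration.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pod-autoscaler-demo    # illustrative name
  namespace: default
spec:
  engineState: 'active'
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: pod-autoscaler-sa
  experiments:
    - name: pod-autoscaler
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '60'
            - name: REPLICA_COUNT
              value: '5'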

@@ -1,47 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-08-08T10:28:08Z
name: pod-autoscaler
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: pod-autoscaler
categoryDescription: |
The experiment aims to check the ability of nodes to accommodate the number of replicas of a given application pod.
This experiment can also be used for other scenarios, such as checking the node auto-scaling feature. For example, check whether the pods are successfully rescheduled within a specified period when the existing nodes are already running at the specified limits.
keywords:
- Kubernetes
- K8S
- Scale
- Pod
platforms:
- GKE
- EKS
- Minikube
- AKS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit.gaurav@mayadata.io
minKubeVersion: 1.12.0
provider:
name: Mayadata
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/pod-autoscaler
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-autoscaler/
- name: Video
url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-autoscaler/experiment.yaml

@@ -1,71 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-autoscaler-sa
namespace: default
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-autoscaler-sa
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# performs CRUD operations on the deployments and statefulsets
- apiGroups: ["apps"]
resources: ["deployments","statefulsets"]
verbs: ["list","get","patch","update"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-autoscaler-sa
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-autoscaler-sa
subjects:
- kind: ServiceAccount
name: pod-autoscaler-sa
namespace: default

@@ -1,66 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-autoscaler-sa
namespace: default
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-autoscaler-sa
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# performs CRUD operations on the deployments and statefulsets
- apiGroups: ["apps"]
resources: ["deployments","statefulsets"]
verbs: ["list","get","patch","update"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-autoscaler-sa
labels:
name: pod-autoscaler-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-autoscaler-sa
subjects:
- kind: ServiceAccount
name: pod-autoscaler-sa
namespace: default

@@ -1,30 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
chaosServiceAccount: pod-cpu-hog-exec-sa
experiments:
- name: pod-cpu-hog-exec
spec:
components:
env:
- name: TOTAL_CHAOS_DURATION
value: '60' # in seconds
# number of CPU cores to be consumed
# verify the resources the app has been launched with
- name: CPU_CORES
value: '1'
## Percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''

@@ -1,100 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Injects cpu consumption on pods belonging to an app deployment
kind: ChaosExperiment
metadata:
name: pod-cpu-hog-exec
labels:
name: pod-cpu-hog-exec
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Namespaced
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a replicationController)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod (if the parent is an Argo Rollout)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name pod-cpu-hog-exec
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
## Number of CPU cores to stress
- name: CPU_CORES
value: '1'
## Percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''
## Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# The command to kill the chaos process
- name: CHAOS_KILL_COMMAND
value: "kill $(find /proc -name exe -lname '*/md5sum' 2>&1 | grep -v 'Permission denied' | awk -F/ '{print $(NF-1)}')"
- name: TARGET_PODS
value: ''
## it defines the sequence of chaos execution for multiple target pods
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: pod-cpu-hog-exec
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest

@@ -1,51 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-16T10:28:08Z
name: pod-cpu-hog-exec
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: pod-cpu-hog-exec
categoryDescription: |
pod-cpu-hog-exec contains chaos to consume CPU resources of specified containers in Kubernetes pods.
- Causes high CPU resource consumption utilizing one or more cores by triggering md5sum commands
- The application pod should be healthy once chaos is stopped. The expectation is that service requests are served despite the chaos.
keywords:
- Kubernetes
- K8S
- CPU
- Pod
- Exec
- Stress
platforms:
- GKE
- Packet(Kubeadm)
- Minikube
- EKS
- AKS
- Kind
maturity: alpha
maintainers:
- name: ksatchit
email: karthik@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/pod-cpu-hog-exec
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-cpu-hog-exec/
- name: Video
url: https://www.youtube.com/watch?v=MBGSPmZKb2I
icon:
- base64data: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog-exec/experiment.yaml

@@ -1,85 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a replicationController)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod (if the parent is an Argo Rollout)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
# use litmus psp
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames: ["litmus"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-cpu-hog-exec-sa
subjects:
- kind: ServiceAccount
name: pod-cpu-hog-exec-sa
namespace: default

@@ -1,80 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
rules:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a replicationController)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod (if the parent is an Argo Rollout)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-cpu-hog-exec-sa
namespace: default
labels:
name: pod-cpu-hog-exec-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-cpu-hog-exec-sa
subjects:
- kind: ServiceAccount
name: pod-cpu-hog-exec-sa
namespace: default

@@ -1,36 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
# It can be active/stop
engineState: 'active'
appinfo:
appns: 'default'
applabel: 'app=nginx'
appkind: 'deployment'
chaosServiceAccount: pod-cpu-hog-sa
experiments:
- name: pod-cpu-hog
spec:
components:
env:
- name: TOTAL_CHAOS_DURATION
value: '60' # in seconds
- name: CPU_CORES
value: '1'
## Percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''
## provide the cluster runtime
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'

@@ -1,122 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Injects cpu consumption on pods belonging to an app deployment
kind: ChaosExperiment
metadata:
name: pod-cpu-hog
labels:
name: pod-cpu-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Namespaced
permissions:
# Create and monitor the experiment & helper pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
# Performs CRUD operations on the events inside chaosengine and chaosresult
- apiGroups: [""]
resources: ["events"]
verbs: ["create","get","list","patch","update"]
# Fetch configmap details and mount them to the experiment pod (if specified)
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list"]
# Track and get the runner, experiment, and helper pod logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for executing commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get","list","create"]
# deriving the parent/owner details of the pod (if the parent is a deployment, statefulset, replicaset, or daemonset)
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if the parent is a replicationController)
- apiGroups: [""]
resources: ["replicationcontrollers"]
verbs: ["get","list"]
# deriving the parent/owner details of the pod (if the parent is an Argo Rollout)
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
image: "litmuschaos/go-runner:latest"
imagePullPolicy: Always
args:
- -c
- ./experiments -name pod-cpu-hog
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
## Number of CPU cores to stress
- name: CPU_CORES
value: '1'
## LOAD CPU WITH GIVEN PERCENT LOADING FOR THE CPU STRESS WORKERS.
## 0 IS EFFECTIVELY A SLEEP (NO LOAD) AND 100 IS FULL LOADING
- name: CPU_LOAD
value: '100'
## Percentage of total pods to target
- name: PODS_AFFECTED_PERC
value: ''
## Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
## It is used in pumba lib only
- name: LIB_IMAGE
value: 'litmuschaos/go-runner:latest'
## It is used in pumba lib only
- name: STRESS_IMAGE
value: 'alexeiled/stress-ng:latest-ubuntu'
## provide the cluster runtime
- name: CONTAINER_RUNTIME
value: 'containerd'
# provide the socket file path
- name: SOCKET_PATH
value: '/run/containerd/containerd.sock'
- name: TARGET_PODS
value: ''
# To select pods on specific node(s)
- name: NODE_LABEL
value: ''
## it defines the sequence of chaos execution for multiple target pods
## supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
labels:
name: pod-cpu-hog
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/runtime-api-usage: "true"
app.kubernetes.io/version: latest
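CONTAINER_RUNTIME and SOCKET_PATH above default to containerd. On clusters with a different runtime both values need to change together; a sketch of the env override for a Docker-based node, following the usual Litmus pairing (verify the socket path on your nodes):

env:
  - name: CONTAINER_RUNTIME
    value: 'docker'
  - name: SOCKET_PATH
    value: '/var/run/docker.sock'
  # for CRI-O the common pairing is 'crio' with '/var/run/crio/crio.sock'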

@@ -1,53 +0,0 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-16T10:28:08Z
name: pod-cpu-hog
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: pod-cpu-hog
categoryDescription: |
Pod-CPU-Hog contains chaos to consume CPU resources of specified containers in Kubernetes pods.
- Causes CPU resource consumption on specified application containers using cgroups and litmus nsutil, which consume CPU resources of the given target containers.
- It can test the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.
- The application pod should be healthy once chaos is stopped. The expectation is that service requests are served despite the chaos.
keywords:
- Kubernetes
- K8S
- CPU
- Pod
- Stress
platforms:
- GKE
- Packet(Kubeadm)
- Minikube
- EKS
- AKS
- Kind
maturity: alpha
maintainers:
- name: ksatchit
email: karthik@chaosnative.com
- name: Udit Gaurav
email: udit@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/generic/pod-cpu-hog
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-cpu-hog/
- name: Video
url: https://www.youtube.com/watch?v=MBGSPmZKb2I
icon:
- base64data: ""
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-cpu-hog/experiment.yaml

Some files were not shown because too many files have changed in this diff Show More