Compare commits


41 Commits

Author SHA1 Message Date
Udit Gaurav ad097d673c
[Cherry-Pick for 2.2.0] (#511)
* chore(kyverno): Adding kyverno pod security policies for litmus pods (#504)

* chore(kyverno): Adding security policies

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* chore(kyverno): updating policies

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* chore(kyverno): updating policies

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* chore(kyverno): changed the file names

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* [Cherry-Pick for 2.2.0]

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham@chaosnative.com>
2021-10-14 01:38:28 +05:30
OUM NIVRATHI KALE bf751c658d
Updating schema for CMDProbe in Workflows (#509)
Signed-off-by: Oum Kale <oumkale@chaosnative.com>
2021-10-12 19:36:37 +05:30
litmusbot e018c102ec 1331766963: version upgraded for chaos-charts 2021-10-12 05:35:11 +00:00
OUM NIVRATHI KALE 927905e31e
updating versions (#508)
Signed-off-by: Oum Kale <oumkale@chaosnative.com>
2021-10-12 11:04:41 +05:30
litmusbot 40021df558 1294760635: version upgraded for chaos-charts 2021-10-01 10:58:21 +00:00
Udit Gaurav f683ddbee0
Cherry Pick for 2.1.1 (#507)
* updated vm-poweroff experiment docs; updated maintainer description of gcp chaos docs (#505)

Signed-off-by: neelanjan00 <neelanjan@chaosnative.com>

* resolve conflict

Signed-off-by: udit <udit@chaosnative.com>

* cherry pick for 2.1.1

Signed-off-by: udit <udit@chaosnative.com>

* update version

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Neelanjan Manna <neelanjan@chaosnative.com>
Co-authored-by: Shubham Chaudhary <shubham@chaosnative.com>
2021-10-01 16:27:53 +05:30
litmusbot bd562b8a15 1233402804: version upgraded for chaos-charts 2021-09-14 11:22:04 +00:00
Shubham Chaudhary d0513c672f
Merge pull request #503 from uditgaurav/v2.1.x-tracker
Cherry-Pick for 2.1.0
2021-09-14 16:51:35 +05:30
udit 280b26f7fd Chore(2.1.0): Add experiment manifest for 2.1.0
Signed-off-by: udit <udit@chaosnative.com>
2021-09-14 16:50:09 +05:30
Shubham Chaudhary a106336c15 chart(pod-network-partition): Adding chart of the pod-network-partition experiment (#501)
Signed-off-by: shubham chaudhary <shubham@chaosnative.com>
2021-09-14 16:15:46 +05:30
Shubham Chaudhary a631487804 adding minimal keywords for searching in chaoshub (#502)
* adding minimal keywords for searching on chaoshub

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* adding k8s keywords

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>
2021-09-14 16:15:35 +05:30
Akash Shrivastava 9eaae2591a Chore(Azure): azure-disk-loss experiment charts (#482)
* Added charts for azure disk loss experiment

Signed-off-by: Akash Shrivastava <akash@chaosnative.com>
2021-09-14 16:15:22 +05:30
Ishan Gupta 0ec17f3eaa updated default chaos event and verdict queries (#493)
Signed-off-by: ishangupta-ds <ishan@chaosnative.com>
2021-09-14 16:15:09 +05:30
Shubham Chaudhary 63907237ba
(docs): resolving conflicts (#499)
Signed-off-by: shubham chaudhary <shubham@chaosnative.com>
2021-08-16 15:45:05 +05:30
litmusbot 2d254340af 1132944969: version upgraded for chaos-charts 2021-08-15 15:28:38 +00:00
Udit Gaurav 6efdae0ab7
[Cherry Pick for 2.0.0] (#497)
* Added default kill command in experiment.yaml (#495)

* Added default kill command in experiment.yaml

Signed-off-by: Akash Shrivastava <akash@chaosnative.com>

* Updating environment and kafka image tag (#494)

* updating env

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* Added SCALE_SET to azure instance stop experiment (#496)

Signed-off-by: Akash Shrivastava <akash@chaosnative.com>

* update version from 2.0.0-RC1 to 2.0.0

Signed-off-by: udit <udit@chaosnative.com>

* update version from 2.0.0-RC1 to 2.0.0

Signed-off-by: udit <udit@chaosnative.com>

* update workflows directory

Signed-off-by: udit <udit@chaosnative.com>

* update workflows directory

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Akash Shrivastava <akash@chaosnative.com>
Co-authored-by: OUM NIVRATHI KALE <oumkale@chaosnative.com>
2021-08-15 20:58:12 +05:30
Udit Gaurav fe8b4fbd25
Fix: Remove disk loss charts (#492)
Signed-off-by: udit <udit@chaosnative.com>
2021-08-06 01:29:21 +05:30
litmusbot a5c7f88fb9 1102642648: version upgraded for chaos-charts 2021-08-05 19:13:59 +00:00
Udit Gaurav 09a94a79cc
[Cherry-Pick for 2.0.0-RC1] (#491)
* Chore(new_chart): Add Chaos Charts for Azure instance terminate experiment (#442)

* Chore(new_chart): Add Chaos Charts for Azure instance terminate experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Update azure.chartserviceversion.yaml

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>

* resolve conflicts

Signed-off-by: udit <udit@chaosnative.com>

* GCP VM Instance Stop charts (#480)

* Added charts for GCP vm-instance-stop and vm-disk-loss experiments

* Removed temp file

* GCP charts updated for gcp-vm-instance-stop; removed vm-disk-loss experiment

* Added experiment image name

* Removed exec keyword, updated chaos interval

* Updated gcp charts messages

* Updated image tag to ci

* Updated experiment name in csv, removed exec

* Added charts for gcp-vm-disk-loss experiment

* Removed gcp-vm-disk-loss charts

* Removed experiment inputs and tagged the experiment image to ci

* Removed jobCleanupPolicy

* updated experiment description

* updated chartserviceversion description

* removed patch verb

Co-authored-by: Udit Gaurav <35391335+uditgaurav@users.noreply.github.com>
Co-authored-by: Shubham Chaudhary <shubham@chaosnative.com>

* GCP VM Disk Loss Charts (#483)

* Added charts for GCP vm-disk-loss experiments

Signed-off-by: neelanjan00 <neelanjanmanna@gmail.com>

* update entrypoint for py-runner (#490)

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* update go-runner version to v2.0.0-RC1

Signed-off-by: udit <udit@chaosnative.com>

* remove disk-loss

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>
Co-authored-by: Neelanjan Manna <neelanjanmanna@gmail.com>
Co-authored-by: Shubham Chaudhary <shubham@chaosnative.com>
Co-authored-by: OUM NIVRATHI KALE <oumkale@chaosnative.com>
2021-08-06 00:43:30 +05:30
litmusbot cb9e6584f9 1035251081: version upgraded for chaos-charts 2021-07-15 20:16:24 +00:00
Shubham Chaudhary e5707b4610
[cherrypick for 1.13.8] (#489)
* Chore(aws-ssm): Add AWS SSM chaos experiment charts (#469)

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>

* stress charts(resolved conflicts)

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* Add litmus-portal dashboards in monitoring directory (#478)

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* engine minimal env(resolved conflicts)

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* updates for portal dashboards (#485)

Signed-off-by: ishangupta-ds <ishan@chaosnative.com>

* adding bank-of-anthos predefined workflow (#481)

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* ansible cleanup(resolved conflicts)

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* Updated AWS SSM name

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* updating icon for workflow

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* byoc cleanup(resolved conflicts)

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

* updating version in charts & workflows

Signed-off-by: shubham chaudhary <shubham@chaosnative.com>

Co-authored-by: Udit Gaurav <35391335+uditgaurav@users.noreply.github.com>
Co-authored-by: Amit Kumar Das <40661238+amityt@users.noreply.github.com>
Co-authored-by: Ishan Gupta <ishan@chaosnative.com>
Co-authored-by: OUM NIVRATHI KALE <oumkale@chaosnative.com>
Co-authored-by: Adarshkumar14 <56665829+Adarshkumar14@users.noreply.github.com>
Co-authored-by: Amit Kumar Das <amit@chaosnative.com>
2021-07-16 01:45:53 +05:30
litmusbot 72db7164ab 981457570: version upgraded for chaos-charts 2021-06-29 04:58:03 +00:00
Shubham Chaudhary 0a06f0c44a
feat(1.13.7): updating go-runner image tag (#477)
* feat(1.13.7): updating go-runner image tag

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* feat(1.13.7): updating version

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>
2021-06-29 10:27:40 +05:30
litmusbot 32fc5d5892 940298847: version upgraded for chaos-charts 2021-06-15 18:22:01 +00:00
Udit Gaurav 165dcf3936
[Cherry-Pick for 1.13.6] (#474)
* chore(wf): Adding csv & icons for workflows (#464)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(wf): adding platforms in wf csv (#465)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(jobCleanUpPolicy): added retain as default jobCleanUpPolicy

Signed-off-by: udit <udit@chaosnative.com>

* Added separate directory for sock-shop application

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* Minor change in CSV

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* Updated icons url and csv filename

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* Removed CSV for sock-shop-promProbe workflow (#470)

* Removed CSV for sock-shop-PromProbe

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* Renamed sock-shop-cmdProbe to sock-shop

Signed-off-by: Amit Kumar Das <amit@chaosnative.com>

* Chore(vmware): Remove some ENVs from vmware chaosengine (#472)

Signed-off-by: udit <udit@chaosnative.com>

* Chore(charts): Update docker service kill charts

Signed-off-by: udit <udit@chaosnative.com>

* change the experiment version to 1.13.6 in charts and 1.13.5 in workflows

Signed-off-by: udit <udit@chaosnative.com>

* Resolve Conflicts

Signed-off-by: udit <udit@chaosnative.com>

* Change jobCleanupPolicy to retain

Signed-off-by: udit <udit@chaosnative.com>

* update the experiment version

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>
Co-authored-by: Amit Kumar Das <amit@chaosnative.com>
Co-authored-by: Amit Kumar Das <40661238+amityt@users.noreply.github.com>
2021-06-15 23:51:36 +05:30
OUM NIVRATHI KALE 6ad01f7c86
update podtato-head workflow (#467)
Signed-off-by: Oum Kale <oumkale@chaosnative.com>
2021-06-01 13:57:43 +05:30
litmusbot d9d07280bb 894758504: version upgraded for chaos-charts 2021-06-01 04:45:58 +00:00
VEDANT SHROTRIA 8ba459a496
Updated images for chaos-runner and endpoints for experiments to 1.13.5. (#466)
Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
2021-06-01 10:15:30 +05:30
Udit Gaurav 655a67cd48
Fix: Update version for pod dns experiments (#462)
Signed-off-by: uditgaurav <udit@chaosnative.com>
2021-05-15 22:30:12 +05:30
VEDANT SHROTRIA b2b8630fad
Updated image for install and revert-chaos (#460)
Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
2021-05-15 22:13:38 +05:30
litmusbot 8f5dfa99f4 845257218: version upgraded for chaos-charts 2021-05-15 16:30:20 +00:00
Udit Gaurav f60b30b94a
[Cherry-Pick for 1.13.5] (#461)
* chore(charts):Added Labels for Workflow and Engine  (#437)

* Adding subject and label

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updating context for infra level

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updating chaosengine name

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updating image to litmuschaos/k8s:latest

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updating to litmuschaos/k8s:latest (#456)

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* Added DNS Spoof chaos

Signed-off-by: uditgaurav <udit@chaosnative.com>

* fixed typo (#457)

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* Chore(ebs-loss): Add EBS Loss By Tag Experiment (#459)

* Chore(ebs-loss): Add EBS Loss By Tag Experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* [Cherry-Pick for 1.13.5]

Signed-off-by: uditgaurav <udit@chaosnative.com>

Co-authored-by: OUM NIVRATHI KALE <oum.kale@mayadata.io>
Co-authored-by: Soumya Ghosh Dastidar <44349253+gdsoumya@users.noreply.github.com>
2021-05-15 21:59:52 +05:30
litmusbot d5b932ae26 806642959: version upgraded for chaos-charts 2021-05-03 11:23:27 +00:00
Udit Gaurav 273cc12f19
[Cherry-Pick for 1.13.4] (#452)
* chore(psp): removed runAsUser from psp and update the go-runner image in workflows (#446)

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Chore(New Charts): Add charts for vm-poweroff experiment (#433)

* Chore(New Charts): Adding Charts for vm-delete experiment

Signed-off-by: Ubuntu <ubuntu@ip-172-31-31-101.ap-south-1.compute.internal>

* version upgraded for chaos-charts

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Namespace scope flag added  and flow update for workflows  (#448)

* namespace scope flag added for workflows

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* Priority flow for workflow changed

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updated scope

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* updated scope

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* chore(env): adding node-label in node experiments and block-size in disk-fill (#450)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

Co-authored-by: Udit Gaurav <35391335+uditgaurav@users.noreply.github.com>

* Chore(cleanup): Remove go binary and vendor file from chaos charts

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Chore(vmware): Update VMware CSV file (#451)

Signed-off-by: uditgaurav <udit@chaosnative.com>

* update the cr version to 1.13.4

Signed-off-by: uditgaurav <udit@chaosnative.com>

* resolve conflicts

Signed-off-by: uditgaurav <udit@chaosnative.com>

* update version

Signed-off-by: uditgaurav <udit@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>
Co-authored-by: iassurewipro <81607462+iassurewipro@users.noreply.github.com>
Co-authored-by: litmusbot <litmuschaos@gmail.com>
Co-authored-by: OUM NIVRATHI KALE <oum.kale@mayadata.io>
2021-05-03 16:52:55 +05:30
Shubham Chaudhary d3eb052a6a
chore(version): updating the image version in workflows (#447)
Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>
2021-04-20 22:04:55 +05:30
litmusbot 6c7ac2d7ee 753654842: version upgraded for chaos-charts 2021-04-15 22:36:50 +00:00
Udit Gaurav cf8982ef27
Remove pod dns chaos from 1.13.3 build (#445)
Signed-off-by: uditgaurav <udit@chaosnative.com>
2021-04-16 04:06:36 +05:30
Udit Gaurav f4869f720e
[Cherry-Pick 1.13.3] (#444)
* chore(workflow): Updating the http schema and few minor fixes

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Updated image with litmuschaos/k8s:latest required for predefined workflows (#430)

Signed-off-by: Amit Kumar Das <amitkumar.das@mayadata.io>

* chore(nodeselectors): comment out the nodeselectors from the chaosengine (#432)

* update(workflows): Updating k8Probe schema inside workflows

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(nodeselectors): comment out the nodeselectors from the chaosengine

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(chaosengine): Removed monitoring from all experiments & appinfo from infra experiments (#431)

* update(workflows): Updating k8Probe schema inside workflows

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(chaosengine): Removed monitoring from all experiments & appinfo from infra experiments

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(pre-define workflow): Added podtato-head workflow  (#434)

* podtato-head workflow added

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* podtato-head predefined workflow

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>

* updating sock-shop workflow app-deployer schema (#435)

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>

* appinfo removed from engines, infra level experiments

Signed-off-by: Oum Kale <oumkale@chaosnative.com>

* (chore)env: add stress image env to pod resource exp (#439)

Signed-off-by: ksatchit <karthik.s@mayadata.io>

* Added charts for Pod DNS Chaos (#436)

* Added charts for pod dns

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* chore(disk-fill): converting disk-fill RBAC to Role from ClusterRole (#441)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* Chore(ec2): Add charts for ec2 terminate experiment-by-id and ec2-terminate-by-tag (#440)

* Chore(ec2): Add charts for ec2 terminate experiment-by-id and ec2-terminate-by-tag

Signed-off-by: uditgaurav <udit@chaosnative.com>

* add ec2 terminates by id and tag in pkg and csv

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Chore(new_chart): Add Chaos Charts for Azure instance terminate experiment (#442)

* Chore(new_chart): Add Chaos Charts for Azure instance terminate experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Update azure.chartserviceversion.yaml

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>

* update version to 1.13.3

Signed-off-by: uditgaurav <udit@chaosnative.com>

* Remove azure experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* update version and remove pod dns experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* update version in workflow

Signed-off-by: uditgaurav <udit@chaosnative.com>

Co-authored-by: OUM NIVRATHI KALE <oum.kale@mayadata.io>
Co-authored-by: Amit Kumar Das <40661238+amityt@users.noreply.github.com>
Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>
Co-authored-by: Oum Kale <oumkale@chaosnative.com>
Co-authored-by: Karthik Satchitanand <karthik.s@mayadata.io>
Co-authored-by: Soumya Ghosh Dastidar <44349253+gdsoumya@users.noreply.github.com>
2021-04-16 03:49:21 +05:30
litmusbot 9b310244a1 655363891: version upgraded for chaos-charts 2021-03-15 19:34:43 +00:00
Udit Gaurav 77b357656e
Cherry Pick for 1.13.2 (#427)
* charts

Signed-off-by: oumkale <oum.kale@mayadata.io>

* sock-shop workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* sock-shop workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* sock-shop workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* predefined workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* predefined workflow

Signed-off-by: oumkale <oum.kale@mayadata.io>

* fix root issue (#422)

Signed-off-by: oumkale <oum.kale@mayadata.io>

* update installation of experiment (#419)

Signed-off-by: oumkale <oum.kale@mayadata.io>

* chore(permissions): Adding minimal permissions in all experiments (#423)

* chore(permissions): Adding minimal permissions in all experiments

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* fix(script): convert combine experiments code to binary

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* disk-fill experiment has been added for sock-shop workflow (#420)

Signed-off-by: oumkale <oum.kale@mayadata.io>

* chore(env): adding EPHEMERAL_STORAGE_MEBIBYTES env in disk-fill (#424)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* Chore(ec2): Update rbac permission and add managed nodegroup (#425)

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: udit <udit@chaosnative.com>

* Cherry Pick for 1.13.2

Signed-off-by: udit <udit@chaosnative.com>

* update workflow image to 1.13.2

Signed-off-by: udit <udit@chaosnative.com>

* update workflow hub link to 1.13.2

Signed-off-by: udit <udit@chaosnative.com>

* Chore(cleanup): Remove unwanted files (#426)

Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: udit <udit@chaosnative.com>

* update workflow hub link to 1.13.2

Signed-off-by: udit <udit@chaosnative.com>

* update(workflows): Updating k8Probe schema inside workflows (#428)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>
Signed-off-by: udit <udit@chaosnative.com>

Co-authored-by: oumkale <oum.kale@mayadata.io>
Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>
Co-authored-by: udit <udit@chaosnative.com>
Co-authored-by: litmusbot <litmuschaos@gmail.com>
2021-03-16 01:04:26 +05:30
Udit Gaurav f60c99bfa3
Chore(v1.13.0): Update charts with version 1.13.0 (#415)
* Chore(v1.13.0): Update charts with version 1.13.0

Signed-off-by: udit <udit.gaurav@mayadata.io>

* Update version in workflows

Signed-off-by: udit <udit.gaurav@mayadata.io>
2021-02-16 01:11:54 +05:30
681 changed files with 26332 additions and 22563 deletions

.DS_Store (vendored binary file added; content not shown)

.gitignore (vendored; 215 lines deleted)
@@ -1,215 +0,0 @@
# Created by https://www.toptal.com/developers/gitignore/api/git,visualstudiocode,goland+all,jetbrains+all,macos
# Edit at https://www.toptal.com/developers/gitignore?templates=git,visualstudiocode,goland+all,jetbrains+all,macos
### Git ###
# Created by git for backups. To disable backups in Git:
# $ git config --global mergetool.keepBackup false
*.orig
# Created by git when using merge tools for conflicts
*.BACKUP.*
*.BASE.*
*.LOCAL.*
*.REMOTE.*
*_BACKUP_*.txt
*_BASE_*.txt
*_LOCAL_*.txt
*_REMOTE_*.txt
### GoLand+all ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# AWS User-specific
.idea/**/aws.xml
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
# Gradle
.idea/**/gradle.xml
.idea/**/libraries
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
cmake-build-*/
# Mongo Explorer plugin
.idea/**/mongoSettings.xml
# File-based project format
*.iws
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# SonarLint plugin
.idea/sonarlint/
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
# Editor-based Rest Client
.idea/httpRequests
# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
### GoLand+all Patch ###
# Ignore everything but code style settings and run configurations
# that are supposed to be shared within teams.
.idea/*
!.idea/codeStyles
!.idea/runConfigurations
### JetBrains+all ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
# AWS User-specific
# Generated files
# Sensitive or high-churn files
# Gradle
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
# Mongo Explorer plugin
# File-based project format
# IntelliJ
# mpeltonen/sbt-idea plugin
# JIRA plugin
# Cursive Clojure plugin
# SonarLint plugin
# Crashlytics plugin (for Android Studio and IntelliJ)
# Editor-based Rest Client
# Android studio 3.1+ serialized cache file
### JetBrains+all Patch ###
# Ignore everything but code style settings and run configurations
# that are supposed to be shared within teams.
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### macOS Patch ###
# iCloud generated files
*.icloud
### VisualStudioCode ###
.vscode/
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
# Local History for Visual Studio Code
.history/
# Built Visual Studio Code Extensions
*.vsix
### VisualStudioCode Patch ###
# Ignore all local history of files
.history
.ionide
# End of https://www.toptal.com/developers/gitignore/api/git,visualstudiocode,goland+all,jetbrains+all,macos

@@ -10,6 +10,8 @@ Chaos Charts are a groups of categorized chaos experiments, represented as custo
- <b>Generic</b>: It contains chaos to disrupt the state of kubernetes resources, e.g., pod-delete
- <b>OpenEBS</b>: It contains chaos to disrupt the state of OpenEBS control/data plane components, e.g., openebs-target-failure
- <b>Cassandra</b>: It contains chaos to disrupt the state of Cassandra applications, e.g., cassandra-pod-delete
- <b>Kafka</b>: It contains chaos to disrupt the state of Kafka applications, e.g., kafka-broker-pod-delete
- <b>Coredns</b>: It contains chaos to disrupt the state of the Coredns pod, e.g., coredns-pod-delete
- <b>Kube-AWS</b>: It contains chaos to disrupt the state of AWS resources running as part of the kubernetes cluster, e.g., ebs-loss
- <b>Kube-Components</b>: It contains chaos to disrupt the state of kubernetes components, e.g., k8-kube-proxy.

README.md (210 lines changed)
@@ -1,218 +1,34 @@
# Chaos-Charts
[![Slack Channel](https://img.shields.io/badge/Slack-Join-purple)](https://slack.litmuschaos.io)
![GitHub Workflow](https://github.com/litmuschaos/chaos-charts/actions/workflows/push.yml/badge.svg?branch=master)
[![Docker Pulls](https://img.shields.io/docker/pulls/litmuschaos/go-runner.svg)](https://hub.docker.com/r/litmuschaos/go-runner)
[![GitHub issues](https://img.shields.io/github/issues/litmuschaos/chaos-charts)](https://github.com/litmuschaos/chaos-charts/issues)
[![Twitter Follow](https://img.shields.io/twitter/follow/litmuschaos?style=social)](https://twitter.com/LitmusChaos)
[![YouTube Channel](https://img.shields.io/badge/YouTube-Subscribe-red)](https://www.youtube.com/channel/UCa57PMqmz_j0wnteRa9nCaw)
<br><br>
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Flitmuschaos%2Fchaos-charts.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Flitmuschaos%2Fchaos-charts?ref=badge_shield)
This repository hosts the Litmus Chaos Charts. A set of related chaos faults are bundled into a Chaos Chart. Chaos Charts are classified into the following categories.
This repository hosts the Litmus Chaos Charts.
- [Kubernetes Chaos](#kubernetes-chaos)
- [Application Chaos](#application-chaos)
- [Platform Chaos](#platform-chaos)
## Installation Steps for Chart Releases
### Kubernetes Chaos
*Note: Supported from release 1.1.0*
Chaos faults that apply to Kubernetes resources are classified in this category. Following chaos faults are supported for Kubernetes:
<table>
<tr>
<th> Fault Name </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> Container Kill </td>
<td> Kill one container in the application pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/container-kill"> container-kill </a></td>
<tr>
<tr>
<td> Disk Fill </td>
<td> Fill the Ephemeral Storage of the Pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/disk-fill"> disk-fill </a></td>
<tr>
<tr>
<td> Docker Service Kill </td>
<td> Kill docker service of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/docker-service-kill"> docker-service-kill </a></td>
<tr>
<tr>
<td> Kubelet Service Kill </td>
<td> Kill kubelet service of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/kubelet-service-kill"> kubelet-service-kill </a></td>
<tr>
<tr>
<td> Node CPU Hog </td>
<td> Stress the cpu of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-cpu-hog"> node-cpu-hog </a></td>
<tr>
<tr>
<td> Node Drain </td>
<td> Drain the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-drain"> node-drain </a></td>
<tr>
<tr>
<td> Node IO Stress </td>
<td> Stress the IO of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-io-stress"> node-io-stress </a></td>
<tr>
<tr>
<td> Node Memory Hog </td>
<td> Stress the memory of the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-memory-hog"> node-memory-hog </a></td>
<tr>
<tr>
<td> Node Restart </td>
<td> Restart the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-restart"> node-restart </a></td>
<tr>
<tr>
<td> Node Taint </td>
<td> Taint the target node </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/node-taint"> node-taint </a></td>
<tr>
<tr>
<td> Pod Autoscaler </td>
<td> Scale the replicas of the target application </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-autoscaler"> pod-autoscaler </a></td>
<tr>
<tr>
<td> Pod CPU Hog </td>
<td> Stress the CPU of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-cpu-hog"> pod-cpu-hog </a></td>
<tr>
<tr>
<td> Pod Delete </td>
<td> Delete the target pods </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-delete"> pod-delete </a></td>
<tr>
<tr>
<td> Pod DNS Spoof </td>
<td> Spoof dns requests to desired target hostnames </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-dns-spoof"> pod-dns-spoof </a></td>
<tr>
<tr>
<td> Pod DNS Error </td>
<td> Error the dns requests of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-dns-error"> pod-dns-error </a></td>
<tr>
<tr>
<td> Pod IO Stress </td>
<td> Stress the IO of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-io-stress"> pod-io-stress </a></td>
<tr>
<tr>
<td> Pod Memory Hog </td>
<td> Stress the memory of the target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-memory-hog"> pod-memory-hog </a></td>
<tr>
<tr>
<td> Pod Network Latency </td>
<td> Induce the network latency in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-latency"> pod-network-latency </a></td>
<tr>
<tr>
<td> Pod Network Corruption </td>
<td> Induce the network packet corruption in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-corruption"> pod-network-corruption </a></td>
<tr>
<tr>
<td> Pod Network Duplication </td>
<td> Induce the network packet duplication in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-duplication"> pod-network-duplication </a></td>
<tr>
<tr>
<td> Pod Network Loss </td>
<td> Induce the network loss in target pod </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-loss"> pod-network-loss </a></td>
<tr>
<tr>
<td> Pod Network Partition </td>
<td> Disrupt network connectivity to kubernetes pods </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/kubernetes/pod-network-partition"> pod-network-partition </a></td>
<tr>
</table>
### Application Chaos
While chaos faults under the Kubernetes category offer the ability to induce chaos into Kubernetes resources, it is difficult to analyze and conclude if the induced chaos found a weakness in a given application. The application specific chaos faults are built with some checks on *pre-conditions* and some expected outcomes after the chaos injection. The result of the chaos faults is determined by matching the outcome with the expected outcome.
<table>
<tr>
<th> Fault Category </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> Spring Boot Faults </td>
<td> Injects faults in Spring Boot applications </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/spring-boot"> Spring Boot Faults</a></td>
<tr>
</table>
### Platform Chaos
Chaos faults that inject chaos into platform and infrastructure resources are classified into this category. Management of platform resources varies significantly across providers, so Chaos Charts may be maintained separately for each platform (for example: AWS, GCP, Azure, VMWare, etc.)
Following chaos faults are classified in this category:
<table>
<tr>
<th> Fault Category </th>
<th> Description </th>
<th> Link </th>
</tr>
<tr>
<td> AWS Faults </td>
<td> AWS Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/aws"> AWS Faults </a></td>
<tr>
<tr>
<td> Azure Faults </td>
<td> Azure Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/azure"> Azure Faults </a></td>
<tr>
<tr>
<td> GCP Faults </td>
<td> GCP Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/gcp"> GCP Faults </a></td>
<tr>
<tr>
<td> VMWare Faults </td>
<td> VMWare Platform specific chaos </td>
<td> <a href="https://github.com/litmuschaos/chaos-charts/tree/master/faults/vmware"> VMWare Faults </a></td>
<tr>
</table>
## Installation Steps for Chart Releases
*Note: Supported from release 3.0.0*
- To install the chaos faults from a specific chart for a given release, execute the following commands
- To install the chaos experiments from a specific chart for a given release, execute the following commands
with the desired `<release_version>`, `<chart_name>` & `<namespace>`
```bash
## downloads and unzips the released source
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/<release_version>.tar.gz)
## installs the chaosexperiment resources
find chaos-charts-<release_version> -name experiments.yaml | grep <chart-name> | xargs kubectl apply -n <namespace> -f
```
- For example, to install the *Kubernetes* fault chart bundle for release *3.0.0*, in the *sock-shop* namespace, run:
- For example, to install the *generic* experiment chart bundle for release *1.1.0*, in the *sock-shop* namespace, run:
```bash
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/3.0.0.tar.gz)
find chaos-charts-3.0.0 -name experiments.yaml | grep kubernetes | xargs kubectl apply -n sock-shop -f
tar -zxvf <(curl -sL https://github.com/litmuschaos/chaos-charts/archive/1.1.0.tar.gz)
find chaos-charts-1.1.0 -name experiments.yaml | grep generic | xargs kubectl apply -n sock-shop -f
```
- If you would like to install a specific fault, replace the `experiments.yaml` in the above command with the relative path of the fault manifest within the parent chart. For example, to install only the *pod-delete* fault, run:
- If you would like to install a specific experiment, replace the `experiments.yaml` in the above command with the relative
path of the experiment manifest within the parent chart. For example, to install only the *pod-delete* experiment, run:
```bash
find chaos-charts-3.0.0 -name fault.yaml | grep 'kubernetes/pod-delete' | xargs kubectl apply -n sock-shop -f
find chaos-charts-1.1.0 -name experiment.yaml | grep 'generic/pod-delete' | xargs kubectl apply -n sock-shop -f
```
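As a quick sanity check (an editor's sketch, not part of the original README), the freshly applied CRs can be listed with kubectl; this assumes the commands above targeted the *sock-shop* namespace:
```bash
# list the ChaosExperiment CRs created by the apply commands above
kubectl get chaosexperiments -n sock-shop
```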

@@ -0,0 +1,10 @@
# Remote namespace
# This experiment helps you kill a microservice running on the k8 cluster
* Apply experiments for K8 - `kubectl apply -f experiments.yaml`
* Validate the experiments for k8 - `kubectl get chaosexperiments`
* Setup RBAC as admin mode - `kubectl apply -f rbac.yaml`
* Create pod Experiment - for health experiment -`kubectl create -f engine-kiam.yaml`
* Validate experiment - `kubectl get pods -w`
* Validate logs - `kubectl logs -f <delete pod>`
* Clean up chaosexperiment -`kubectl delete -f engine.yaml`
* Clean up rbac -`kubectl delete -f rbac.yaml`

@@ -0,0 +1,36 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-service-kill-health
  namespace: default
spec:
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  annotationCheck: 'false'
  engineState: 'active'
  chaosServiceAccount: chaos-admin
  experiments:
    - name: k8-service-kill
      spec:
        components:
          env:
            # set chaos namespace
            - name: NAME_SPACE
              value: 'default'
            # set chaos label name
            - name: LABEL_NAME
              value: 'nginx'
            # pod endpoint
            - name: APP_ENDPOINT
              value: 'localhost'
            - name: FILE
              value: 'service-app-kill-health.json'
            - name: REPORT
              value: 'true'
            - name: REPORT_ENDPOINT
              value: 'none'
            - name: TEST_NAMESPACE
              value: 'default'

@@ -0,0 +1,80 @@
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
  name: k8-service-kill
  labels:
    name: k8-service-kill
    app.kubernetes.io/part-of: litmus
    app.kubernetes.io/component: chaosexperiment
    app.kubernetes.io/version: 1.13.6
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups:
          - ""
          - "apps"
          - "batch"
          - "litmuschaos.io"
        resources:
          - "deployments"
          - "jobs"
          - "pods"
          - "configmaps"
          - "chaosengines"
          - "chaosexperiments"
          - "chaosresults"
        verbs:
          - "create"
          - "list"
          - "get"
          - "patch"
          - "update"
          - "delete"
      - apiGroups:
          - ""
        resources:
          - "nodes"
        verbs:
          - "get"
          - "list"
    labels:
      name: k8-service-kill
      app.kubernetes.io/part-of: litmus
    image: "litmuschaos/py-runner:2.1.0"
    args:
      - -c
      - python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py; exit 0
    command:
      - /bin/bash
    env:
      - name: CHAOSTOOLKIT_IN_POD
        value: 'true'
      - name: FILE
        value: 'service-app-kill-health.json'
      - name: NAME_SPACE
        value: ''
      - name: LABEL_NAME
        value: ''
      - name: APP_ENDPOINT
        value: ''
      - name: PERCENTAGE
        value: '50'
      - name: REPORT
        value: 'true'
      - name: REPORT_ENDPOINT
        value: 'none'
      - name: TEST_NAMESPACE
        value: 'default'
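Once a ChaosEngine referencing this experiment has run, the verdict lands in a ChaosResult CR. A sketch of how to inspect it, assuming the `k8-service-kill-health` engine shown earlier and Litmus's usual `<engine-name>-<experiment-name>` result-naming convention:
```bash
# list all results, then inspect the verdict of this particular run
kubectl get chaosresults -n default
kubectl describe chaosresult k8-service-kill-health-k8-service-kill -n default
```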

@@ -0,0 +1,36 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
  name: k8-service-kill
  version: 0.0.4
  annotations:
    categories: Kubernetes
    vendor: CNCF
    createdAt: 2020-02-24T10:28:08Z
    support: https://slack.kubernetes.io/
spec:
  displayName: k8-service-kill
  categoryDescription: |
    K8 service kill contains chaos to kill a micro service running on the k8 cluster. It uses chaostoolkit to inject micro service kill against specified applications
  keywords:
    - Kubernetes
    - State
  platforms:
    - Minikube
  maturity: alpha
  maintainers:
    - name: sumit
      email: sumit_nagal@intuit.com
  minKubeVersion: 1.12.0
  provider:
    name: Intuit
  labels:
    app.kubernetes.io/component: chartserviceversion
    app.kubernetes.io/version: 2.1.0
  links:
    - name: Source Code
      url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
  icon:
    - url:
      mediatype: ""
  chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/generic/k8-service-kill/experiment.yaml

@@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
rules:
  - apiGroups: ["","apps","batch"]
    resources: ["jobs","deployments","daemonsets"]
    verbs: ["create","list","get","patch","delete"]
  - apiGroups: ["","litmuschaos.io"]
    resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
    verbs: ["get","create","update","patch","delete","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaos-admin
subjects:
  - kind: ServiceAccount
    name: chaos-admin
    namespace: default
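Since the experiments above run under the `chaos-admin` ServiceAccount, a quick impersonation check (an editor's sketch, assuming the manifests were applied to the `default` namespace) confirms the ClusterRole binding took effect before kicking off a run:
```bash
# verify the ServiceAccount is allowed to manage chaosengines
kubectl auth can-i create chaosengines \
  --as=system:serviceaccount:default:chaos-admin -n default
```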

@@ -0,0 +1,20 @@
# Pre-requisite
_In Namespace Changes_
- This experiment assumes that you are using AWS with kubernetes
- This experiment assumes your namespace has the right role for AWS to make AWS API calls
- This experiment also assumes you are using an instance group for your namespace, or that you are aware that if you are using a shared node group, it will impact other pods running on the same EC2 instance
# Procedure
- Apply experiments for k8 - `kubectl apply -f experiments.yaml`
- Validate the experiments for k8 - `kubectl get chaosexperiment`
- Setup RBAC - for pod delete RBAC - `kubectl apply -f rbac.yaml`
- Create pod Experiment - for health experiment -`kubectl create -f engine.yaml`
- Validate experiment - `kubectl get pods -o wide`
- Validate logs - `kubectl logs -f <delete pod>`
- Clean up chaosexperiment -`kubectl delete -f engine.yaml`
- Clean up rbac -`kubectl delete -f rbac.yaml`

@@ -0,0 +1,51 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-aws-ec2-terminate
  namespace: default
spec:
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  # It can be delete/retain
  jobCleanUpPolicy: 'retain'
  engineState: 'active'
  chaosServiceAccount: chaos-admin
  components:
    runner:
      runnerannotation:
        iam.amazonaws.com/role: "k8s-chaosec2access"
  experiments:
    - name: k8-aws-ec2-terminate
      spec:
        components:
          experimentannotation:
            iam.amazonaws.com/role: "k8s-chaosec2access"
          env:
            - name: NAME_SPACE
              value: default
            - name: LABEL_NAME
              value: app=nginx
            - name: APP_ENDPOINT
              value: localhost
            - name: FILE
              value: 'ec2-delete.json'
            - name: AWS_ROLE
              value: 'chaosec2access'
            - name: AWS_ACCOUNT
              value: '0000000000'
            - name: AWS_REGION
              value: 'us-west-2'
            - name: AWS_AZ
              value: 'us-west-2c'
            - name: AWS_RESOURCE
              value: 'ec2-iks'
            - name: AWS_SSL
              value: 'false'
            - name: REPORT
              value: 'true'
            - name: REPORT_ENDPOINT
              value: 'none'
            - name: TEST_NAMESPACE
              value: 'default'
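The kiam annotations above only help if the `k8s-chaosec2access` role is actually assumable from the chaos pods. As a post-run sketch (assuming AWS CLI credentials for the same account are available locally), the terminated instance can be confirmed in the targeted region:
```bash
# confirm instance states in the region set by AWS_REGION above
aws ec2 describe-instances --region us-west-2 \
  --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
  --output table
```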

@@ -0,0 +1,112 @@
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    Deletes an aws instance belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
  name: k8-aws-ec2-terminate
  labels:
    name: k8-aws-ec2-terminate
    app.kubernetes.io/part-of: litmus
    app.kubernetes.io/component: chaosexperiment
    app.kubernetes.io/version: 1.13.6
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups:
          - ""
          - "apps"
          - "batch"
          - "litmuschaos.io"
        resources:
          - "deployments"
          - "jobs"
          - "pods"
          - "configmaps"
          - "chaosengines"
          - "chaosexperiments"
          - "chaosresults"
        verbs:
          - "create"
          - "list"
          - "get"
          - "patch"
          - "update"
          - "delete"
      - apiGroups:
          - ""
        resources:
          - "nodes"
        verbs:
          - "get"
          - "list"
    image: "litmuschaos/py-runner:1.13.8"
    args:
      - -c
      - python /litmus/byoc/chaostest/chaostest/aws/aws_wrapper.py ; exit 0
    command:
      - /bin/bash
    env:
      - name: CHAOSTOOLKIT_IN_POD
        value: 'true'
      - name: FILE
        value: 'ec2-delete.json'
      - name: NAME_SPACE
        value: 'default'
      - name: LABEL_NAME
        value: 'app=nginx'
      - name: APP_ENDPOINT
        value: 'localhost'
      # Period to wait before injection of chaos in sec
      - name: PERCENTAGE
        value: '50'
      # Variable to set for custom report upload
      - name: REPORT
        value: 'false'
      # Variable to set for report upload endpoint
      - name: REPORT_ENDPOINT
        value: 'none'
      # Variable to set for AWS account
      - name: AWS_ACCOUNT
        value: '000000000000'
      # Variable to set for AWS role; make sure you have created this role and have given access
      - name: AWS_ROLE
        value: 'chaosec2access'
      # Variable to set for AWS region
      - name: AWS_REGION
        value: 'us-west-2'
      # Variable to set for AWS AZ
      - name: AWS_AZ
        value: 'us-west-2c'
      # Variable to set for AWS RESOURCE
      - name: AWS_RESOURCE
        value: 'ec2-iks'
      # Variable to set for AWS SSL
      - name: AWS_SSL
        value: 'false'
      # Variable which indicates where the test results CRs will be persisted
      - name: TEST_NAMESPACE
        value: 'default'
    labels:
      name: k8-aws-ec2-terminate
      app.kubernetes.io/part-of: litmus
      app.kubernetes.io/component: experiment-job
      app.kubernetes.io/version: 1.13.6

@@ -0,0 +1,38 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
  name: k8-aws-ec2-terminate
  version: 0.0.1
  annotations:
    categories: Kubernetes
    vendor: CNCF
    createdAt: 2020-02-24T10:28:08Z
    support: https://slack.kubernetes.io/
spec:
  displayName: k8-aws-ec2-terminate
  categoryDescription: |
    AWS EC2 terminate contains chaos to disrupt the state of AWS resources running as part of the kubernetes cluster workload. It uses chaostoolkit to inject ec2 instance termination against specified applications
  keywords:
    - Kubernetes
    - AWS
    - EC2
    - State
  platforms:
    - Minikube
  maturity: alpha
  maintainers:
    - name: sumit
      email: sumit_nagal@intuit.com
  minKubeVersion: 1.12.0
  provider:
    name: Intuit
  labels:
    app.kubernetes.io/component: chartserviceversion
    app.kubernetes.io/version: 1.13.6
  links:
    - name: Source Code
      url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
  icon:
    - url:
      mediatype: ""
  chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-aws/k8-aws-ec2-terminate/experiment.yaml

@@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8-aws-ec2-terminate-sa
  labels:
    name: k8-aws-ec2-terminate-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8-aws-ec2-terminate-sa
  labels:
    name: k8-aws-ec2-terminate-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: ["","apps","batch","extensions","litmuschaos.io","openebs.io","storage.k8s.io"]
    resources: ["chaosengines","chaosexperiments","chaosresults","configmaps","cstorpools","cstorvolumereplicas","events","jobs","persistentvolumeclaims","persistentvolumes","pods","pods/exec","pods/log","secrets","storageclasses","chaosengines","chaosexperiments","chaosresults","configmaps","cstorpools","cstorvolumereplicas","daemonsets","deployments","events","jobs","persistentvolumeclaims","persistentvolumes","pods","pods/eviction","pods/exec","pods/log","replicasets","secrets","services","statefulsets","storageclasses"]
    verbs: ["create","delete","get","list","patch","update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list","patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8-aws-ec2-terminate-sa
  labels:
    name: k8-aws-ec2-terminate-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8-aws-ec2-terminate-sa
subjects:
  - kind: ServiceAccount
    name: k8-aws-ec2-terminate-sa
    namespace: default

@@ -0,0 +1,60 @@
# Generic Chaos experiment for Application team, who want to participate in Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
  name: k8-pod-delete
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups: ["","apps","batch"]
        resources: ["jobs","deployments","daemonsets"]
        verbs: ["create","list","get","patch","delete"]
      - apiGroups: ["","litmuschaos.io"]
        resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
        verbs: ["get","create","update","patch","delete","list"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get","list"]
    labels:
      name: k8-pod-delete
      app.kubernetes.io/part-of: litmus
    image: "litmuschaos/py-runner:1.13.8"
    args:
      - -c
      - python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
    command:
      - /bin/bash
    env:
      - name: CHAOSTOOLKIT_IN_POD
        value: 'true'
      - name: FILE
        value: 'pod-app-kill-count.json'
      - name: NAME_SPACE
        value: ''
      - name: LABEL_NAME
        value: ''
      - name: APP_ENDPOINT
        value: ''
      - name: PERCENTAGE
        value: '50'
      - name: REPORT
        value: 'true'
      - name: REPORT_ENDPOINT
        value: 'none'
      - name: TEST_NAMESPACE
        value: 'default'
---

(icon image files changed: three icons modified in place and six newly added, five at 959 B and one at 44 KiB; binary previews not shown)

@@ -0,0 +1,10 @@
# Remote namespace
* navigate to current directory `charts/generic/k8-alb-ingress-controller/`
* Apply experiments for K8 - `kubectl apply -f experiment.yaml`
* Validate the experiments for k8 - `kubectl get chaosexperiments`
* Setup RBAC as admin mode - `kubectl apply -f rbac-admin.yaml`
* Create pod Experiment - for health experiment -`kubectl create -f engine.yaml`
* Validate experiment - `kubectl get pods -w`
* Validate logs - `kubectl logs -f <delete pod>`
* Clean up chaosexperiment -`kubectl delete -f engine.yaml`
* Clean up rbac-admin -`kubectl delete -f rbac-admin.yaml`

@@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-alb-ingress-controller
  namespace: default
spec:
  appinfo:
    appns: 'default'
    applabel: "app=alb-ingress-controller"
    appkind: deployment
  annotationCheck: 'false'
  engineState: 'active'
  chaosServiceAccount: chaos-admin
  experiments:
    - name: k8-pod-delete
      spec:
        components:
          env:
            # set chaos namespace
            - name: NAME_SPACE
              value: addon-alb-ingress-controller-ns
            # set chaos label name
            - name: LABEL_NAME
              value: app=alb-ingress-controller
            # pod endpoint
            - name: APP_ENDPOINT
              value: 'localhost'
            - name: FILE
              value: 'pod-custom-kill-health.json'
            - name: REPORT
              value: 'true'
            - name: REPORT_ENDPOINT
              value: 'none'
            - name: TEST_NAMESPACE
              value: 'default'
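Note that this engine lives in `default` but targets pods in another namespace via the `NAME_SPACE`/`LABEL_NAME` env pair. A quick pre-check (an editor's sketch) confirms the selector actually matches pods before chaos is injected:
```bash
# the label and namespace mirror the LABEL_NAME and NAME_SPACE values above
kubectl get pods -n addon-alb-ingress-controller-ns \
  -l app=alb-ingress-controller --show-labels
```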

@@ -0,0 +1,58 @@
# Generic Chaos experiment for Application team, who want to participate in Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
  name: k8-pod-delete
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups: ["","apps","batch"]
        resources: ["jobs","deployments","daemonsets"]
        verbs: ["create","list","get","patch","delete"]
      - apiGroups: ["","litmuschaos.io"]
        resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
        verbs: ["get","create","update","patch","delete","list"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get","list"]
    labels:
      name: k8-pod-delete
      app.kubernetes.io/part-of: litmus
    image: "litmuschaos/py-runner:1.13.8"
    args:
      - -c
      - python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py; exit 0
    command:
      - /bin/bash
    env:
      - name: CHAOSTOOLKIT_IN_POD
        value: 'true'
      - name: FILE
        value: 'pod-app-kill-count.json'
      - name: NAME_SPACE
        value: ''
      - name: LABEL_NAME
        value: ''
      - name: APP_ENDPOINT
        value: ''
      - name: PERCENTAGE
        value: '50'
      - name: REPORT
        value: 'true'
      - name: REPORT_ENDPOINT
        value: 'none'
      - name: TEST_NAMESPACE
        value: 'default'

@@ -0,0 +1,34 @@
apiVersion: litmuchaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
  name: k8-alb-ingress-controller
  version: 0.1.0
  annotations:
    categories: Kubernetes
    vendor: CNCF
    createdAt: 2020-02-24T10:28:08Z
    support: https://slack.kubernetes.io/
spec:
  displayName: k8-alb-ingress-controller
  categoryDescription: |
    k8-alb-ingress-controller contains chaos to disrupt state of ingress controller. It uses chaostoolkit to inject random pod delete failures against ingress controller
  keywords:
    - Kubernetes
    - State
    - Ingress
  platforms:
    - Minikube
  maturity: alpha
  maintainers:
    - name: Navin
      email: navin_kumarj@intuit.com
  minKubeVersion: 1.12.0
  provider:
    name: Intuit
  links:
    - name: Source Code
      url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
  icon:
    - url:
      mediatype: ""
  chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-alb-ingress-controller/experiment.yaml

@@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
rules:
  - apiGroups: ["","apps","batch"]
    resources: ["jobs","deployments","daemonsets"]
    verbs: ["create","list","get","patch","delete"]
  - apiGroups: ["","litmuschaos.io"]
    resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
    verbs: ["get","create","update","patch","delete","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaos-admin
  labels:
    name: chaos-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaos-admin
subjects:
  - kind: ServiceAccount
    name: chaos-admin
    namespace: default

@@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8-pod-delete-sa
  namespace: default
  labels:
    name: k8-pod-delete-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k8-pod-delete-sa
  namespace: default
  labels:
    name: k8-pod-delete-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: ["","apps","batch"]
    resources: ["jobs","deployments","daemonsets"]
    verbs: ["create","list","get","patch","delete"]
  - apiGroups: ["","litmuschaos.io"]
    resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
    verbs: ["get","create","update","patch","delete","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8-pod-delete-sa
  namespace: default
  labels:
    name: k8-pod-delete-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: k8-pod-delete-sa
subjects:
  - kind: ServiceAccount
    name: k8-pod-delete-sa
    namespace: default
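Unlike the `chaos-admin` ClusterRole earlier, this variant uses a namespaced Role, so the ServiceAccount should only be authorized inside `default`. A sketch of an impersonation check to verify the scoping:
```bash
# expect "yes" in the Role's own namespace and "no" elsewhere
kubectl auth can-i delete pods \
  --as=system:serviceaccount:default:k8-pod-delete-sa -n default
kubectl auth can-i delete pods \
  --as=system:serviceaccount:default:k8-pod-delete-sa -n kube-system
```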

@@ -0,0 +1,9 @@
# Remote namespace
* Apply experiments for K8 - `kubectl apply -f experiments.yaml`
* Validate the experiments for k8 - `kubectl get chaosexperiments`
* Setup RBAC as admin mode - `kubectl apply -f rbac-admin.yaml`
* Create pod Experiment - for health experiment -`kubectl create -f engine.yaml`
* Validate experiment - `kubectl get pods -w`
* Validate logs - `kubectl logs -f <delete pod>`
* Clean up chaosexperiment -`kubectl delete -f engine.yaml`
* Clean up rbac-admin -`kubectl delete -f rbac-admin.yaml`

@@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: k8-calico-node
  namespace: default
spec:
  appinfo:
    appns: 'default'
    applabel: "k8s-app=calico-node"
    appkind: deployment
  annotationCheck: 'false'
  engineState: 'active'
  chaosServiceAccount: chaos-admin
  experiments:
    - name: k8-pod-delete
      spec:
        components:
          env:
            # set chaos namespace
            - name: NAME_SPACE
              value: kube-system
            # set chaos label name
            - name: LABEL_NAME
              value: k8s-app=calico-node
            # pod endpoint
            - name: APP_ENDPOINT
              value: 'localhost'
            - name: FILE
              value: 'pod-custom-kill-health.json'
            - name: REPORT
              value: 'true'
            - name: REPORT_ENDPOINT
              value: 'none'
            - name: TEST_NAMESPACE
              value: 'default'

@@ -0,0 +1,58 @@
# Generic Chaos experiment for Application team, who want to participate in Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
  message: |
    Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
  name: k8-pod-delete
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups: ["","apps","batch"]
        resources: ["jobs","deployments","daemonsets"]
        verbs: ["create","list","get","patch","delete"]
      - apiGroups: ["","litmuschaos.io"]
        resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
        verbs: ["get","create","update","patch","delete","list"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get","list"]
    labels:
      name: k8-pod-delete
      app.kubernetes.io/part-of: litmus
    image: "litmuschaos/py-runner:1.13.8"
    args:
      - -c
      - python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
    command:
      - /bin/bash
    env:
      - name: CHAOSTOOLKIT_IN_POD
        value: 'true'
      - name: FILE
        value: 'pod-app-kill-count.json'
      - name: NAME_SPACE
        value: ''
      - name: LABEL_NAME
        value: ''
      - name: APP_ENDPOINT
        value: ''
      - name: PERCENTAGE
        value: '50'
      - name: REPORT
        value: 'true'
      - name: REPORT_ENDPOINT
        value: 'none'
      - name: TEST_NAMESPACE
        value: 'default'

View File
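
Each env value above is only a default; the ChaosEngines in these charts override them per run via `spec.experiments[].spec.components.env`. A minimal sketch of raising the kill percentage from an engine (the `'100'` value is illustrative, not taken from any engine in this chart):

```yaml
experiments:
  - name: k8-pod-delete
    spec:
      components:
        env:
          - name: PERCENTAGE
            value: '100'   # hypothetical override; the experiment default is '50'
```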

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-calico-node
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-calico-node
categoryDescription: |
k8-calico-node contains chaos to disrupt the state of the calico-node pod. It uses chaostoolkit to inject random pod-delete failures against the calico-node pod.
keywords:
- Kubernetes
- State
- Calico
platforms:
- Minikube
maturity: alpha
maintainers:
- name: sumit
email: sumit_nagal@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-calico-node/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File
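
Note: these charts ship both a cluster-wide `chaos-admin` ClusterRole and the namespaced `k8-pod-delete-sa` Role above. The engines below run as `chaos-admin`; to keep the blast radius namespaced, one option (a sketch, assuming the Role's namespace matches the engine's) is to point the engine at the namespaced ServiceAccount instead:

```yaml
spec:
  chaosServiceAccount: k8-pod-delete-sa   # namespaced alternative to chaos-admin
```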

@ -0,0 +1,37 @@
# chaosengine.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kiam-count
namespace: default
spec:
#ex. values: ns1:name=percona,ns2:run=nginx
appinfo:
appns: kube-system
# FYI: to see app labels, run kubectl get pods --show-labels
#applabel: "app=nginx"
applabel: "app=kiam"
appkind: deployment
jobCleanUpPolicy: retain
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
- name: NAME_SPACE
value: kube-system
- name: LABEL_NAME
value: kiam
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-app-kill-count.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,37 @@
# chaosengine.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kiam-custom-count
namespace: default
spec:
#ex. values: ns1:name=percona,ns2:run=nginx
appinfo:
appns: kube-system
# FYI: to see app labels, run kubectl get pods --show-labels
#applabel: "app=nginx"
applabel: "app=kiam"
appkind: deployment
jobCleanUpPolicy: retain
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
- name: NAME_SPACE
value: kube-system
- name: LABEL_NAME
value: app=kiam
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-count.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,37 @@
# chaosengine.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kiam-custom-health
namespace: default
spec:
#ex. values: ns1:name=percona,ns2:run=nginx
appinfo:
appns: kube-system
# FYI: to see app labels, run kubectl get pods --show-labels
#applabel: "app=nginx"
applabel: "app=kiam"
appkind: deployment
jobCleanUpPolicy: retain
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
- name: NAME_SPACE
value: kube-system
- name: LABEL_NAME
value: app=kiam
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-health.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,37 @@
# chaosengine.yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kiam-health
namespace: default
spec:
#ex. values: ns1:name=percona,ns2:run=nginx
appinfo:
appns: kube-system
# FYI: to see app labels, run kubectl get pods --show-labels
#applabel: "app=nginx"
applabel: "app=kiam"
appkind: deployment
jobCleanUpPolicy: retain
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
- name: NAME_SPACE
value: kube-system
- name: LABEL_NAME
value: kiam
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-app-kill-health.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,9 @@
# Remote namespace
* Apply the experiments for K8s - `kubectl apply -f experiments.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`
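
The engines in this chart select targets by label (`applabel` / `LABEL_NAME`); as the engine comments note, you can discover the right label with:

```shell
# list pod labels in the chaos namespace (kube-system for kiam)
kubectl get pods -n kube-system --show-labels
```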

View File

@ -0,0 +1,38 @@
# Generic chaos engine for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kiam
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "app=kiam"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: kiam
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-app-kill-health.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-kiam
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-kiam
categoryDescription: |
k8-kiam contains chaos to disrupt the state of kiam. It uses chaostoolkit to inject random pod-delete failures against the kiam pod.
keywords:
- Kubernetes
- State
- Kiam
platforms:
- Minikube
maturity: alpha
maintainers:
- name: sumit
email: sumit_nagal@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-kiam/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,10 @@
# Remote namespace
* Navigate to the chart directory - `charts/generic/k8-kube-proxy/`
* Apply the experiment for K8s - `kubectl apply -f experiment.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`

View File

@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-kube-proxy
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "k8s-app=kube-proxy"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set chaos namespace
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: k8s-app=kube-proxy
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-health.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File
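
The `FILE` env appears to select which chaostoolkit spec the py-runner wrapper executes; the engines in these charts use four variants (`pod-app-kill-count.json`, `pod-app-kill-health.json`, `pod-custom-kill-count.json`, `pod-custom-kill-health.json`). A hedged sketch of switching this engine from the health check to the count-based variant:

```yaml
- name: FILE
  value: 'pod-custom-kill-count.json'   # count-based variant instead of pod-custom-kill-health.json
```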

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-kube-proxy
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-kube-proxy
categoryDescription: |
k8-kube-proxy contains chaos to disrupt the state of kube-proxy. It uses chaostoolkit to inject random pod-delete failures against kube-proxy.
keywords:
- Kubernetes
- State
- Kube-proxy
platforms:
- Minikube
maturity: alpha
maintainers:
- name: Navin
email: navin_kumarj@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-kube-proxy/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,10 @@
# Remote namespace
* Navigate to the chart directory - `charts/generic/k8-prometheus-k8s-prometheus/`
* Apply the experiment for K8s - `kubectl apply -f experiment.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`

View File

@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-prometheus-k8s-prometheus
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "app=prometheus"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set the chaos namespace; we assume addon-metricset-ns - if not, modify the namespace below
- name: NAME_SPACE
value: addon-metricset-ns
# set chaos label name
- name: LABEL_NAME
value: prometheus
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-app-kill-health.json'
- name: REPORT
value: 'false'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-prometheus-k8s-prometheus
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-prometheus-k8s-prometheus
categoryDescription: |
k8-prometheus-k8s-prometheus contains chaos to disrupt the state of prometheus. It uses chaostoolkit to inject random pod-delete failures against the prometheus application.
keywords:
- Kubernetes
- State
- Prometheus
platforms:
- Minikube
maturity: alpha
maintainers:
- name: Anushya
email: anushya_dharmarajan@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-prometheus-k8s-prometheus/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,10 @@
# Remote namespace
* Navigate to the chart directory - `charts/generic/k8-prometheus-operator/`
* Apply the experiment for K8s - `kubectl apply -f experiment.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`

View File

@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-prometheus-operator
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "k8s-app=prometheus-operator"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set the chaos namespace; we assume addon-metricset-ns - if not, modify the namespace below
- name: NAME_SPACE
value: addon-metricset-ns
# set chaos label name
- name: LABEL_NAME
value: k8s-app=prometheus-operator
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-health.json'
- name: REPORT
value: 'false'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-prometheus-operator
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-prometheus-operator
categoryDescription: |
k8-prometheus-operator contains chaos to disrupt the state of the prometheus operator. It uses chaostoolkit to inject random pod-delete failures against the prometheus operator.
keywords:
- Kubernetes
- State
- Prometheus
platforms:
- Minikube
maturity: alpha
maintainers:
- name: Anushya
email: anushya_dharmarajan@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-prometheus-operator/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,10 @@
# Remote namespace
* Navigate to the chart directory - `charts/generic/k8-prometheus-pushgateway/`
* Apply the experiment for K8s - `kubectl apply -f experiment.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`

View File

@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-prometheus-pushgateway
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "k8s-app=prometheus-pushgateway"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set the chaos namespace; we assume addon-metricset-ns - if not, modify the namespace below
- name: NAME_SPACE
value: addon-metricset-ns
# set chaos label name
- name: LABEL_NAME
value: k8s-app=prometheus-pushgateway
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-health.json'
- name: REPORT
value: 'false'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-prometheus-pushgateway
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-prometheus-pushgateway
categoryDescription: |
k8-prometheus-pushgateway contains chaos to disrupt the state of the prometheus pushgateway. It uses chaostoolkit to inject random pod-delete failures against the prometheus pushgateway.
keywords:
- Kubernetes
- State
- Prometheus
platforms:
- Minikube
maturity: alpha
maintainers:
- name: Anushya
email: anushya_dharmarajan@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-prometheus-pushgateway/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,10 @@
# Remote namespace
# Wavefront collector information - https://github.com/wavefrontHQ/wavefront-collector
* Apply the experiments for K8s - `kubectl apply -f experiments.yaml`
* Validate the experiments for K8s - `kubectl get chaosexperiments`
* Set up RBAC in admin mode - `kubectl apply -f rbac-admin.yaml`
* Create the pod experiment (health experiment) - `kubectl create -f engine.yaml`
* Validate the experiment - `kubectl get pods -w`
* Validate the logs - `kubectl logs -f <delete pod>`
* Clean up the chaos engine - `kubectl delete -f engine.yaml`
* Clean up rbac-admin - `kubectl delete -f rbac-admin.yaml`

View File

@ -0,0 +1,37 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: k8-wavefront-collector
namespace: default
spec:
appinfo:
appns: 'default'
applabel: "k8s-app=wavefront-collector"
appkind: deployment
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: chaos-admin
experiments:
- name: k8-pod-delete
spec:
components:
env:
# set the chaos namespace; we assume kube-system - if not, modify the namespace below
- name: NAME_SPACE
value: kube-system
# set chaos label name
- name: LABEL_NAME
value: k8s-app=wavefront-collector
# pod endpoint
- name: APP_ENDPOINT
value: 'localhost'
- name: FILE
value: 'pod-custom-kill-health.json'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,58 @@
# Generic chaos experiment for application teams that want to participate in a Game Day
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Deletes a pod belonging to a deployment/statefulset/daemonset
kind: ChaosExperiment
metadata:
name: k8-pod-delete
spec:
definition:
scope: Namespaced
permissions:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
labels:
name: k8-pod-delete
app.kubernetes.io/part-of: litmus
image: "litmuschaos/py-runner:1.13.8"
args:
- -c
- python /litmus/byoc/chaostest/chaostest/kubernetes/k8_wrapper.py ; exit 0
command:
- /bin/bash
env:
- name: CHAOSTOOLKIT_IN_POD
value: 'true'
- name: FILE
value: 'pod-app-kill-count.json'
- name: NAME_SPACE
value: ''
- name: LABEL_NAME
value: ''
- name: APP_ENDPOINT
value: ''
- name: PERCENTAGE
value: '50'
- name: REPORT
value: 'true'
- name: REPORT_ENDPOINT
value: 'none'
- name: TEST_NAMESPACE
value: 'default'

View File

@ -0,0 +1,34 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: k8-wavefront-collector
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
createdAt: 2020-02-24T10:28:08Z
support: https://slack.kubernetes.io/
spec:
displayName: k8-wavefront-collector
categoryDescription: |
k8-wavefront-collector contains chaos to disrupt the state of the wavefront collector. It uses chaostoolkit to inject random pod-delete failures against the wavefront collector.
keywords:
- Kubernetes
- State
- Wavefront
platforms:
- Minikube
maturity: alpha
maintainers:
- name: sumit
email: sumit_nagal@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-python/tree/master/chaos-test
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/k8-wavefront-collector/experiment.yaml

View File

@ -0,0 +1,38 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: chaos-admin
labels:
name: chaos-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-admin
labels:
name: chaos-admin
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-admin
labels:
name: chaos-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-admin
subjects:
- kind: ServiceAccount
name: chaos-admin
namespace: default

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","apps","batch"]
resources: ["jobs","deployments","daemonsets"]
verbs: ["create","list","get","patch","delete"]
- apiGroups: ["","litmuschaos.io"]
resources: ["pods","configmaps","events","services","chaosengines","chaosexperiments","chaosresults","deployments","jobs"]
verbs: ["get","create","update","patch","delete","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: k8-pod-delete-sa
namespace: default
labels:
name: k8-pod-delete-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8-pod-delete-sa
subjects:
- kind: ServiceAccount
name: k8-pod-delete-sa
namespace: default

View File

@ -0,0 +1,45 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2020-11-09T10:28:08Z
name: kube-components
version: 0.1.0
annotations:
categories: kube-components
chartDescription: Injects chaos on kube components. It uses chaostoolkit.
spec:
displayName: kube-components
categoryDescription: >
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easier management and discovery. This chart installs all the experiments that can be used to inject chaos into containerized applications.
experiments:
- k8-alb-ingress-controller
- k8-kiam
- k8-prometheus-operator
- k8-kube-proxy
- k8-prometheus-pushgateway
- k8-calico-node
- k8-prometheus-k8s-prometheus
- k8-wavefront-collector
keywords:
- Kubernetes
- Container
- Pod
- WaveFront
- Prometheus
maintainers:
- name: sumit
email: sumit_nagal@intuit.com
minKubeVersion: 1.12.0
provider:
name: Intuit
links:
- name: Kubernetes Website
url: https://kubernetes.io
- name: Source Code
url: https://github.com/kubernetes/kubernetes
- name: Kubernetes Slack
url: https://slack.kubernetes.io/
icon:
- url: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/icons/kube-components.png
mediatype: image/png
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/byoc/kube-components/experiments.yaml

View File

@ -0,0 +1,26 @@
packageName: kube-components
experiments:
- name: k8-kiam
CSV: k8-kiam.chartserviceversion.yaml
desc: "k8-kiam"
- name: k8-prometheus-operator
CSV: k8-prometheus-operator.chartserviceversion.yaml
desc: "k8-prometheus-operator"
- name: k8-alb-ingress-controller
CSV: k8-alb-ingress-controller.chartserviceversion.yaml
desc: "k8-alb-ingress-controller"
- name: k8-kube-proxy
CSV: k8-kube-proxy.chartserviceversion.yaml
desc: "k8-kube-proxy"
- name: k8-prometheus-pushgateway
CSV: k8-prometheus-pushgateway.chartserviceversion.yaml
desc: "k8-prometheus-pushgateway"
- name: k8-calico-node
CSV: k8-calico-node.chartserviceversion.yaml
desc: "k8-calico-node"
- name: k8-prometheus-k8s-prometheus
CSV: k8-prometheus-k8s-prometheus.chartserviceversion.yaml
desc: "k8-prometheus-k8s-prometheus"
- name: k8-wavefront-collector
CSV: k8-wavefront-collector.chartserviceversion.yaml
desc: "k8-wavefront-collector"

View File

@ -0,0 +1,43 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-10T10:28:08Z
name: aws-ssm-chaos-by-id
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: aws-ssm-chaos-by-id
categoryDescription: |
AWS SSM Chaos By ID contains chaos to disrupt the state of infra resources. The experiment can induce chaos on AWS resources using Amazon SSM Run Command. This is carried out by using SSM Docs, which define the actions performed by Systems Manager on your managed instances (with the SSM agent installed) and let us perform chaos experiments on those resources.
- Causes chaos on AWS ec2 instances with the given instance ID(s) using SSM docs, for the total chaos duration at the specified chaos interval.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the target application pod (if provided).
keywords:
- SSM
- AWS
- EC2
platforms:
- AWS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: 2.2.0
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm/aws-ssm-chaos-by-id
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/aws-ssm/aws-ssm-chaos-by-id/
- name: Video
url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/aws-ssm-chaos-by-id/experiment.yaml

View File

@ -0,0 +1,62 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
engineState: 'active'
chaosServiceAccount: aws-ssm-chaos-by-id-sa
experiments:
- name: aws-ssm-chaos-by-id
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '60'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '60'
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma-separated values, ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
# provide the region name of the target instances
- name: REGION
value: ''
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'

View File
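
`EC2_INSTANCE_ID` and `REGION` ship empty and must be set before applying the engine; for example (both values below are hypothetical):

```yaml
- name: EC2_INSTANCE_ID
  value: 'i-0123456789abcdef0'   # hypothetical target instance
- name: REGION
  value: 'us-east-1'             # hypothetical region
```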

@ -0,0 +1,123 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance IDs
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-id
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: 2.2.0
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "events"
- "pods/log"
- "pods/exec"
- "secrets"
- "configmaps"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-id
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma-separated values, ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# provide the LIB
# only litmus supported
- name: LIB
value: 'litmus'
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: 2.2.0
secrets:
- name: cloud-secret
mountPath: /tmp/

View File
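
The experiment mounts AWS credentials from a secret named `cloud-secret` at `/tmp/` (see `AWS_SHARED_CREDENTIALS_FILE` above). A minimal sketch of that secret, assuming a standard AWS shared-credentials file; both key values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  cloud_config.yml: |-
    [default]
    aws_access_key_id = <your-access-key-id>
    aws_secret_access_key = <your-secret-access-key>
```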

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-ssm-chaos-by-id-sa
namespace: default
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aws-ssm-chaos-by-id-sa
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","events","secrets","configmaps"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aws-ssm-chaos-by-id-sa
labels:
name: aws-ssm-chaos-by-id-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aws-ssm-chaos-by-id-sa
subjects:
- kind: ServiceAccount
name: aws-ssm-chaos-by-id-sa
namespace: default

View File

@ -0,0 +1,43 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-10T10:28:08Z
name: aws-ssm-chaos-by-tag
version: 0.1.0
annotations:
categories: Kubernetes
vendor: CNCF
support: https://slack.kubernetes.io/
spec:
displayName: aws-ssm-chaos-by-tag
categoryDescription: |
AWS SSM Chaos By Tag contains chaos to disrupt the state of infra resources. The experiment can induce chaos on AWS resources using Amazon SSM Run Command. This is carried out by using SSM Docs, which define the actions performed by Systems Manager on your managed instances (with the SSM agent installed) and let us perform chaos experiments on those resources.
- Causes chaos on AWS ec2 instances with the given instance tag using SSM docs, for the total chaos duration at the specified chaos interval.
- Tests deployment sanity (replica availability & uninterrupted service) and recovery workflows of the target application pod (if provided).
keywords:
- SSM
- AWS
- EC2
platforms:
- AWS
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: 2.2.0
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm/aws-ssm-chaos-by-tag
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/aws-ssm/aws-ssm-chaos-by-tag/
- name: Video
url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/aws-ssm-chaos-by-tag/experiment.yaml

View File

@ -0,0 +1,62 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
engineState: 'active'
chaosServiceAccount: aws-ssm-chaos-by-tag-sa
experiments:
- name: aws-ssm-chaos-by-tag
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '60'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '60'
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
# provide the region name of the target instances
- name: REGION
value: ''
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'

View File
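
As with the ID-based variant, `EC2_INSTANCE_TAG` and `REGION` must be filled in before applying; using the `key:value` form from the comment above (the region is hypothetical):

```yaml
- name: EC2_INSTANCE_TAG
  value: 'team:devops'   # key:value form, as in the comment above
- name: REGION
  value: 'us-east-1'     # hypothetical region
```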

@ -0,0 +1,127 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance Tag
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-tag
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: 2.2.0
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "events"
- "pods/log"
- "pods/exec"
- "secrets"
- "configmaps"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-tag
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# percentage of total instance to target
- name: INSTANCE_AFFECTED_PERC
value: ''
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# provide the LIB
# only litmus supported
- name: LIB
value: 'litmus'
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: 2.2.0
secrets:
- name: cloud-secret
mountPath: /tmp/

View File

@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: aws-ssm-chaos-by-tag-sa
namespace: default
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aws-ssm-chaos-by-tag-sa
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","events","secrets","configmaps"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aws-ssm-chaos-by-tag-sa
labels:
name: aws-ssm-chaos-by-tag-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aws-ssm-chaos-by-tag-sa
subjects:
- kind: ServiceAccount
name: aws-ssm-chaos-by-tag-sa
namespace: default

View File

@ -0,0 +1,36 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-06-11T10:28:08Z
name: aws-ssm
version: 0.1.0
annotations:
categories: Kubernetes
chartDescription: Injects aws ssm chaos
spec:
displayName: AWS SSM
categoryDescription: >
aws-ssm contains chaos to disrupt the state of AWS resources using the Litmus AWS SSM docs
experiments:
- aws-ssm-chaos-by-id
- aws-ssm-chaos-by-tag
keywords:
- AWS
- SSM
- EC2
maintainers:
- name: ksatchit
email: karthik@chaosnative.com
provider:
name: ChaosNative
links:
- name: Kubernetes Website
url: https://kubernetes.io
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/aws-ssm
- name: Kubernetes Slack
url: https://slack.kubernetes.io/
icon:
- url: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/icons/aws-ssm.png
mediatype: image/png
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/aws-ssm/experiments.yaml


@ -0,0 +1,8 @@
packageName: aws-ssm
experiments:
- name: aws-ssm-chaos-by-id
CSV: aws-ssm-chaos-by-id.chartserviceversion.yaml
desc: "aws-ssm-chaos-by-id"
- name: aws-ssm-chaos-by-tag
CSV: aws-ssm-chaos-by-tag.chartserviceversion.yaml
desc: "aws-ssm-chaos-by-tag"


@ -0,0 +1,254 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance IDs
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-id
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: 2.2.0
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "events"
- "pods/log"
- "pods/exec"
- "secrets"
- "configmaps"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-id
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# Instance ID of the target ec2 instance
# Multiple IDs can also be provided as comma separated values ex: id1,id2
- name: EC2_INSTANCE_ID
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# provide the LIB
# only litmus supported
- name: LIB
value: 'litmus'
labels:
name: aws-ssm-chaos-by-id
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: 2.2.0
secrets:
- name: cloud-secret
mountPath: /tmp/
---
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Execute AWS SSM Chaos on given ec2 instance tag
kind: ChaosExperiment
metadata:
name: aws-ssm-chaos-by-tag
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: 2.2.0
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "events"
- "pods/log"
- "pods/exec"
- "secrets"
- "configmaps"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name aws-ssm-chaos-by-tag
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '60'
- name: CHAOS_INTERVAL
value: '60'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide tag of the target ec2 instances
# ex: team:devops (key:value)
- name: EC2_INSTANCE_TAG
value: ''
- name: REGION
value: ''
# it defines the sequence of chaos execution for multiple target instances
# supported values: serial, parallel
- name: SEQUENCE
value: 'parallel'
# Provide the path of aws credentials mounted from secret
- name: AWS_SHARED_CREDENTIALS_FILE
value: '/tmp/cloud_config.yml'
# percentage of total instances to target
- name: INSTANCE_AFFECTED_PERC
value: ''
# Provide the name of ssm doc
# if not using the default stress docs
- name: DOCUMENT_NAME
value: ''
# Provide the type of ssm doc
# if not using the default stress docs
- name: DOCUMENT_TYPE
value: ''
# Provide the format of ssm doc
# if not using the default stress docs
- name: DOCUMENT_FORMAT
value: ''
# Provide the path of ssm doc
# if not using the default stress docs
- name: DOCUMENT_PATH
value: ''
# if you want to install dependencies to run default ssm docs
- name: INSTALL_DEPENDENCIES
value: 'True'
# provide the number of workers for memory stress
- name: NUMBER_OF_WORKERS
value: '1'
# provide the percentage of available memory to stress
- name: MEMORY_PERCENTAGE
value: '80'
# provide the CPU cores to be consumed
# 0 will consume all the available cpu cores
- name: CPU_CORE
value: '0'
# provide the LIB
# only litmus supported
- name: LIB
value: 'litmus'
labels:
name: aws-ssm-chaos-by-tag
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: 2.2.0
secrets:
- name: cloud-secret
mountPath: /tmp/
---
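
For the by-id variant defined above, a minimal ChaosEngine sketch mirroring the by-tag example; the instance ID, region, and the aws-ssm-chaos-by-id-sa service-account name are assumptions for illustration (its RBAC is not shown in this diff):

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  # hypothetical engine name
  name: aws-ssm-chaos
  namespace: default
spec:
  engineState: 'active'
  # assumed service-account name, analogous to the by-tag RBAC above
  chaosServiceAccount: aws-ssm-chaos-by-id-sa
  experiments:
    - name: aws-ssm-chaos-by-id
      spec:
        components:
          env:
            # one or more comma-separated instance IDs (placeholder)
            - name: EC2_INSTANCE_ID
              value: 'i-0123456789abcdef0'
            - name: REGION
              value: 'us-east-1'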

(Icon images changed in this diff: one updated and two added, ~3.1 KiB each; binary previews not shown.)

@ -0,0 +1,42 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
name: azure-disk-loss
version: 0.1.0
annotations:
categories: Azure
vendor: ChaosNative
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: azure-disk-loss
categoryDescription: |
This experiment detaches the disk from the VM for a certain chaos duration
- Causes detachment of the disk from the VM and then re-attachment of the disk to the VM
- It helps to check the performance of the application on the instance.
keywords:
- Azure
- Disk
- AKS
platforms:
- Azure
maturity: alpha
maintainers:
- name: avaakash
email: akash@chaosnative.com
minKubeVersion: 1.12.0
provider:
name: ChaosNative
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: latest
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/azure/disk-loss/experiment
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/azure/azure-disk-loss/
# - name: Video
# url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/azure/azure-disk-loss/experiment.yaml


@ -1,12 +1,11 @@
---
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: azure-chaos
name: nginx-chaos
spec:
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-admin
chaosServiceAccount: azure-disk-loss-sa
experiments:
- name: azure-disk-loss
spec:
@ -19,15 +18,15 @@ spec:
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# provide the virtual disk names (comma separated if multiple)
- name: VIRTUAL_DISK_NAMES
value: ''


@ -0,0 +1,92 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Detaches the disk from the VM and then re-attaches it to the VM
kind: ChaosExperiment
metadata:
name: azure-disk-loss
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: latest
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "apps"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "pods/log"
- "events"
- "deployments"
- "replicasets"
- "pods/exec"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
- "secrets"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
- "deletecollection"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-disk-loss
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
- name: LIB
value: 'litmus'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# provide the virtual disk names (comma separated if multiple)
- name: VIRTUAL_DISK_NAMES
value: ''
# provide the sequence type for the run. Options: serial/parallel
- name: SEQUENCE
value: 'parallel'
# provide the path to aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
labels:
name: azure-disk-loss
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: latest
secrets:
- name: cloud-secret
mountPath: /tmp/
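
azure-disk-loss reads Azure credentials from /tmp/azure.auth (AZURE_AUTH_LOCATION above), mounted from the same cloud-secret pattern. A sketch of that secret, assuming the Azure SDK auth-file format; every ID and secret below is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
  namespace: default
type: Opaque
stringData:
  # Azure SDK auth-file format; all values are placeholders
  azure.auth: |-
    {
      "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "clientSecret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    }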


@ -0,0 +1,48 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","events","secrets"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: azure-disk-loss-sa
namespace: default
labels:
name: azure-disk-loss-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: azure-disk-loss-sa
subjects:
- kind: ServiceAccount
name: azure-disk-loss-sa
namespace: default
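
Tying the pieces together, a minimal ChaosEngine sketch that runs azure-disk-loss with the namespaced service account above; the engine name, resource group, and disk names are placeholders:

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  # hypothetical engine name
  name: azure-disk-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: azure-disk-loss-sa
  experiments:
    - name: azure-disk-loss
      spec:
        components:
          env:
            # placeholder resource group of the target disks
            - name: RESOURCE_GROUP
              value: 'my-resource-group'
            # placeholder disk names, comma separated if multiple
            - name: VIRTUAL_DISK_NAMES
              value: 'disk-01,disk-02'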


@ -0,0 +1,44 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChartServiceVersion
metadata:
createdAt: 2021-02-20T10:28:08Z
name: azure-instance-stop
version: 0.1.0
annotations:
categories: Azure
vendor: ChaosNative
support: https://app.slack.com/client/T09NY5SBT/CNXNB0ZTN
spec:
displayName: azure-instance-stop
categoryDescription: |
This experiment powers off an Azure instance for a certain chaos duration.
- Causes stopping of the Azure instance before bringing it back to the running state after the specified chaos duration.
- It helps to check the performance of the application on the instance.
keywords:
- Azure
- Scaleset
- AKS
platforms:
- Azure
maturity: alpha
chaosType: infra
maintainers:
- name: Udit Gaurav
email: udit@chaosnative.com
provider:
name: Chaos Native
labels:
app.kubernetes.io/component: chartserviceversion
app.kubernetes.io/version: 2.2.0
links:
- name: Source Code
url: https://github.com/litmuschaos/litmus-go/tree/master/experiments/azure/instance-stop/experiment
- name: Documentation
url: https://litmuschaos.github.io/litmus/experiments/categories/azure/azure-instance-stop/
# - name: Video
# url:
icon:
- url:
mediatype: ""
chaosexpcrdlink: https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/azure/azure-instance-stop/experiment.yaml


@ -1,33 +1,34 @@
---
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: default
spec:
annotationCheck: 'false'
engineState: 'active'
chaosServiceAccount: litmus-admin
chaosServiceAccount: azure-instance-stop-sa
experiments:
- name: azure-instance-stop
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '30'
# set chaos interval (in sec) as desired
- name: CHAOS_INTERVAL
value: '30'
# provide the target instance name(s) (comma separated if multiple)
- name: AZURE_INSTANCE_NAMES
- name: AZURE_INSTANCE_NAME
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''


@ -0,0 +1,89 @@
apiVersion: litmuschaos.io/v1alpha1
description:
message: |
Stops an Azure VM instance for a specified chaos duration
kind: ChaosExperiment
metadata:
name: azure-instance-stop
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: chaosexperiment
app.kubernetes.io/version: 2.2.0
spec:
definition:
scope: Cluster
permissions:
- apiGroups:
- ""
- "batch"
- "litmuschaos.io"
resources:
- "jobs"
- "pods"
- "events"
- "pods/log"
- "pods/exec"
- "secrets"
- "chaosengines"
- "chaosexperiments"
- "chaosresults"
verbs:
- "create"
- "list"
- "get"
- "patch"
- "update"
- "delete"
image: "litmuschaos/go-runner:2.2.0"
imagePullPolicy: Always
args:
- -c
- ./experiments -name azure-instance-stop
command:
- /bin/bash
env:
- name: TOTAL_CHAOS_DURATION
value: '30'
- name: CHAOS_INTERVAL
value: '30'
# Period to wait before and after injection of chaos in sec
- name: RAMP_TIME
value: ''
# provide the target instance name(s) (comma separated if multiple)
- name: AZURE_INSTANCE_NAME
value: ''
# provide the resource group of the instance
- name: RESOURCE_GROUP
value: ''
# accepts enable/disable, default is disable
- name: SCALE_SET
value: ''
# Provide the path of aks credentials mounted from secret
- name: AZURE_AUTH_LOCATION
value: '/tmp/azure.auth'
- name: SEQUENCE
value: 'parallel'
# provide the LIB
# only litmus supported
- name: LIB
value: 'litmus'
labels:
name: azure-instance-stop
app.kubernetes.io/part-of: litmus
app.kubernetes.io/component: experiment-job
app.kubernetes.io/version: 2.2.0
secrets:
- name: cloud-secret
mountPath: /tmp/
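
A matching ChaosEngine sketch for azure-instance-stop, consistent with the engine diff shown above; the instance name and resource group are placeholders:

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  # hypothetical engine name
  name: azure-instance-chaos
  namespace: default
spec:
  engineState: 'active'
  chaosServiceAccount: azure-instance-stop-sa
  experiments:
    - name: azure-instance-stop
      spec:
        components:
          env:
            # placeholder target instance name(s), comma separated if multiple
            - name: AZURE_INSTANCE_NAME
              value: 'instance-01'
            # placeholder resource group of the instance
            - name: RESOURCE_GROUP
              value: 'my-resource-group'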

Some files were not shown because too many files have changed in this diff.