---
id: node-io-stress
title: Node IO Stress Experiment Details
sidebar_label: Node IO Stress
original_id: node-io-stress
---

Experiment Metadata

| Type | Description | Tested K8s Platform |
| ---- | ----------- | ------------------- |
| Generic | Inject IO disk stress on the Kubernetes node | GKE, EKS, Minikube |

Prerequisites

  • Ensure that the Litmus Chaos Operator is running by executing kubectl get pods in the operator namespace (typically, litmus). If not, install from here
  • Ensure that the node-io-stress experiment resource is available in the cluster by executing kubectl get chaosexperiments in the desired namespace. If not, install from here
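
  • For reference, these checks can be run with the following commands (assuming the operator was installed in the default litmus namespace; adjust the namespaces to match your setup):

    kubectl get pods -n litmus
    kubectl get chaosexperiments -n <application-namespace>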

Entry Criteria

  • Application pods are healthy on the respective Nodes before chaos injection

Exit Criteria

  • Application pods may or may not be healthy post chaos injection

Details

  • This experiment causes disk stress on the Kubernetes node. The experiment aims to verify the resiliency of applications that share this disk resource for ephemeral or persistent storage purposes.
  • The amount of disk stress can be specified either as a percentage of the total free space on the file system or as an absolute size in Gigabytes (GB). If both are provided, the specified utilization percentage takes precedence; if neither is provided, a default of 10% is used (an illustrative env override is shown after this list).
  • Tests application resiliency upon replica evictions caused due to IO stress on the available disk space.
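
  • As an illustration (not part of the original doc), stressing by absolute size rather than percentage means setting only FILESYSTEM_UTILIZATION_BYTES in the ChaosEngine env (set one of the two; if both are set, the percentage wins):

    # fill roughly 5 GB of the node's available filesystem space
    - name: FILESYSTEM_UTILIZATION_BYTES
      value: "5"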

Integrations

  • Node IO Stress can be injected using the chaos library: litmus
  • This can be provided under the LIB environment variable, as shown in the snippet below
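
  • For example, the library selection maps to an env entry in the ChaosEngine spec (an illustrative snippet; litmus is the default and the only supported value for this experiment):

    - name: LIB
      value: "litmus"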

Steps to Execute the Chaos Experiment

  • This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started

  • Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.

Prepare chaosServiceAccount

  • Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.

Sample RBAC Manifest

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-io-stress-sa
  namespace: default
  labels:
    name: node-io-stress-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: node-io-stress-sa
  labels:
    name: node-io-stress-sa
rules:
  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
    resources:
      [
        "pods",
        "jobs",
        "pods/log",
        "events",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs: ["create", "list", "get", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: node-io-stress-sa
  labels:
    name: node-io-stress-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-io-stress-sa
subjects:
  - kind: ServiceAccount
    name: node-io-stress-sa
    namespace: default
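
  • Once the manifest above is saved to a file (the name rbac.yaml below is only an example), it can be applied and verified with standard kubectl commands:

    kubectl apply -f rbac.yaml
    kubectl get sa node-io-stress-sa -n default
    kubectl get clusterrolebinding node-io-stress-sa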

Prepare ChaosEngine

  • Provide the application info in spec.appinfo
  • Provide the auxiliary applications info (ns & labels) in spec.auxiliaryAppInfo
  • Override the experiment tunables if desired in experiments.spec.components.env
  • To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts

Supported Experiment Tunables

| Variables | Description | Specify in ChaosEngine | Notes |
| --------- | ----------- | ---------------------- | ----- |
| TOTAL_CHAOS_DURATION | The time duration for chaos injection (seconds) | Optional | Defaults to 120 |
| FILESYSTEM_UTILIZATION_PERCENTAGE | Specify the size as a percentage of free space on the file system | Optional | Defaults to 10% |
| FILESYSTEM_UTILIZATION_BYTES | Specify the size in Gigabytes (GB). FILESYSTEM_UTILIZATION_PERCENTAGE & FILESYSTEM_UTILIZATION_BYTES are mutually exclusive. If both are provided, FILESYSTEM_UTILIZATION_PERCENTAGE is prioritized. | Optional | |
| NUMBER_OF_WORKERS | The number of IO workers involved in the disk stress | Optional | Defaults to 4 |
| APP_NODE | Name of the node subjected to IO stress | Optional | If not provided, a node hosting the application (per appinfo) is selected at random |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to litmus |
| LIB_IMAGE | Image used to run the stress command | Optional | Defaults to litmuschaos/go-runner:latest |
| RAMP_TIME | Period to wait before and after injection of chaos (in seconds) | Optional | |
| INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos, e.g. 04-05-2020-9-00. This string is appended as a suffix in the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |

Sample ChaosEngine Manifest

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be true/false
  annotationCheck: "false"
  # It can be active/stop
  engineState: "active"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: node-io-stress-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: node-io-stress
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: "120"

            ## specify the size as percentage of free space on the file system
            - name: FILESYSTEM_UTILIZATION_PERCENTAGE
              value: "10"

              ## enter the name of the desired node
            - name: APP_NODE
              value: ""

Create the ChaosEngine Resource

  • Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.

    kubectl apply -f chaosengine.yml

  • If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
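
  • A quick way to confirm that chaos was actually triggered is to check for the chaos-runner and experiment pods in the engine's namespace and to inspect the engine's events (the names below follow the sample manifest above):

    kubectl get pods -n default
    kubectl describe chaosengine nginx-chaos -n default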

Watch Chaos progress

  • View the status of the application pods while the node is subjected to IO disk stress.

    watch -n 1 kubectl get pods -n <application-namespace>

  • Monitor the capacity filled up on the host filesystem

    watch du -h
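
  • To observe the impact at the node level as well (the node name is a placeholder; kubectl top requires metrics-server to be installed):

    kubectl describe node <app-node>
    kubectl top node <app-node>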

Abort/Restart the Chaos Experiment

  • To stop the node-io-stress experiment immediately, either delete the ChaosEngine resource or execute the following command:

    kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'

  • To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:

    kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'

Check Chaos Experiment Result

  • Check whether the application is resilient to the IO stress once the experiment (job) is completed. The ChaosResult resource name is derived like this: <ChaosEngine-Name>-<ChaosExperiment-Name>.

    kubectl describe chaosresult nginx-chaos-node-io-stress -n <application-namespace>
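
  • To surface just the verdict from the ChaosResult, a simple filter on the full YAML output also works (field names can vary across Litmus versions, so this avoids hard-coding a field path):

    kubectl get chaosresult nginx-chaos-node-io-stress -n <application-namespace> -o yaml | grep -i verdict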

Node IO Stress Experiment Demo

  • A demo video will be added soon.