
id: pod-delete
title: Pod Delete Experiment Details
sidebar_label: Pod Delete
original_id: pod-delete

Experiment Metadata

| Type | Description | Tested K8s Platform |
|------|-------------|---------------------|
| Generic | Fail the application pod | GKE, Konvoy (AWS), Packet (Kubeadm), Minikube, EKS, AKS |

Prerequisites

  • Ensure that the Litmus Chaos Operator is running by executing kubectl get pods in the operator namespace (typically, litmus). If not, install from here
  • Ensure that the pod-delete experiment resource is available in the cluster by executing kubectl get chaosexperiments in the desired namespace. If not, install from here
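
The prerequisite checks above can be run as follows (the litmus namespace for the operator and the default namespace for the experiment CR are assumptions; substitute your own):

```shell
# Verify the Litmus Chaos Operator is running (operator namespace is an assumption)
kubectl get pods -n litmus

# Verify the pod-delete ChaosExperiment CR is available in the target namespace
kubectl get chaosexperiments -n default
```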

Entry Criteria

  • Application pods are healthy before chaos injection

Exit Criteria

  • Application pods are healthy post chaos injection

Details

  • Causes (forced/graceful) pod failure of specific/random replicas of an application resource
  • Tests deployment sanity (replica availability & uninterrupted service) and the recovery workflow of the application
  • Pod delete via PowerfulSeal supports only single-pod failure (kill_count = 1)

Integrations

  • Pod failures can be injected using one of these chaos libraries: litmus, powerfulseal
  • The desired chaos library can be selected by setting one of the above options as the value of the env variable LIB
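
For instance, selecting powerfulseal could look like this in the experiment's env list (a fragment of a ChaosEngine spec; see the Sample ChaosEngine Manifest below for the full context):

```yaml
experiments:
  - name: pod-delete
    spec:
      components:
        env:
          # select the chaos library; defaults to litmus if unset
          - name: LIB
            value: "powerfulseal"
```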

Steps to Execute the Chaos Experiment

  • This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started

  • Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.

Prepare chaosServiceAccount

  • Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
  • The RBAC sample manifest differs per LIB (litmus, powerfulseal). Use the respective RBAC sample manifest based on the LIB env.

Sample Rbac Manifest for litmus LIB

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-delete-sa
  namespace: default
  labels:
    name: pod-delete-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-delete-sa
  namespace: default
  labels:
    name: pod-delete-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
    resources:
      [
        "pods",
        "deployments",
        "pods/log",
        "events",
        "jobs",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs:
      ["create", "list", "get", "patch", "update", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-delete-sa
  namespace: default
  labels:
    name: pod-delete-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-delete-sa
subjects:
  - kind: ServiceAccount
    name: pod-delete-sa
    namespace: default

Sample Rbac Manifest for powerfulseal LIB

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-delete-sa
  namespace: default
  labels:
    name: pod-delete-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-delete-sa
  labels:
    name: pod-delete-sa
rules:
  - apiGroups: ["", "litmuschaos.io", "batch", "apps"]
    resources:
      [
        "pods",
        "deployments",
        "pods/log",
        "events",
        "jobs",
        "configmaps",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs: ["create", "list", "get", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-delete-sa
  labels:
    name: pod-delete-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-delete-sa
subjects:
  - kind: ServiceAccount
    name: pod-delete-sa
    namespace: default
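
Whichever LIB you use, the chosen manifest can then be applied as follows (the filename rbac.yaml is an assumption; use whatever name you saved it under):

```shell
# Apply the RBAC manifest for the chosen LIB (filename is illustrative)
kubectl apply -f rbac.yaml

# Confirm the service account was created in the target namespace
kubectl get serviceaccount pod-delete-sa -n default
```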

Prepare ChaosEngine

  • Provide the application info in spec.appinfo
  • Override the experiment tunables if desired in experiments.spec.components.env
  • To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts

Supported Experiment Tunables

| Variables | Description | Specify In ChaosEngine | Notes |
|-----------|-------------|------------------------|-------|
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (in sec) | Optional | Defaults to 15s. NOTE: the overall run duration of the experiment may exceed TOTAL_CHAOS_DURATION by a few minutes |
| CHAOS_INTERVAL | Time interval between two successive pod failures (in sec) | Optional | Defaults to 5s |
| LIB | The chaos lib used to inject the chaos | Optional | Defaults to `litmus`. Supported: `litmus`, `powerfulseal`. In case of powerfulseal, use the powerfulseal experiment CR |
| FORCE | Application pod deletion mode. `false` indicates graceful deletion with the default termination period of 30s; `true` indicates immediate forceful deletion with a 0s grace period | Optional | Defaults to `true`, with `terminationGracePeriodSeconds=0` |
| TARGET_POD | Name of the application pod subjected to pod delete chaos | Optional | If not provided, a pod is selected from those matching the appLabel |
| PODS_AFFECTED_PERC | The percentage of total pods to target | Optional | Defaults to 0% (corresponds to 1 replica) |
| RAMP_TIME | Period to wait before and after injection of chaos (in sec) | Optional | |
| SEQUENCE | Defines the sequence of chaos execution for multiple target pods | Optional | Defaults to `parallel`. Supported: `serial`, `parallel` |
| INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos, e.g. 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
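
As an illustration, a couple of the tunables above could be overridden like this (a fragment of the experiments.spec.components.env list; the values are illustrative):

```yaml
env:
  # target 50% of the application's replicas instead of the default single pod
  - name: PODS_AFFECTED_PERC
    value: "50"

  # delete the targeted pods one after another rather than in parallel
  - name: SEQUENCE
    value: "serial"
```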

Sample ChaosEngine Manifest

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  # It can be true/false
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  #ex. values: ns1:name=percona,ns2:run=nginx
  auxiliaryAppInfo: ""
  chaosServiceAccount: pod-delete-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: "30"

            # set chaos interval (in sec) as desired
            - name: CHAOS_INTERVAL
              value: "10"

            # pod failures without '--force' & default terminationGracePeriodSeconds
            - name: FORCE
              value: "false"

Create the ChaosEngine Resource

  • Apply the ChaosEngine manifest prepared in the previous step to trigger the chaos.

    kubectl apply -f chaosengine.yml

  • If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.

Watch Chaos progress

  • View pod terminations & recovery by setting up a watch on the pods in the application namespace

    watch -n 1 kubectl get pods -n <application-namespace>

Abort/Restart the Chaos Experiment

  • To stop the pod-delete experiment immediately, either delete the ChaosEngine resource or execute the following command:

    kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'

  • To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:

    kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'

Check Chaos Experiment Result

  • Check whether the application is resilient to the pod failure once the experiment (job) is completed. The ChaosResult resource name is derived as <ChaosEngine-Name>-<ChaosExperiment-Name>.

    kubectl describe chaosresult nginx-chaos-pod-delete -n <application-namespace>
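
To inspect just the verdict rather than the full description, a jsonpath query can be used (the field path below follows the 1.x ChaosResult schema and should be treated as an assumption):

```shell
# Print only the experiment verdict; the jsonpath field path is an assumption
kubectl get chaosresult nginx-chaos-pod-delete -n <application-namespace> \
  -o jsonpath='{.status.experimentstatus.verdict}'
```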

Application Pod Failure Demo

  • A sample recording of this experiment execution is provided here