---
id: pod-memory-hog
title: Pod Memory Hog Details
sidebar_label: Pod Memory Hog
original_id: pod-memory-hog
---

---

## Experiment Metadata
| Type    | Description                                            | Tested K8s Platform                      |
| ------- | ------------------------------------------------------ | ---------------------------------------- |
| Generic | Consume memory resources on the application container  | GKE, Packet(Kubeadm), Minikube, EKS, AKS |
## Prerequisites

- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `pod-memory-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.11.0?file=charts/generic/pod-memory-hog/experiment.yaml)
- Cluster must run the docker container runtime

## Entry Criteria

- Application pods are healthy on the respective nodes before chaos injection

## Exit Criteria

- Application pods are healthy on the respective nodes post chaos injection

## Details

- This experiment consumes memory resources of the application container, in the specified amount of megabytes.
- It simulates conditions where application pods experience memory spikes due to expected or undesired processes, thereby testing how the overall application stack behaves when this occurs.

## Integrations

- Pod Memory Hog can be effected using the chaos library: `litmus`

## Steps to Execute the Chaos Experiment

- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to [Getting Started](getstarted.md/#prepare-chaosengine)
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.

### Prepare chaosServiceAccount

Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.

#### Sample Rbac Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.11.x/charts/generic/pod-memory-hog/rbac.yaml yaml"

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-memory-hog-sa
  namespace: default
  labels:
    name: pod-memory-hog-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-memory-hog-sa
  namespace: default
  labels:
    name: pod-memory-hog-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: ["", "litmuschaos.io", "batch"]
    resources:
      [
        "pods",
        "jobs",
        "events",
        "pods/log",
        "pods/exec",
        "chaosengines",
        "chaosexperiments",
        "chaosresults",
      ]
    verbs:
      ["create", "list", "get", "patch", "update", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-memory-hog-sa
  namespace: default
  labels:
    name: pod-memory-hog-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-memory-hog-sa
subjects:
  - kind: ServiceAccount
    name: pod-memory-hog-sa
    namespace: default
```

**_Note:_** In case of restricted systems/setups, create a PodSecurityPolicy (psp) with the required permissions. The `chaosServiceAccount` can subscribe to it to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found [here](https://docs.litmuschaos.io/docs/next/litmus-psp/).
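After applying the manifest above, the RBAC objects can be verified with a quick check; a minimal sketch, assuming the manifest is saved as `rbac.yaml` and the `default` namespace used in the sample:

```bash
# Apply the sample RBAC manifest (the file name is an assumption)
kubectl apply -f rbac.yaml

# Verify that the ServiceAccount, Role and RoleBinding exist in the namespace used above
kubectl get serviceaccount,role,rolebinding -n default | grep pod-memory-hog-sa
```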
### Prepare ChaosEngine

- Provide the application info in `spec.appinfo`
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer to [ChaosEngine Concepts](chaosengine-concepts.md)

#### Supported Experiment Tunables
| Variables            | Description                                                                                                                                                              | Type      | Notes                                                                                                                                         |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| TARGET_CONTAINER     | Name of the container subjected to memory stress                                                                                                                          | Mandatory |                                                                                                                                                 |
| MEMORY_CONSUMPTION   | The amount of memory consumed while hogging the Kubernetes pod (in megabytes)                                                                                             | Optional  | Defaults to 500MB (up to 2000MB)                                                                                                                |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (in seconds)                                                                                                                        | Optional  | Defaults to 60s                                                                                                                                 |
| LIB                  | The chaos lib used to inject the chaos. Available libs are `litmus` and `pumba`                                                                                           | Optional  | Defaults to `litmus`                                                                                                                            |
| LIB_IMAGE            | Image used to run the stress command. Only used with LIB `pumba`                                                                                                          | Optional  | Defaults to `gaiaadm/pumba`                                                                                                                     |
| TARGET_PODS          | Comma separated list of application pod names subjected to pod memory hog chaos                                                                                           | Optional  | If not provided, target pods are selected randomly based on the provided appLabels                                                             |
| CHAOS_KILL_COMMAND   | The command to kill the chaos process                                                                                                                                     | Optional  | Defaults to `kill $(find /proc -name exe -lname '*/dd' 2>&1 \| grep -v 'Permission denied' \| awk -F/ '{print $(NF-1)}' \| head -n 1)`         |
| PODS_AFFECTED_PERC   | The percentage of total pods to target                                                                                                                                    | Optional  | Defaults to 0 (corresponds to 1 replica), provide numeric values only                                                                          |
| RAMP_TIME            | Period to wait before and after injection of chaos (in seconds)                                                                                                           | Optional  |                                                                                                                                                 |
| SEQUENCE             | It defines the sequence of chaos execution for multiple target pods                                                                                                       | Optional  | Default value: parallel. Supported: serial, parallel                                                                                           |
| INSTANCE_ID          | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name. | Optional  | Ensure that the overall length of the chaosresult CR is still < 64 characters                                                                   |
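To illustrate how these tunables combine, the excerpt below overrides the percentage of targeted pods, the execution sequence, and the memory consumed per container. This is a hedged sketch, not part of the upstream sample; the values `50`, `serial`, and `1000` are purely illustrative:

```yaml
experiments:
  - name: pod-memory-hog
    spec:
      components:
        env:
          # Target roughly half of the pods matching the appLabel (illustrative value)
          - name: PODS_AFFECTED_PERC
            value: "50"
          # Stress the selected pods one after another instead of all at once
          - name: SEQUENCE
            value: "serial"
          # Consume 1000 MB of memory in each targeted container (illustrative value)
          - name: MEMORY_CONSUMPTION
            value: "1000"
```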
#### Sample ChaosEngine Manifest

[embedmd]: # "https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.11.x/charts/generic/pod-memory-hog/engine.yaml yaml"

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be true/false
  annotationCheck: "true"
  # It can be active/stop
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-memory-hog-sa
  monitoring: false
  # It can be delete/retain
  jobCleanUpPolicy: "delete"
  experiments:
    - name: pod-memory-hog
      spec:
        components:
          env:
            # Provide name of target container
            # where chaos has to be injected
            - name: TARGET_CONTAINER
              value: "nginx"

            # Enter the amount of memory in megabytes to be consumed by the application pod
            - name: MEMORY_CONSUMPTION
              value: "500"

            - name: TOTAL_CHAOS_DURATION
              value: "60" # in seconds

            - name: CHAOS_KILL_COMMAND
              value: "kill -9 $(ps afx | grep \"[dd] if /dev/zero\" | awk '{print $1}' | tr '\n' ' ')"
```

### Create the ChaosEngine Resource

- Create the ChaosEngine manifest prepared in the previous step to trigger the chaos.

  `kubectl apply -f chaosengine.yml`

- If the chaos experiment is not executed, refer to the [troubleshooting](https://docs.litmuschaos.io/docs/faq-troubleshooting/) section to identify the root cause and fix the issues.

### Watch Chaos progress

- Set up a watch on the applications interacting with/dependent on the affected pods and verify whether they are running:

  `watch kubectl get pods -n <application-namespace>`

### Abort/Restart the Chaos Experiment

- To stop the pod-memory-hog experiment immediately, either delete the ChaosEngine resource or execute the following command:

  `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'`

- To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:

  `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'`

### Check Chaos Experiment Result

- Check whether the application stack is resilient to memory spikes on the app replica, once the experiment (job) is completed. The ChaosResult resource name is derived like this: `<ChaosEngine-Name>-<ChaosExperiment-Name>`.

  `kubectl describe chaosresult nginx-chaos-pod-memory-hog -n <application-namespace>`

## Pod Memory Hog Experiment Demo

- A sample recording of this experiment execution is provided [here](https://www.youtube.com/watch?v=HuAXg8W5Tzo)
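For a scripted pass/fail check of the result step described above, the verdict can also be read directly from the ChaosResult resource; a minimal sketch, assuming the `nginx-chaos` engine from the sample manifest runs in the `default` namespace (the exact JSONPath may vary slightly across Litmus versions):

```bash
# Print only the experiment verdict (e.g. Pass/Fail) from the ChaosResult
kubectl get chaosresult nginx-chaos-pod-memory-hog -n default \
  -o jsonpath='{.status.experimentstatus.verdict}'
```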