Update to Pod Network Docs - Removed Ingress references (#3790)

* removed ingress references from network exp

Signed-off-by: avaakash <as86414@gmail.com>

* updated readme with embeded manifests

Signed-off-by: avaakash <as86414@gmail.com>

This commit is contained in:
Akash Shrivastava 2022-11-14 11:03:48 +05:30 committed by GitHub
parent aa2523e287
commit 895dcf996b
18 changed files with 130 additions and 127 deletions


@ -3,29 +3,29 @@
- It injects packet corruption on the specified container by starting a traffic control (tc) process with netem rules to add egress packet corruption
- It can test the application's resilience to a lossy/flaky network
!!! tip "Scenario: Corrupt the network packets of target pod"
![Pod Network Corruption](../../images/network-chaos.png)
## Uses
??? info "View the uses of the experiment"
coming soon
## Prerequisites
??? info "Verify the prerequisites"
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing <code>kubectl get pods</code> in the operator namespace (typically <code>litmus</code>). If not, install from <a href="https://v1-docs.litmuschaos.io/docs/getstarted/#install-litmus">here</a>
- Ensure that the <code>pod-network-corruption</code> experiment resource is available in the cluster by executing <code>kubectl get chaosexperiments</code> in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-corruption/experiment.yaml">here</a>
## Default Validations
??? info "View the default validations"
The application pods should be in running state before and after chaos injection.
## Minimal RBAC configuration example (optional)
!!! tip "NOTE"
If you are using this experiment as part of a Litmus workflow scheduled, constructed & executed from chaos-center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
??? note "View the Minimal RBAC permissions"
@ -63,10 +63,10 @@
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for creating exec sessions to run commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
@ -75,7 +75,7 @@
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
@ -160,22 +160,22 @@
<td> TARGET_PODS </td>
<td> Comma separated list of application pod names subjected to pod network corruption chaos</td>
<td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr>
<tr>
<td> DESTINATION_IPS </td>
<td> IP addresses of the services or pods, or the CIDR blocks (range of IPs), whose accessibility is impacted </td>
<td> Comma separated IPs or CIDRs can be provided. If not provided, it will induce network chaos for all IPs/destinations</td>
</tr>
<tr>
<td> DESTINATION_HOSTS </td>
<td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> If not provided, it will induce network chaos for all IPs/destinations, or for DESTINATION_IPS if already defined</td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
@ -203,21 +203,21 @@
</tr>
</table>
## Experiment Examples
### Common and Pod specific tunables
Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) and [Pod specific tunables](common-tunables-for-pod-experiments.md) to tune the common tunables for all experiments and the pod specific tunables.
### Network Packet Corruption
It defines the network packet corruption percentage to be injected into the targeted application. It can be tuned via the `NETWORK_PACKET_CORRUPTION_PERCENTAGE` ENV.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-corruption/network-corruption.yaml yaml)
+[embedmd]:# (pod-network-corruption/network-corruption.yaml yaml)
```yaml
-# it inject the network-corruption for the ingrees and egress traffic
+# it inject the network-corruption for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -251,9 +251,9 @@ The network experiments interrupt traffic for all the IPs/hosts by default. The
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-corruption/destination-ips-and-hosts.yaml yaml)
+[embedmd]:# (pod-network-corruption/destination-ips-and-hosts.yaml yaml)
```yaml
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -287,7 +287,7 @@ The defined name of the ethernet interface, which is considered for shaping traf
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-corruption/network-interface.yaml yaml)
+[embedmd]:# (pod-network-corruption/network-interface.yaml yaml)
```yaml
# provide the network interface
apiVersion: litmuschaos.io/v1alpha1
@ -307,7 +307,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION
@ -323,7 +323,7 @@ It defines the `CONTAINER_RUNTIME` and `SOCKET_PATH` ENV to set the container ru
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-corruption/container-runtime-and-socket-path.yaml yaml)
+[embedmd]:# (pod-network-corruption/container-runtime-and-socket-path.yaml yaml)
```yaml
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
@ -361,7 +361,7 @@ Provide the traffic control image via `TC_IMAGE` ENV for the pumba library.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-corruption/pumba-lib.yaml yaml)
+[embedmd]:# (pod-network-corruption/pumba-lib.yaml yaml)
```yaml
# use pumba chaoslib for the network chaos
apiVersion: litmuschaos.io/v1alpha1


@ -1,4 +1,4 @@
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -1,4 +1,4 @@
-# it inject the network-corruption for the ingrees and egress traffic
+# it inject the network-corruption for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -16,7 +16,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION


@ -3,29 +3,29 @@
- It injects chaos to disrupt network connectivity to Kubernetes pods.
- It injects network packet duplication on the specified container by starting a traffic control (tc) process with netem rules to duplicate egress packets. It can test the application's resilience to duplicated network traffic.
!!! tip "Scenario: Duplicate the network packets of target pod"
![Pod Network Duplication](../../images/network-chaos.png)
## Uses
??? info "View the uses of the experiment"
coming soon
## Prerequisites
??? info "Verify the prerequisites"
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing <code>kubectl get pods</code> in the operator namespace (typically <code>litmus</code>). If not, install from <a href="https://v1-docs.litmuschaos.io/docs/getstarted/#install-litmus">here</a>
- Ensure that the <code>pod-network-duplication</code> experiment resource is available in the cluster by executing <code>kubectl get chaosexperiments</code> in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-duplication/experiment.yaml">here</a>
## Default Validations
??? info "View the default validations"
The application pods should be in running state before and after chaos injection.
## Minimal RBAC configuration example (optional)
!!! tip "NOTE"
If you are using this experiment as part of a Litmus workflow scheduled, constructed & executed from chaos-center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
??? note "View the Minimal RBAC permissions"
@ -62,10 +62,10 @@
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for creating exec sessions to run commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
@ -74,7 +74,7 @@
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
@ -161,22 +161,22 @@
<td> TARGET_PODS </td>
<td> Comma separated list of application pod names subjected to pod network duplication chaos</td>
<td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr>
<tr>
<td> DESTINATION_IPS </td>
<td> IP addresses of the services or pods, or the CIDR blocks (range of IPs), whose accessibility is impacted </td>
<td> Comma separated IPs or CIDRs can be provided. If not provided, it will induce network chaos for all IPs/destinations</td>
</tr>
<tr>
<td> DESTINATION_HOSTS </td>
<td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> If not provided, it will induce network chaos for all IPs/destinations, or for DESTINATION_IPS if already defined</td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
@ -204,21 +204,21 @@
</tr>
</table>
## Experiment Examples
### Common and Pod specific tunables
Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) and [Pod specific tunables](common-tunables-for-pod-experiments.md) to tune the common tunables for all experiments and the pod specific tunables.
### Network Packet Duplication
It defines the network packet duplication percentage to be injected into the targeted application. It can be tuned via the `NETWORK_PACKET_DUPLICATION_PERCENTAGE` ENV.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-duplication/network-duplication.yaml yaml)
+[embedmd]:# (pod-network-duplication/network-duplication.yaml yaml)
```yaml
-# it inject the network-duplication for the ingrees and egress traffic
+# it inject the network-duplication for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -252,9 +252,9 @@ The network experiments interrupt traffic for all the IPs/hosts by default. The
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-duplication/destination-ips-and-hosts.yaml yaml)
+[embedmd]:# (pod-network-duplication/destination-ips-and-hosts.yaml yaml)
```yaml
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -288,7 +288,7 @@ The defined name of the ethernet interface, which is considered for shaping traf
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-duplication/network-interface.yaml yaml)
+[embedmd]:# (pod-network-duplication/network-interface.yaml yaml)
```yaml
# provide the network interface
apiVersion: litmuschaos.io/v1alpha1
@ -308,7 +308,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION
@ -324,7 +324,7 @@ It defines the `CONTAINER_RUNTIME` and `SOCKET_PATH` ENV to set the container ru
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-duplication/container-runtime-and-socket-path.yaml yaml)
+[embedmd]:# (pod-network-duplication/container-runtime-and-socket-path.yaml yaml)
```yaml
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
@ -362,7 +362,7 @@ Provide the traffic control image via `TC_IMAGE` ENV for the pumba library.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-duplication/pumba-lib.yaml yaml)
+[embedmd]:# (pod-network-duplication/pumba-lib.yaml yaml)
```yaml
# use pumba chaoslib for the network chaos
apiVersion: litmuschaos.io/v1alpha1


@ -1,4 +1,4 @@
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -1,4 +1,4 @@
-# it inject the network-duplication for the ingrees and egress traffic
+# it inject the network-duplication for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -16,7 +16,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION


@ -3,13 +3,13 @@
- It injects latency on the specified container by starting a traffic control (tc) process with netem rules to add egress delays
- It can test the application's resilience to a lossy/flaky network
!!! tip "Scenario: Induce latency in the network of target pod"
![Pod Network Latency](../../images/network-chaos.png)
## Uses
??? info "View the uses of the experiment"
The experiment causes network degradation without the pod being marked unhealthy/unworthy of traffic by kube-proxy (unless you have a liveness probe of sorts that measures latency and restarts/crashes the container). The idea of this experiment is to simulate issues within your pod network OR microservice communication across services in different availability zones/regions etc.
Mitigation (in this case, keeping the timeout i.e., access latency low) could be via some middleware that can switch traffic based on some SLOs/perf parameters. If such an arrangement is not available, the next best thing would be to verify whether such a degradation is highlighted via notifications/alerts etc., so the admin/SRE has the opportunity to investigate and fix things. Another utility of the test would be to see the extent of impact caused to the end-user OR the last point in the app stack on account of degradation in access to a downstream/dependent microservice, and whether that is acceptable OR breaks the system to an unacceptable degree. The experiment provides DESTINATION_IPS or DESTINATION_HOSTS so that you can control the chaos against specific services within or outside the cluster.
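The scoping described above can be sketched as a ChaosEngine manifest; this is a minimal sketch in which the engine name, app labels, service account, and destination values are illustrative assumptions, not values from this change:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx            # illustrative engine name
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"       # illustrative app label
    appkind: "deployment"
  chaosServiceAccount: pod-network-latency-sa
  experiments:
  - name: pod-network-latency
    spec:
      components:
        env:
        # restrict the latency to traffic towards these hosts/IPs only
        - name: DESTINATION_HOSTS
          value: 'payments.default.svc.cluster.local'  # illustrative FQDN
        - name: DESTINATION_IPS
          value: '8.8.8.8,192.168.5.0/24'
        - name: NETWORK_LATENCY
          value: '2000'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```

With this scoping, traffic to all other destinations is left untouched, so a downstream dependency can be degraded in isolation.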
@ -18,19 +18,19 @@
## Prerequisites
??? info "Verify the prerequisites"
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing <code>kubectl get pods</code> in the operator namespace (typically <code>litmus</code>). If not, install from <a href="https://v1-docs.litmuschaos.io/docs/getstarted/#install-litmus">here</a>
- Ensure that the <code>pod-network-latency</code> experiment resource is available in the cluster by executing <code>kubectl get chaosexperiments</code> in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-latency/experiment.yaml">here</a>
## Default Validations
??? info "View the default validations"
The application pods should be in running state before and after chaos injection.
## Minimal RBAC configuration example (optional)
!!! tip "NOTE"
If you are using this experiment as part of a Litmus workflow scheduled, constructed & executed from chaos-center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
??? note "View the Minimal RBAC permissions"
@ -68,10 +68,10 @@
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for creating exec sessions to run commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
@ -80,7 +80,7 @@
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
@ -150,7 +150,7 @@
<td> JITTER </td>
<td> The network jitter value in ms </td>
<td> Default 0, provide numeric value only </td>
</tr>
<tr>
<td> CONTAINER_RUNTIME </td>
<td> container runtime interface for the cluster</td>
@ -170,22 +170,22 @@
<td> TARGET_PODS </td>
<td> Comma separated list of application pod names subjected to pod network latency chaos</td>
<td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr>
<tr>
<td> DESTINATION_IPS </td>
<td> IP addresses of the services or pods, or the CIDR blocks (range of IPs), whose accessibility is impacted </td>
<td> Comma separated IPs or CIDRs can be provided. If not provided, it will induce network chaos for all IPs/destinations</td>
</tr>
<tr>
<td> DESTINATION_HOSTS </td>
<td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> If not provided, it will induce network chaos for all IPs/destinations, or for DESTINATION_IPS if already defined</td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
@ -213,21 +213,21 @@
</tr>
</table>
## Experiment Examples
### Common and Pod specific tunables
Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) and [Pod specific tunables](common-tunables-for-pod-experiments.md) to tune the common tunables for all experiments and the pod specific tunables.
### Network Latency
It defines the network latency (in ms) to be injected into the targeted application. It can be tuned via the `NETWORK_LATENCY` ENV.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/network-latency.yaml yaml)
+[embedmd]:# (pod-network-latency/network-latency.yaml yaml)
```yaml
-# it inject the network-latency for the ingrees and egress traffic
+# it inject the network-latency for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -261,9 +261,9 @@ The network experiments interrupt traffic for all the IPs/hosts by default. The
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/destination-ips-and-hosts.yaml yaml)
+[embedmd]:# (pod-network-latency/destination-ips-and-hosts.yaml yaml)
```yaml
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
@ -297,7 +297,7 @@ The defined name of the ethernet interface, which is considered for shaping traf
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/network-interface.yaml yaml)
+[embedmd]:# (pod-network-latency/network-interface.yaml yaml)
```yaml
# provide the network interface
apiVersion: litmuschaos.io/v1alpha1
@ -317,7 +317,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION
@ -330,7 +330,7 @@ It defines the jitter (in ms), a parameter that allows introducing a network del
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/network-latency-jitter.yaml yaml)
+[embedmd]:# (pod-network-latency/network-latency-jitter.yaml yaml)
```yaml
# provide the network latency jitter
apiVersion: litmuschaos.io/v1alpha1
@ -350,7 +350,7 @@ spec:
spec:
components:
env:
# value of the network latency jitter (in ms)
- name: JITTER
value: '200'
```
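Jitter is applied as a variation around the base latency, so the two ENVs are typically tuned together; a minimal sketch in which the engine name, app labels, and values are illustrative assumptions:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx            # illustrative engine name
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"       # illustrative app label
    appkind: "deployment"
  chaosServiceAccount: pod-network-latency-sa
  experiments:
  - name: pod-network-latency
    spec:
      components:
        env:
        # base delay of 2000ms, varied by roughly +/- 200ms of jitter
        - name: NETWORK_LATENCY
          value: '2000'
        - name: JITTER
          value: '200'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```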
@ -364,7 +364,7 @@ It defines the `CONTAINER_RUNTIME` and `SOCKET_PATH` ENV to set the container ru
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/container-runtime-and-socket-path.yaml yaml)
+[embedmd]:# (pod-network-latency/container-runtime-and-socket-path.yaml yaml)
```yaml
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
@ -402,7 +402,7 @@ Provide the traffic control image via `TC_IMAGE` ENV for the pumba library.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/pods/pod-network-latency/pumba-lib.yaml yaml)
+[embedmd]:# (pod-network-latency/pumba-lib.yaml yaml)
```yaml
# use pumba chaoslib for the network chaos
apiVersion: litmuschaos.io/v1alpha1


@ -1,4 +1,4 @@
-# it inject the chaos for the ingrees and egress traffic for specific ips/hosts
+# it inject the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -16,7 +16,7 @@ spec:
spec:
components:
env:
# name of the network interface
- name: NETWORK_INTERFACE
value: 'eth0'
- name: TOTAL_CHAOS_DURATION


@ -16,6 +16,6 @@ spec:
spec:
components:
env:
# value of the network latency jitter (in ms)
- name: JITTER
value: '200'


@ -1,4 +1,4 @@
-# it inject the network-latency for the ingrees and egress traffic
+# it inject the network-latency for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:


@ -1,31 +1,31 @@
## Introduction
-- It injects packet loss on the specified container by starting a traffic control (tc) process with netem rules to add egress/ingress loss
+- It injects packet loss on the specified container by starting a traffic control (tc) process with netem rules to add egress loss
- It can test the application's resilience to a lossy/flaky network
!!! tip "Scenario: Induce network loss of the target pod"
![Pod Network Loss](../../images/network-chaos.png)
## Uses
??? info "View the uses of the experiment"
coming soon
## Prerequisites
??? info "Verify the prerequisites"
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing <code>kubectl get pods</code> in the operator namespace (typically <code>litmus</code>). If not, install from <a href="https://v1-docs.litmuschaos.io/docs/getstarted/#install-litmus">here</a>
- Ensure that the <code>pod-network-loss</code> experiment resource is available in the cluster by executing <code>kubectl get chaosexperiments</code> in the desired namespace. If not, install from <a href="https://hub.litmuschaos.io/api/chaos/master?file=charts/generic/pod-network-loss/experiment.yaml">here</a>
## Default Validations
??? info "View the default validations"
The application pods should be in running state before and after chaos injection.
## Minimal RBAC configuration example (optional)
!!! tip "NOTE"
If you are using this experiment as part of a Litmus workflow scheduled, constructed & executed from chaos-center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
??? note "View the Minimal RBAC permissions"
@ -62,10 +62,10 @@
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list",]
# Track and get the runner, experiment, and helper pods log
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
# for creating exec sessions to run commands inside the target container
- apiGroups: [""]
resources: ["pods/exec"]
@ -74,7 +74,7 @@
- apiGroups: ["apps"]
resources: ["deployments","statefulsets","replicasets", "daemonsets"]
verbs: ["list","get"]
# deriving the parent/owner details of the pod (if parent is deploymentConfig)
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
@ -161,22 +161,22 @@
<td> TARGET_PODS </td>
<td> Comma separated list of application pod names subjected to pod network loss chaos</td>
<td> If not provided, it will select target pods randomly based on provided appLabels</td>
</tr>
<tr>
<td> DESTINATION_IPS </td>
<td> IP addresses of the services or pods, or the CIDR blocks (range of IPs), whose accessibility is impacted </td>
<td> Comma separated IPs or CIDRs can be provided. If not provided, it will induce network chaos for all IPs/destinations</td>
</tr>
<tr>
<td> DESTINATION_HOSTS </td>
<td> DNS names/FQDNs of the services whose accessibility is impacted </td>
<td> If not provided, it will induce network chaos for all IPs/destinations, or for DESTINATION_IPS if already defined</td>
</tr>
<tr>
<td> PODS_AFFECTED_PERC </td>
<td> The Percentage of total pods to target </td>
<td> Defaults to 0 (corresponds to 1 replica), provide numeric value only </td>
</tr>
<tr>
<td> LIB </td>
<td> The chaos lib used to inject the chaos </td>
@ -204,21 +204,21 @@
</tr>
</table>
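For example, the pod-selection tunables above sit under the experiment's `env` list in the ChaosEngine; a sketch, with illustrative values and pod names:

```yaml
  experiments:
  - name: pod-network-corruption
    spec:
      components:
        env:
        # target 50% of the pods matching the appLabel
        - name: PODS_AFFECTED_PERC
          value: '50'
        # or pin the chaos to specific pods (comma-separated names, illustrative)
        - name: TARGET_PODS
          value: 'nginx-0,nginx-1'
```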
## Experiment Examples
### Common and Pod specific tunables
Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) and [pod-specific tunables](common-tunables-for-pod-experiments.md) to tune the common tunables for all experiments and the pod-specific tunables.
### Network Packet Loss
It defines the network packet loss percentage to be injected into the targeted application. It can be tuned via the `NETWORK_PACKET_LOSS_PERCENTAGE` ENV.
Use the following example to tune this:
[embedmd]:# (pod-network-loss/network-loss.yaml yaml)
```yaml
# it injects network-loss for the egress traffic
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-loss-sa
  experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
        # network packet loss percentage
        - name: NETWORK_PACKET_LOSS_PERCENTAGE
          value: '100'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```
### Destination IPs And Destination Hosts
The network experiments interrupt traffic for all the IPs/hosts by default. The interruption of specific IPs/Hosts can be tuned via `DESTINATION_IPS` and `DESTINATION_HOSTS` ENV.
Use the following example to tune this:
[embedmd]:# (pod-network-loss/destination-ips-and-hosts.yaml yaml)
```yaml
# it injects the chaos for the egress traffic for specific ips/hosts
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-loss-sa
  experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
        # supports comma-separated destination ips
        - name: DESTINATION_IPS
          value: '8.8.8.8,192.168.5.6'
        # supports comma-separated destination hosts
        - name: DESTINATION_HOSTS
          value: 'nginx.default.svc.cluster.local,google.com'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```
### Network Interface

It defines the name of the ethernet interface which is considered for shaping traffic. It can be tuned via the `NETWORK_INTERFACE` ENV.
Use the following example to tune this:
[embedmd]:# (pod-network-loss/network-interface.yaml yaml)
```yaml
# provide the network interface
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-loss-sa
  experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
        # name of the network interface
        - name: NETWORK_INTERFACE
          value: 'eth0'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```
### Container Runtime Socket Path

It defines the `CONTAINER_RUNTIME` and `SOCKET_PATH` ENV to set the container runtime and socket file path.
Use the following example to tune this:
[embedmd]:# (pod-network-loss/container-runtime-and-socket-path.yaml yaml)
```yaml
## provide the container runtime and socket file path
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-loss-sa
  experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
        # runtime for the container
        # supports docker, containerd, crio
        - name: CONTAINER_RUNTIME
          value: 'docker'
        # path of the socket file
        - name: SOCKET_PATH
          value: '/var/run/docker.sock'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```
### Pumba Chaos Library

Provide the traffic control image via the `TC_IMAGE` ENV for the pumba library.
Use the following example to tune this:
[embedmd]:# (pod-network-loss/pumba-lib.yaml yaml)
```yaml
# use pumba chaoslib for the network chaos
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-network-loss-sa
  experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
        # name of the chaoslib
        # supports litmus and pumba lib
        - name: LIB
          value: 'pumba'
        # tc image used for the traffic control in pumba lib
        - name: TC_IMAGE
          value: 'gaiadocker/iproute2'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```

## Application Under Test

It contains the AUT (application under test) and auxiliary application details, provided at `spec.appinfo` and `spec.auxiliaryAppInfo` respectively, inside the chaosengine.

??? info "View the application specification schema"
It defines the `appns`, `applabel`, and `appkind` to set the namespace, labels, and kind of the application under test.
- `appkind`: It supports `deployment`, `statefulset`, `daemonset`, `deploymentconfig`, and `rollout`.
It is mandatory for the pod-level experiments and optional for the rest of the experiments.
Use the following example to tune this:
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-delete-sa
  experiments:
  - name: pod-delete
```
## Auxiliary Application Info
It contains a comma-separated list of namespace-label pairs for downstream (dependent) apps of the primary app specified in `.spec.appInfo`, in the case of pod-level chaos experiments. In the case of infra-level chaos experiments, this flag specifies those apps that may be directly impacted by chaos and upon which health checks are necessary.

It can be tuned via the `auxiliaryAppInfo` field. It supports input in the below format:
- `auxiliaryAppInfo`: `<key1>=<value1>:<namespace1>,<key2>=<value2>:<namespace2>`
#### Note: Auxiliary application check is only supported for node-level experiments.
Use the following example to tune this:
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  # provide the auxiliary applications details
  auxiliaryAppInfo: "app=nginx:default"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: pod-delete-sa
  experiments:
  - name: pod-delete
```