NehaGujar1 2025-09-29 15:32:38 +00:00 committed by GitHub
commit 491f7d4d2f
5 changed files with 20 additions and 20 deletions


@@ -10,26 +10,26 @@ sidebar_label: Core principles
 Cloud Native Chaos Engineering, defined as engineering practices focused on (and built on) Kubernetes environments, applications, microservices, and infrastructure, follows these core principles -
-## Driven by Open Source
+**Driven by Open Source**
 Cloud-native software provides the ideal platform for multi-cloud deployments because it is rooted in open-source standards established by the World Wide Web Consortium (W3C). Digital transformation requires real-time, event-driven data collection, and the W3C “One Web” vision defines an ideal architecture for any data to run with any app across any W3C-compliant cloud.
 This principle calls for the framework to be completely open source under the Apache 2.0 License to encourage broader community participation and inspection. The number of applications moving to the Kubernetes platform is limitless. At such a large scale, only the Open Chaos model will thrive and get the required adoption.
-## CRDs for Chaos Management
+**CRDs for Chaos Management**
 A Custom Resource Definition (CRD) is what you use to define a Custom Resource, and it is a powerful way to extend Kubernetes capabilities beyond the default installation. The Kubernetes-native CRDs defined here should be used as APIs for both developers and SREs to build and orchestrate chaos testing. The CRDs act as standard APIs to provision and manage the chaos.
-## Extensible and Pluggable
+**Extensible and Pluggable**
 One lesson from why cloud-native approaches are winning is that their components can be swapped out relatively easily and new ones introduced as needed. Any standard chaos library or functionality developed by other open-source developers should be able to be integrated into, and orchestrated for testing via, this pluggable framework.
-## Broad Community Adoption
+**Broad Community Adoption**
 Once we have the APIs, Operator, and plugin framework, we have all the ingredients needed for a common way of injecting chaos. The chaos will be run against well-known infrastructure such as Kubernetes, applications such as databases, or other infrastructure components such as storage or networking. These chaos experiments can be reused, and a broad-based community is useful for identifying and contributing to other high-value scenarios. Hence, a Chaos Engineering framework should provide a central hub or forge where open-source chaos experiments are shared and collaboration via code is enabled.
 [Learn more about our community adoption](community.md)
-## GitOps for Chaos Management
+**GitOps for Chaos Management**
 Use GitOps as an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. With the demands made on today's infrastructure, it has become increasingly crucial to implement infrastructure automation. Modern infrastructure needs to be elastic so that it can effectively manage the cloud resources needed for continuous deployments.
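The "CRDs for Chaos Management" principle in the hunk above is easiest to see with a concrete resource. Below is a minimal ChaosEngine sketch following the LitmusChaos v1alpha1 CRD; the target application labels, namespace, and service account are illustrative values, not taken from the original text.

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos              # illustrative name
  namespace: default
spec:
  appinfo:
    appns: default               # namespace of the target application (assumed)
    applabel: app=nginx          # label selector for the target (assumed)
    appkind: deployment
  engineState: active            # declaratively switches the chaos on
  chaosServiceAccount: pod-delete-sa   # service account with fault permissions (assumed)
  experiments:
    - name: pod-delete           # a commonly used Litmus fault
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"
            - name: CHAOS_INTERVAL
              value: "10"
```

Because the chaos intent is just another Kubernetes object, both developers and SREs can create, review, and tear it down with the same tooling they already use for other resources.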


@@ -20,7 +20,7 @@ Kubernetes is being run on a variety of infrastructure, ranging from virtual mac
 Your application's resilience really depends more on the underlying stack than on your application itself. It is possible that, once your application is stabilized, the resilience of your service that runs on Kubernetes depends on other components and infrastructure more than 90% of the time.
-Thus it is important to verify your application resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of "**continuously verifying** if your service is resilient against faults" is called Chaos Engineering.
+Thus, it is important to verify your application's resilience whenever a change happens in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of **continuously verifying that your service is resilient against faults** is called Chaos Engineering.
 ## What is a Chaos Experiment


@@ -10,26 +10,26 @@ sidebar_label: Core principles
 Cloud Native Chaos Engineering, defined as engineering practices focused on (and built on) Kubernetes environments, applications, microservices, and infrastructure, follows these core principles -
-## Driven by Open Source
+**Driven by Open Source**
 Cloud-native software provides the ideal platform for multi-cloud deployments because it is rooted in open-source standards established by the World Wide Web Consortium (W3C). Digital transformation requires real-time, event-driven data collection, and the W3C “One Web” vision defines an ideal architecture for any data to run with any app across any W3C-compliant cloud.
 This principle calls for the framework to be completely open source under the Apache 2.0 License to encourage broader community participation and inspection. The number of applications moving to the Kubernetes platform is limitless. At such a large scale, only the Open Chaos model will thrive and get the required adoption.
-## CRDs for Chaos Management
+**CRDs for Chaos Management**
 A Custom Resource Definition (CRD) is what you use to define a Custom Resource, and it is a powerful way to extend Kubernetes capabilities beyond the default installation. The Kubernetes-native CRDs defined here should be used as APIs for both developers and SREs to build and orchestrate chaos testing. The CRDs act as standard APIs to provision and manage the chaos.
-## Extensible and Pluggable
+**Extensible and Pluggable**
 One lesson from why cloud-native approaches are winning is that their components can be swapped out relatively easily and new ones introduced as needed. Any standard chaos library or functionality developed by other open-source developers should be able to be integrated into, and orchestrated for testing via, this pluggable framework.
-## Broad Community Adoption
+**Broad Community Adoption**
 Once we have the APIs, Operator, and plugin framework, we have all the ingredients needed for a common way of injecting chaos. The chaos will be run against well-known infrastructure such as Kubernetes, applications such as databases, or other infrastructure components such as storage or networking. These chaos experiments can be reused, and a broad-based community is useful for identifying and contributing to other high-value scenarios. Hence, a Chaos Engineering framework should provide a central hub or forge where open-source chaos experiments are shared and collaboration via code is enabled.
 [Learn more about our community adoption](community.md)
-## GitOps for Chaos Management
+**GitOps for Chaos Management**
 Use GitOps as an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. With the demands made on today's infrastructure, it has become increasingly crucial to implement infrastructure automation. Modern infrastructure needs to be elastic so that it can effectively manage the cloud resources needed for continuous deployments.
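To make the "GitOps for Chaos Management" principle above concrete, here is one possible sketch: chaos manifests are versioned in a Git repository and continuously synced to the cluster by Argo CD. Argo CD is just one example of a GitOps tool (the docs do not prescribe it), and the repository URL, paths, and namespaces below are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: chaos-experiments                  # hypothetical Application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/chaos-manifests.git   # hypothetical repo of chaos CRs
    targetRevision: main
    path: experiments                      # directory holding ChaosEngine and workflow manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: litmus
  syncPolicy:
    automated:
      prune: true                          # keep the cluster state matching Git
      selfHeal: true
```

With a setup like this, enabling, tuning, or reverting chaos goes through pull requests, giving the same version control, review, and compliance trail the principle describes.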


@@ -20,7 +20,7 @@ A variety of infrastructure, ranging from virtual machines to bare metal and a c
 Your application's resilience really depends more on the underlying stack than on your application itself. Once your application is stable, the service resilience (which runs on Kubernetes) depends on other components and infrastructure more than 90% of the time.
-Thus, it is important to verify your application's resilience whenever a change has happened in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of chaos engineering. The process of "**continuously verifying** if your service is resilient against faults" is called chaos engineering.
+Thus, it is important to verify your application's resilience whenever a change happens in the underlying stack. **Keep verifying** is the key. Robust testing before upgrades is not good enough, mainly because you cannot possibly consider all sorts of faults during upgrade testing. This introduces the concept of Chaos Engineering. The process of **continuously verifying that your service is resilient against faults** is called Chaos Engineering.
 ## What is a chaos experiment?


@@ -27,7 +27,7 @@ Installation of Litmus can be done using either of the below methods
 - [Helm3](#install-litmus-using-helm) chart
 - [Kubectl](#install-litmus-using-kubectl) yaml spec file
-### **Install Litmus using Helm **
+### Install Litmus using Helm
 The helm chart will install all the required service account configuration and ChaosCenter.
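As a rough illustration of the Helm path described in the hunk above, the commands below add the LitmusChaos Helm repository and install the chart into a `litmus` namespace. The release name and namespace are illustrative, and the chart name should be verified against the Litmus documentation for your version.

```bash
# Add the LitmusChaos Helm repository and refresh the local index
helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
helm repo update

# Install ChaosCenter; release name "chaos" and namespace "litmus" are illustrative
helm install chaos litmuschaos/litmus --namespace=litmus --create-namespace
```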
@@ -97,9 +97,9 @@ Visit https://docs.litmuschaos.io/ to find more info.
 > **Note:** Litmus uses Kubernetes CRDs to define chaos intent. Helm3 handles CRDs better than Helm2. Before you start running a chaos experiment, verify if Litmus is installed correctly.
-### **Install Litmus using kubectl **
+### Install Litmus using kubectl
-#### **Set the namespace on which you want to install Litmus ChaosCenter**
+#### Set the namespace on which you want to install Litmus ChaosCenter
 > Create a namespace `kubectl create ns <Your Namespace>`
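For example, assuming you choose `litmus` as the namespace (any name works, as the step above notes):

```bash
kubectl create ns litmus   # create the namespace that will hold ChaosCenter
kubectl get ns litmus      # confirm it exists before moving on
```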
@@ -114,7 +114,7 @@ NAME     STATUS   AGE
 litmus   Active   2s
 ```
-#### **Install the required Litmus CRDs**
+#### Install the required Litmus CRDs
 The cluster-admin or an equivalent user with the right permissions is required to install the CRDs upfront.
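A hedged sketch of that step: the CRDs ship as a single manifest in the Litmus project, and a cluster-admin applies it once before installing ChaosCenter. The manifest URL differs per Litmus release, so it is left as a placeholder here; take the real one from the official install docs.

```bash
# Run as cluster-admin (or a user allowed to create CustomResourceDefinitions).
# Replace the placeholder with the CRD manifest URL for your Litmus version.
kubectl apply -f <litmus-portal-crds-manifest-url>
```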
@@ -136,7 +136,7 @@ customresourcedefinition.apiextensions.k8s.io/chaosresults.litmuschaos.io create
 customresourcedefinition.apiextensions.k8s.io/eventtrackerpolicies.eventtracker.litmuschaos.io created
 ```
-#### **Install Litmus ChaosCenter**
+#### Install Litmus ChaosCenter
 Applying the manifest file will install all the required service account configuration and ChaosCenter.
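For illustration, applying that manifest might look like the following; the URL is a placeholder because it changes per Litmus release, and the target namespace is assumed to be `litmus`.

```bash
# Replace the placeholder with the ChaosCenter manifest URL for your Litmus version.
kubectl apply -f <litmus-chaoscenter-manifest-url> -n litmus
```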
@@ -178,11 +178,11 @@ service/mongo-service created
 service/mongo-headless-service created
 ```
-## **Verify your installation**
+## Verify your installation
 ---
-#### **Verify if the frontend, server, and database pods are running**
+#### Verify if the frontend, server, and database pods are running
 - Check the pods in the namespace where you installed Litmus:
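For example, assuming Litmus was installed into the `litmus` namespace (pod names vary by version; all pods should eventually report `Running`):

```bash
kubectl get pods -n litmus
```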
@@ -227,7 +227,7 @@ kubectl set env deployment/litmusportal-server -n litmus --containers="graphql-s
 ---
-#### **Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)**
+#### Verify Successful Registration of the Self Chaos Delegate post [Account Configuration](setup-without-ingress)
 Once the project is created, the cluster is automatically registered as a chaos target via the installation of the [Chaos Delegate](../getting-started/resources.md#chaosagents). This is represented as the [Self Chaos Delegate](../getting-started/resources.md#types-of-chaosagents) in [ChaosCenter](../getting-started/resources.md#chaosagents).
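A rough command-line cross-check of that registration, assuming the Self Chaos Delegate landed in the `litmus` namespace; component names differ across Litmus versions, so treat them as illustrative:

```bash
# Delegate components (e.g. a subscriber and a chaos operator) should be Running;
# the delegate should also show as ACTIVE under ChaosAgents in the ChaosCenter UI.
kubectl get pods -n litmus
```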