Merge branch 'master' into feature/update-chaosexperimentCR-create-time

Saranya Jena 2024-11-22 12:58:54 +05:30 committed by GitHub
commit f40a57dbd3
98 changed files with 112472 additions and 235 deletions

View File

@ -1,86 +1,85 @@
This is a list of organizations that have publicly acknowledged usage of LitmusChaos and shared details of how they are leveraging it for chaos engineering.
Please send a PR to this file (along with details in a respective [org](./adopters/organizations) folder) to add/remove entries. If you are an independent user
and wish to share your adoption story, please raise a PR against the [users](USERS.md) file.
This is a list of organizations that have publicly acknowledged usage of LitmusChaos and shared details of how they are leveraging it for chaos engineering.
Please send a PR to this file (along with details in a respective [org](./adopters/organizations) folder) to add/remove entries. If you are an independent user
and wish to share your adoption story, please raise a PR against the [users](USERS.md) file.
These organizations have been broadly classified on the basis of how they contribute to the ecosystem: as vendors, as solution providers or as pure end-users of
cloud-native technologies. Also included in this list are CNCF (or other) open-source projects that have integrated with Litmus or use it as part of their release/delivery process.
These organizations have been broadly classified on the basis of how they contribute to the ecosystem: as vendors, as solution providers or as pure end-users of
cloud-native technologies. Also included in this list are CNCF (or other) open-source projects that have integrated with Litmus or use it as part of their release/delivery process.
### Cloud-Native End Users
### Cloud-Native End Users
The companies listed here conform to [CNCF's definition of end-users](https://github.com/cncf/enduser-public#cncf-end-user-community).
The companies listed here conform to [CNCF's definition of end-users](https://github.com/cncf/enduser-public#cncf-end-user-community).
| Organization | Usecase | Details |
| :--- | :--- | :--- |
|[AnutaNetworks](https://www.anutanetworks.com/)|Chaos Engineering as part of SRE practices in QA environments |[Our Story](adopters/organizations/anutanetworks.md)|
|[AkriData](https://www.akridata.com/)|Pod Chaos Experiments in AWS & Azure|[Our Story](adopters/organizations/akridata.md)|
|[Halodoc](https://www.halodoc.com/)|Resiliency Validation of Kubernetes Workloads and Infra on AWS |[Our Story](adopters/organizations/halodoc.md)|
|[Intuit](https://www.intuit.com?utm_source=github&utm_campaign=litmuschaos_repo)|[Argo Based Chaos Workflows](https://youtu.be/Uwqop-s99LA?t=720)|[Our Story](adopters/organizations/intuit.md)|
|[Kitopi](https://www.kitopi.com/)|Chaos Engineering as part of SRE practice|[Our Story](adopters/organizations/kitopi.md)|
|[Lenskart](https://www.lenskart.com/)|Chaos Engineering for better Resiliency | [Our Story](adopters/organizations/lenskart.md)|
|[Mercedes](https://www.mercedes-benz.com/)|Resiliency validation for applications|[Our Story](adopters/organizations/mercedes.md)|
|[Orange](https://www.orange.com)|[Cloud Infra Resiliency](https://youtu.be/UOhjFbCrncw?list=PLBuYBMjBLBzHPuPsvdbJvKu1KxSowWDYl&t=186...a)|[Our Story](adopters/organizations/orange.md)|
|[Pôle Emploi](https://www.pole-emploi.fr)|Chaos Engineering as part of SRE practice|[Our Story](adopters/organizations/pole_emploi.md)|
|[iFood](https://www.ifood.com.br/)|Chaos Engineering for a Food Delivery Platform|[Our Story](adopters/organizations/ifood.md)|
|[FIS](https://www.fisglobal.com/en/)|Larger SRE Transformation with Chaos Engineering|[Our Story](adopters/organizations/fis.md)|
|[Adidas](https://adidas.com/)|Implementing Chaos Engineering as a practice at Adidas|[Our Story](adopters/organizations/adidas.md)|
|[Cyren](https://www.cyren.com/)|Implementing Chaos Engineering as a practice at Cyren|[Our Story](https://www.infoq.com/articles/chaos-engineering-cloud-native/)|
|[AB-Inbev](https://www.ab-inbev.com/)|Implementing Chaos Engineering as a practice at AB-Inbev|[Our Story](adopters/organizations/abinbev.md)|
|[Group Baobab](https://baobab.com/en/home/)| Orchestrating Chaos using LitmusChaos at Baobab|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1647648343)|
|[Flipkart](https://www.flipkart.com/)|Chaos Engineering at Flipkart|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1966904935)|
|[Talend](https://www.talend.com/)|Chaos Engineering for our pipelines and weekly checks|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-2005254600)|
|[Delivery Hero](https://www.deliveryhero.com/)|Enhancing Resiliency of Our Services|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1997465958)|
|[Wingie Enuygun Company](https://www.wingie.com/)|Chaos Engineering for an Online Travel and Finance Platform|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-2331265698)|
| Organization | Usecase | Details |
| :------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- |
| [AnutaNetworks](https://www.anutanetworks.com/) | Chaos Engineering as part of SRE practices in QA environments | [Our Story](adopters/organizations/anutanetworks.md) |
| [AkriData](https://www.akridata.com/) | Pod Chaos Experiments in AWS & Azure | [Our Story](adopters/organizations/akridata.md) |
| [Halodoc](https://www.halodoc.com/) | Resiliency Validation of Kubernetes Workloads and Infra on AWS | [Our Story](adopters/organizations/halodoc.md) |
| [Intuit](https://www.intuit.com?utm_source=github&utm_campaign=litmuschaos_repo) | [Argo Based Chaos Workflows](https://youtu.be/Uwqop-s99LA?t=720) | [Our Story](adopters/organizations/intuit.md) |
| [Kitopi](https://www.kitopi.com/) | Chaos Engineering as part of SRE practice | [Our Story](adopters/organizations/kitopi.md) |
| [Lenskart](https://www.lenskart.com/) | Chaos Engineering for better Resiliency | [Our Story](adopters/organizations/lenskart.md) |
| [Mercedes](https://www.mercedes-benz.com/) | Resiliency validation for applications | [Our Story](adopters/organizations/mercedes.md) |
| [Orange](https://www.orange.com) | [Cloud Infra Resiliency](https://youtu.be/UOhjFbCrncw?list=PLBuYBMjBLBzHPuPsvdbJvKu1KxSowWDYl&t=186...a) | [Our Story](adopters/organizations/orange.md) |
| [Pôle Emploi](https://www.pole-emploi.fr) | Chaos Engineering as part of SRE practice | [Our Story](adopters/organizations/pole_emploi.md) |
| [iFood](https://www.ifood.com.br/) | Chaos Engineering for a Food Delivery Platform | [Our Story](adopters/organizations/ifood.md) |
| [FIS](https://www.fisglobal.com/en/) | Larger SRE Transformation with Chaos Engineering | [Our Story](adopters/organizations/fis.md) |
| [Adidas](https://adidas.com/) | Implementing Chaos Engineering as a practice at Adidas | [Our Story](adopters/organizations/adidas.md) |
| [Cyren](https://www.cyren.com/) | Implementing Chaos Engineering as a practice at Cyren | [Our Story](https://www.infoq.com/articles/chaos-engineering-cloud-native/) |
| [AB-Inbev](https://www.ab-inbev.com/) | Implementing Chaos Engineering as a practice at AB-Inbev | [Our Story](adopters/organizations/abinbev.md) |
| [Group Baobab](https://baobab.com/en/home/) | Orchestrating Chaos using LitmusChaos at Baobab | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1647648343) |
| [Flipkart](https://www.flipkart.com/) | Chaos Engineering at Flipkart | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1966904935) |
| [Talend](https://www.talend.com/) | Chaos Engineering for our pipelines and weekly checks | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-2005254600) |
| [Delivery Hero](https://www.deliveryhero.com/) | Enhancing Resiliency of Our Services | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1997465958) |
| [Wingie Enuygun Company](https://www.wingie.com/) | Chaos Engineering for an Online Travel and Finance Platform | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-2331265698) |
| [EmiratesNBD](https://www.emiratesnbd.com) | Chaos Engineering for Government Owned Bank | [Our Story](adopters/organizations/emirates-nbd.md) |
### Cloud-Native Vendors
The companies listed here sell cloud-native products/technologies. They use LitmusChaos as part of the resiliency validation of these products OR as part of other
devops/reliability pipelines (such as for customer portals/websites, etc.) within the company.
| Organization | Usecase | Details |
| :--- | :--- | :--- |
|[KubeSphere](https://kubesphere.io/)|Chaos Engineering|To Be Added|
|[Kublr](https://kublr.com/)|Identify the weak spots and components prone to failures under stress|[Our Story](adopters/organizations/kublr.md)|
|[MayaData](https://mayadata.io)|[Director Online](https://director.mayadata.io/)|[Our Story](adopters/organizations/mayadata.md)|
|[NetApp](https://www.netapp.com)|[Chaos Engineering](https://www.netapp.com/us/index.aspx)|[Our Story](adopters/organizations/netapp.md)|
|[Okteto](https://okteto.com)|[Okteto-Litmus Integration](https://okteto.com/blog/chaos-engineering-with-litmus/)| [Our Story](adopters/organizations/okteto.md)|
|[RedHat](https://www.redhat.com/en)|[RedHat Openshift Virtualization Maturity](https://www.youtube.com/watch?v=VITGHJ47gx8&list=PLBuYBMjBLBzHPuPsvdbJvKu1KxSowWDYl&index=7)|[Our Story](adopters/organizations/redhat.md)|
|[VMWare](https://www.vmware.com/)|Chaos Engineering in CD|[Our Story](adopters/organizations/vmware.md)|
|[Zebrium](https://www.zebrium.com?utm_source=github&utm_campaign=litmuschaos_repo)|[Zebrium K8s Chaos Project](https://github.com/zebrium/zebrium-kubernetes-demo)|[Our Story](adopters/organizations/zebrium.md)|
|[Container Solutions](https://www.container-solutions.com/)|Building Chaos Engineering for E-Commerce Customers|[Our Story](adopters/organizations/containersolutions.md)|
|[Infracloud Technologies](https://www.infracloud.io/)|Developing Resiliency Framework at Infracloud|[Our Story](adopters/organizations/infracloud.md)|
|[IFS](https://www.ifs.com/)|Checking Resiliency with LitmusChaos at IFS|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1966428068)|
|[Ericsson](https://www.ericsson.com/en)|Chaos Engineering with Open Source LitmusChaos|[Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1985348431)|
devops/reliability pipelines (such as for customer portals/websites, etc.) within the company.
| Organization | Usecase | Details |
| :--------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- |
| [KubeSphere](https://kubesphere.io/) | Chaos Engineering | To Be Added |
| [Kublr](https://kublr.com/) | Identify the weak spots and components prone to failures under stress | [Our Story](adopters/organizations/kublr.md) |
| [MayaData](https://mayadata.io) | [Director Online](https://director.mayadata.io/) | [Our Story](adopters/organizations/mayadata.md) |
| [NetApp](https://www.netapp.com) | [Chaos Engineering](https://www.netapp.com/us/index.aspx) | [Our Story](adopters/organizations/netapp.md) |
| [Okteto](https://okteto.com) | [Okteto-Litmus Integration](https://okteto.com/blog/chaos-engineering-with-litmus/) | [Our Story](adopters/organizations/okteto.md) |
| [RedHat](https://www.redhat.com/en) | [RedHat Openshift Virtualization Maturity](https://www.youtube.com/watch?v=VITGHJ47gx8&list=PLBuYBMjBLBzHPuPsvdbJvKu1KxSowWDYl&index=7) | [Our Story](adopters/organizations/redhat.md) |
| [VMWare](https://www.vmware.com/) | Chaos Engineering in CD | [Our Story](adopters/organizations/vmware.md) |
| [Zebrium](https://www.zebrium.com?utm_source=github&utm_campaign=litmuschaos_repo) | [Zebrium K8s Chaos Project](https://github.com/zebrium/zebrium-kubernetes-demo) | [Our Story](adopters/organizations/zebrium.md) |
| [Container Solutions](https://www.container-solutions.com/) | Building Chaos Engineering for E-Commerce Customers | [Our Story](adopters/organizations/containersolutions.md) |
| [Infracloud Technologies](https://www.infracloud.io/) | Developing Resiliency Framework at Infracloud | [Our Story](adopters/organizations/infracloud.md) |
| [IFS](https://www.ifs.com/) | Checking Resiliency with LitmusChaos at IFS | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1966428068) |
| [Ericsson](https://www.ericsson.com/en) | Chaos Engineering with Open Source LitmusChaos | [Our Story](https://github.com/litmuschaos/litmus/issues/2191#issuecomment-1985348431) |
| [OutSystems](https://www.outsystems.com/) | Chaos Engineering for Low-Code Platform | [Our Story](adopters/organizations/outsystems.md) |
### Cloud-Native Solutions & Service Providers
The companies listed here provide solutions around cloud-native technologies to other organizations/clients and are often involved in their implementation/offer services.
They use LitmusChaos as the tool of choice for carrying out chaos experiments in a client environment or in some cases use it as a building block of a larger bespoke software/devops platform.
They use LitmusChaos as the tool of choice for carrying out chaos experiments in a client environment or in some cases use it as a building block of a larger bespoke software/devops platform.
| Organization | Usecase | Details |
| :--- | :--- | :--- |
|[Klanik](https://www.klanik.com)|Chaos Engineering as part of SRE practice|[Our Story](adopters/organizations/klanik.md)|
| [Neudesic](https://www.neudesic.com/) | Chaos Engineering | [Our Story](adopters/organizations/neudesic.md) |
|[WeScale](https://www.wescale.fr)|[Chaos Engineering](https://blog.wescale.fr/2020/03/19/le-guide-de-chaos-engineering-partie-2/)|[Our Story](adopters/organizations/wescale.md)|
|[Wipro](https://www.wipro.com/en-IN/infrastructure/wipros-appanywhere/?utm_source=github&utm_campaign=litmuschaos_repo)|[Wipro AppAnywhere](https://www.wipro.com/en-IN/infrastructure/wipros-appanywhere/?utm_source=github&utm_campaign=litmuschaos_repo)|[Our Story](adopters/organizations/wipro.md)|
|[HCL Cloud Native Labs](https://www.hcltech.com/)|SRE Enablement Service|[Our Story(TBA)]|
|[CI&T](https://ciandt.com/us/en-us)|Chaos Engineering Implementation|[Our Story](adopters/organizations/ci&t.md)|
| Organization | Usecase | Details |
| :---------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------- |
| [Klanik](https://www.klanik.com) | Chaos Engineering as part of SRE practice | [Our Story](adopters/organizations/klanik.md) |
| [Neudesic](https://www.neudesic.com/) | Chaos Engineering | [Our Story](adopters/organizations/neudesic.md) |
| [WeScale](https://www.wescale.fr) | [Chaos Engineering](https://blog.wescale.fr/2020/03/19/le-guide-de-chaos-engineering-partie-2/) | [Our Story](adopters/organizations/wescale.md) |
| [Wipro](https://www.wipro.com/en-IN/infrastructure/wipros-appanywhere/?utm_source=github&utm_campaign=litmuschaos_repo) | [Wipro AppAnywhere](https://www.wipro.com/en-IN/infrastructure/wipros-appanywhere/?utm_source=github&utm_campaign=litmuschaos_repo) | [Our Story](adopters/organizations/wipro.md) |
| [HCL Cloud Native Labs](https://www.hcltech.com/) | SRE Enablement Service | [Our Story(TBA)] |
| [CI&T](https://ciandt.com/us/en-us) | Chaos Engineering Implementation | [Our Story](adopters/organizations/ci&t.md) |
### Cloud-Native OSS Projects
The projects listed here, in most cases use LitmusChaos for testing the resilience of the respective opensource framework/platform
(in a manual or automated fashion) or in other cases integrate with it via a plugin/service to add resilience validation capability to their
existing functions.
| Organization | Usecase | Details |
| :--- | :--- | :--- |
|[Keptn](https://keptn.sh)|[Chaos Engineering integration in CD](https://www.youtube.com/watch?v=aa5SzQmv4EQ)|[Our Story](https://medium.com/keptn/part-2-evaluating-application-resiliency-with-keptn-and-litmuschaos-use-case-and-demo-f43b264a2294)|
|[KubeFlare](https://github.com/raspbernetes)|Resilience of microservices on ARM64 (Raspberry Pi) based clusters|[Our Story](adopters/organizations/raspbernetes.md)|
|[OpenEBS](https://openebs.io/)|[Openebs-CI](https://openebs.ci/)|[Our Story](adopters/organizations/openebs.md)|
|[Pravega](https://pravega.io/)|To inject faults while exercising quality tests on our product|[Our Story](adopters/organizations/pravega.md)|
|[Red Hat](https://www.redhat.com/en)|[Chaos Engineering with Kraken](https://github.com/cloud-bulldozer/kraken)|[Our Story](adopters/organizations/redhat_kraken.md)|
|[Iter8](https://iter8.tools)|[SLO validation with chaos injection](https://iter8.tools/0.7/tutorials/deployments/slo-validation-chaos/)|To Be Added|
|[CNF Test Suite](https://github.com/cncf/cnf-testsuite)|To validate the resilience of Cloud Native Network Functions (CNFs)|[Our Story](adopters/organizations/cnftestsuite.md)|
|[APACHE APISIX](https://apisix.apache.org/)|Practicing Chaos Engineering using Litmus in the Apache APISIX Ingress.|[Our Story](adopters/organizations/apisix.md)|
### Cloud-Native OSS Projects
The projects listed here, in most cases use LitmusChaos for testing the resilience of the respective opensource framework/platform
(in a manual or automated fashion) or in other cases integrate with it via a plugin/service to add resilience validation capability to their
existing functions.
| Organization | Usecase | Details |
| :------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- |
| [Keptn](https://keptn.sh) | [Chaos Engineering integration in CD](https://www.youtube.com/watch?v=aa5SzQmv4EQ) | [Our Story](https://medium.com/keptn/part-2-evaluating-application-resiliency-with-keptn-and-litmuschaos-use-case-and-demo-f43b264a2294) |
| [KubeFlare](https://github.com/raspbernetes) | Resilience of microservices on ARM64 (Raspberry Pi) based clusters | [Our Story](adopters/organizations/raspbernetes.md) |
| [OpenEBS](https://openebs.io/) | [Openebs-CI](https://openebs.ci/) | [Our Story](adopters/organizations/openebs.md) |
| [Pravega](https://pravega.io/) | To inject faults while exercising quality tests on our product | [Our Story](adopters/organizations/pravega.md) |
| [Red Hat](https://www.redhat.com/en) | [Chaos Engineering with Kraken](https://github.com/cloud-bulldozer/kraken) | [Our Story](adopters/organizations/redhat_kraken.md) |
| [Iter8](https://iter8.tools) | [SLO validation with chaos injection](https://iter8.tools/0.7/tutorials/deployments/slo-validation-chaos/) | To Be Added |
| [CNF Test Suite](https://github.com/cncf/cnf-testsuite) | To validate the resilience of Cloud Native Network Functions (CNFs) | [Our Story](adopters/organizations/cnftestsuite.md) |
| [APACHE APISIX](https://apisix.apache.org/) | Practicing Chaos Engineering using Litmus in the Apache APISIX Ingress. | [Our Story](adopters/organizations/apisix.md) |

View File

@ -60,7 +60,7 @@ representative at an online or offline event.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
prithvi.raj@harness.io.
sayan.mondal@harness.io.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the

View File

@ -12,9 +12,8 @@ chaos-sdk |go/python/ansible sdk |litmus-go,litmus-python,litmu
e2e |e2e-suite, e2e-dashboard |litmus-e2e |@uditgaurav, @Jonsy13 |@neelanjan00, @S-ayanide, @avaakash |
integrations |CI/CD plugins, wrappers |chaos-ci-lib, gitlab-templates, github-actions |@uditgaurav, @ksatchit |@ispeakc0de, @Adarshkumar14 |
helm-charts |control-plane, agent, experiments|litmus-helm |@Jasstkn, @ispeakc0de, @imrajdas, @Jonsy13 |@ksatchit, @uditgaurav |
documentation |platform-docs, experiment-docs |litmus-docs, mkdocs |@neelanjan00, @umamukkara, @ispeakc0de |@ksatchit, @ajeshbaby, @amityt, @uditgaurav |
websites |project website, chaoshub, documentation |litmus-website, charthub, litmus-docs |@umamukkara, @arkajyotiMukherjee, @S-ayanide |@SahilKr24, @hrishavjha, @ajeshbaby |
websites |project website, chaoshub, documentation |litmus-website, charthub, litmus-docs |@SahilKr24, @hrishavjha, @ajeshbaby |@umamukkara, @S-ayanide |
### Consolidated Maintainers List
```
@ -36,6 +35,9 @@ websites |project website, chaoshub, documentation |litmus-website, cha
"Udit Gaurav",@uditgaurav,udit.gaurav@harness.io
"Vedant Shrotria",@Jonsy13,vedant.shrotria@harness.io
"Uma Mukkara",@umamukkara,umasankar.mukkara@harness.io
"Sahil KR",@SahilKr24,sahil.kumar@harness.io
"Ajesh Baby",@ajeshbaby,ajesh.baby@harness.io
"Hrishav Jha",@hrishavjha,hrishav.jha@harness.io
```
### Consolidated Reviewers List
@ -43,9 +45,6 @@ websites |project website, chaoshub, documentation |litmus-website, cha
```
"Adarsh Kumar",@Adarshkumar14,adarsh.kumar@harness.io
"Akash Srivastava",@avaakash,akash.srivastava@harness.io
"Ajesh Baby",@ajeshbaby,ajesh.baby@harness.io
"Sahil Kumar",@SahilKr24,sahil.kumar@harness.io
"Hrishav Kumar Jha",@hrishavjha,hrishav.kumar@harness.io
```
### Emeritus Maintainers

View File

@ -105,9 +105,18 @@ Fill out the [LitmusChaos Meetings invite form](https://forms.gle/xYZyZ2gTWMqz7x
### Videos
- [What if Your System Experiences an Outage? Let's Build a Resilient Systems with Chaos Engineering](https://www.youtube.com/watch?v=3mjGEh905u4&t=1s) @ [CNCF](https://www.youtube.com/@cncf)
- [Enhancing Cyber Resilience Through Zero Trust Chaos Experiments in Cloud Native Environments](https://youtu.be/BelNIk4Bkng) @ [CNCF](https://www.youtube.com/@cncf)
- [LitmusChaos, with Karthik Satchitanand](https://www.youtube.com/watch?v=ks2R57hhFZk&t=503s) @ [The Kubernetes Podcast from Google](https://www.youtube.com/@TheKubernetesPodcast)
- [Cultural Shifts: Fostering a Chaos First Mindset in Platform Engineering](https://www.youtube.com/watch?v=WUXFKxgZRsk) @ [CNCF](https://www.youtube.com/@cncf)
- [Fire in the Cloud: Bringing Managed Services Under the Ambit of Cloud-Native Chaos Engineering](https://www.youtube.com/watch?v=xCDQp5E3VUs) @ [CNCF](https://www.youtube.com/@cncf)
- [Security Controls for Safe Chaos Experimentation](https://www.youtube.com/watch?v=whCkvLKAw74) @ [CNCF](https://www.youtube.com/@cncf)
- [Chaos Engineering For Hybrid Targets With LitmusChaos](https://www.youtube.com/watch?v=BZL-ngvbpbU&t=751s) @ [CNCF](https://www.youtube.com/@cncf)
- [Cloud Native Live: Litmus Chaos Engine and a microservices demo app](https://youtu.be/hOghvd9qCzI)
- [Chaos Engineering hands-on - An SRE ideating Chaos Experiments and using LitmusChaos | July 2022](https://youtu.be/_x_7SiesjF0)
- [Achieve Digital Product Resiliency with Chaos Engineering](https://youtu.be/PQrmBHgk0ps)
- [Case Study: Bringing Chaos Engineering to the Cloud Native Developers](https://youtu.be/KSl-oKk6TPA) @ [CNCF](https://www.youtube.com/@cncf)
- [Cloud Native Chaos Engineering with LitmusChaos](https://www.youtube.com/watch?v=ItUUqejdXr0) @ [CNCF](https://www.youtube.com/@cncf)
- [How to create Chaos Experiments with Litmus | Litmus Chaos tutorial](https://youtu.be/mwu5eLgUKq4) @ [Is it Observable](https://www.youtube.com/c/IsitObservable)
- [Cloud Native Chaos Engineering Preview With LitmusChaos](https://youtu.be/pMWqhS-F3tQ)
- [Get started with Chaos Engineering with Litmus](https://youtu.be/5CI8d-SKBfc) @ [Containers from the Couch](https://www.youtube.com/c/ContainersfromtheCouch)
@ -129,7 +138,6 @@ Fill out the [LitmusChaos Meetings invite form](https://forms.gle/xYZyZ2gTWMqz7x
Community Blogs:
- Daniyal Rayn: [Do I need Chaos Engineering on my environment? Trust me you need it!](https://maveric-systems.com/blog/do-i-need-chaos-engineering-on-my-environment-trust-me-you-need-it/)
- LiveWyer: [LitmusChaos Showcase: Chaos Experiments in a Helm Chart Test Suite](https://livewyer.io/blog/2021/03/22/litmuschaos-showcase-chaos-experiments-in-a-helm-chart-test-suite/)
- Jessica Cherry: [Test Kubernetes cluster failures and experiments in your terminal](https://opensource.com/article/21/6/kubernetes-litmus-chaos)
- Yang Chuansheng(KubeSphere): [KubeSphere 部署 Litmus 至 Kubernetes 开启混沌实验](https://kubesphere.io/zh/blogs/litmus-kubesphere/)
@ -138,8 +146,6 @@ Community Blogs:
- Akram Riahi(WeScale):[Chaos Engineering : Litmus sous tous les angles](https://blog.wescale.fr/2021/03/11/chaos-engineering-litmus-sous-tous-les-angles/)
- Prashanto Priyanshu(LensKart):[Lenskart's approach to Chaos Engineering - Part 2](https://blog.lenskart.com/lenskarts-approach-to-chaos-engineering-part-2-6290e4f3a74e)
- DevsDay.ru(Russian):[LitmusChaos at Kubecon EU '21](https://devsday.ru/blog/details/40746)
- Ryan Pei(Armory): [LitmusChaos in your Spinnaker Pipeline](https://www.armory.io/blog/litmuschaos-in-your-spinnaker-pipeline/)
- David Gildeh(Zebrium): [Using Autonomous Monitoring with Litmus Chaos Engine on Kubernetes](https://www.zebrium.com/blog/using-autonomous-monitoring-with-litmus-chaos-engine-on-kubernetes)
## Adopters

View File

@ -11,47 +11,62 @@ This document captures only the high level roadmap items. For the detailed backl
- Per-experiment minimal RBAC permissions definition
- Creation of 'scenarios' involving multiple faults via Argo-based Chaos Workflows (with examples for microservices apps like podtato-head and sock-shop)
- Cross-Cloud Control Plane (Litmus Portal) to perform chaos against remote clusters
- Helm3 charts for LitmusChaos (control plane and experiments)
- Helm charts for LitmusChaos control plane
- Helm Chart for LitmusChaos execution Plane
- Support for admin mode (centralized chaos management) as well as namespaced mode (multi-tenant clusters)
- Continuous chaos via flexible schedules, with support to halt/resume or (manual/conditional) abort experiments
- Provide complete workflow termination/abort capability
- Generation of observability data via Prometheus metrics and Kubernetes chaos events for experiments
- Steady-State hypothesis validation before, during and after chaos injection via different probe types
- Support for Docker, Containerd & CRI-O runtime
- Support for scheduling policies (nodeSelector, tolerations) and resource definitions for chaos pods
- ChaosHub refactor for 2.x user flow
- Support for ARM64 nodes
- Minimized role permissions for Chaos Service Accounts
- Scaffolding scripts (SDK) to help bootstrap a new chaos experiment in Go, Python, Ansible
- Support orchestration of non-native chaos libraries via the BYOC (Bring-Your-Own-Chaos) model
- Support for OpenShift platform
- Workflow YAML linter addition
- Integration tests & e2e framework creation for control plane components and chaos experiments
- Documentation (usage guide for chaos operator, resources & developer guide for new experiment creation)
- Improved documentation and tutorials for Litmus Portal based execution flow
- Add architecture details & design resources
- Define community sync up cadence and structure
------
### In-Progress (Under Active Development)
### In-Progress (Under Design OR Active Development)
- Support for all ChaosEngine schema elements within workflow wizard
- Workflow YAML linter addition
- Minimized role permissions for Chaos Service Accounts
- Chaos-center users account to chaosService account map
- Provide complete workflow termination/abort capability
- Cross-hub experiment support within a Chaos Workflow
- Helm Chart for Chaos Execution Plane
- Enhanced CRD schema for ChaosEngine to support advanced CommandProbe configuration
- Support for S3 artifact sink (helps performance/benchmark runs)
- ChaosHub refactor for 2.x user flow
- Chaos experiments against virtual machines and cloud infrastructure (AWS, GCP, Azure, VMWare, Baremetal)
- Improved documentation and tutorials for Litmus Portal based execution flow
- Off the shelf chaos-integrated monitoring dashboards for application chaos categories
- Support for user defined chaos experiment result definition
- Increased fault injection types (IOChaos, HTTPChaos, JVMChaos)
- Special Interest Groups (SIGs) around specific areas in the project to take the roadmap forward
- Native Chaos Workflows with redesigned subscriber to improve resource delegation, enabling seamless and efficient execution of chaos workflows within Kubernetes clusters.
- Introduce transient runners to improve resource efficiency during chaos experiments by dynamically creating and cleaning up chaos runner instances.
- Implement Kubernetes connectors to enable streamlined integration with Kubernetes clusters, providing simplified authentication and configuration management.
- Integrate with tools like K8sGPT to generate insightful reports that identify potential weaknesses in your Kubernetes environment before executing chaos experiments.
- Add Terraform support for defining and executing chaos experiments on infrastructure components, enabling infrastructure-as-code-based chaos engineering.
- Add SDK support for Python and Java, with potential extensions to other programming languages based on community interest.
- Include in-product documentation, such as tooltips, to improve user experience and ease of adoption.
- Implement the litmus-java-sdk with a targeted v1.0.0 release by Q1.
- Integrate distributed tracing by adding attributes or events to spans, and create an OpenTelemetry demo showcasing chaos engineering observability.
- Enhance the exporter to function as an OpenTelemetry collector, providing compatibility with existing observability pipelines.
- Add support for DocumentDB by replacing certain MongoDB operations, improving flexibility for database chaos.
- Upgrade Kubernetes SDK from version 1.21 to 1.26 to stay aligned with the latest Kubernetes features and enhancements.
- Refactor the chaos charts to:
- Replace latest tags with specific, versioned image tags.
- Consolidate multiple images into a single optimized image.
- Update GraphQL and authentication API documentation for improved clarity and user guidance.
- Add comprehensive unit and fuzz tests to enhance code reliability and robustness.
- Implement out-of-the-box Slack integration for better collaboration and monitoring during chaos experiments.
------
### Backlog
- Pre-defined chaos workflows to inject chaos during application benchmark runs
- Support for cloudevents compliant chaos events
- Improved application Chaos Suites for various CNCF projects
- Validation support for all ChaosEngine schema elements within workflow wizard
- Chaos-center users account to chaosService account map
- Cross-hub experiment support within a Chaos Workflow
- Enhanced CRD schema for ChaosEngine to support advanced CommandProbe configuration
- Support for S3 artifact sink (helps performance/benchmark runs)
- Chaos experiments against virtual machines and cloud infrastructure (AWS, GCP, Azure, VMWare, Baremetal)
- Off the shelf chaos-integrated monitoring dashboards for application chaos categories
- Support for user defined chaos experiment result definition
- Increased fault injection types (IOChaos, HTTPChaos, JVMChaos)
- Special Interest Groups (SIGs) around specific areas in the project to take the roadmap forward

View File

@ -0,0 +1,13 @@
## Emirates NBD
[Emirates NBD](https://www.emiratesnbd.com) is Dubai's government-owned bank and is one of the largest banking groups in the Middle East in terms of assets.
### **Why do we use Litmus?**
Resilience is a key aspect in creating fault-tolerant environments, and leveraging tools like Litmus has been instrumental in automating resilience testing. Litmus has enabled us to simulate real-time chaos scenarios, allowing us to thoroughly verify the robustness of both our infrastructure and applications.
### **How do we use Litmus?**
We began with a proof of concept (POC) on a playground cluster. While we explored other tools during this process, Litmus stood out significantly, not only in its capabilities but also due to its excellent user interface. Although we faced a few challenges during the initial setup of Litmus on OpenShift, the team provided timely support, helping us overcome these obstacles and successfully complete the POC.
Now, we've successfully deployed Litmus in a non-production cluster environment, and our SRE team is in the process of transitioning from manual chaos testing to automated chaos tests. This shift will enable us to schedule, automate, and efficiently track the outcomes of these tests, enhancing the resilience of our systems.

View File

@ -0,0 +1,30 @@
## OutSystems
[OutSystems](https://www.outsystems.com/) is a low-code development platform which provides tools for companies to develop, deploy and manage omnichannel enterprise applications. OutSystems was founded in 2001 in Lisbon, Portugal. In June 2018 OutSystems secured a $360M round of funding from KKR and Goldman Sachs and reached the status of Unicorn.
### **Leveraging Litmus Chaos Engineering in Kubernetes Infrastructure:**
We have a Kubernetes-based infrastructure pivotal to our operations, where reliability and resilience are paramount. Recognizing the need for robust testing methodologies, we turned to Litmus Chaos Engineering to fortify our systems against potential failures and to ensure seamless operations even under adverse conditions.
### **Why do we use Litmus:**
Litmus emerged as our tool of choice due to its comprehensive suite of chaos engineering capabilities tailored specifically for Kubernetes environments. Its versatility in orchestrating controlled chaos experiments aligns perfectly with our commitment to enhancing system reliability while maintaining agility.
### **Use Case and Implementation:**
We have integrated Litmus Chaos Engineering into every stage of our development and deployment pipeline, from development and testing through staging and production. Using Litmus, we carefully craft and execute chaos experiments, observe how our infrastructure behaves under stress, and ensure it meets our predefined Service Level Objectives (SLOs) and Service Level Indicators (SLIs).
### **Achievements:**
Our journey with Litmus Chaos Engineering has been marked by significant milestones:
- Successful deployment of Chaos Center and Litmus Delegate, empowering us with centralized chaos management capabilities.
- Establishment of secure access to Chaos Center through HTTPS, coupled with domain customization for enhanced usability.
- Implementation of WAF ACL to restrict access to Chaos Center, ensuring secure interactions.
- Integration of Azure SSO for streamlined user management and authentication.
- Seamless connectivity between Chaos Center and target nodes, facilitating efficient chaos experimentation.
- Execution of numerous successful experiments, validating the resilience and scalability of our infrastructure.
### **Next Steps:**
As we continue to harness the power of Litmus Chaos Engineering, we remain committed to expanding our chaos engineering initiatives, further refining our chaos experiments, and continually enhancing the resilience of our Kubernetes infrastructure.

View File

@ -325,8 +325,12 @@ func CreateProject(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, userRequest.UserID)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
// checking if project name is empty
@ -456,8 +460,12 @@ func SendInvitation(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
// Validating member role
@ -558,8 +566,12 @@ func AcceptInvitation(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = validations.RbacValidator(c.MustGet("uid").(string), member.ProjectID,
@ -614,8 +626,12 @@ func DeclineInvitation(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = validations.RbacValidator(c.MustGet("uid").(string), member.ProjectID,
@ -684,8 +700,12 @@ func LeaveProject(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = validations.RbacValidator(c.MustGet("uid").(string), member.ProjectID,
@ -744,8 +764,12 @@ func RemoveInvitation(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = validations.RbacValidator(c.MustGet("uid").(string), member.ProjectID,
@ -824,8 +848,12 @@ func UpdateProjectName(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, c.MustGet("uid").(string))
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = validations.RbacValidator(c.MustGet("uid").(string),
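
Reviewer note: every handler hunk in this file applies the same fix — the initial-login check is pulled out of the `else if` so a server error returns immediately, and a pending first-time password change is surfaced via `ErrPasswordNotUpdated` instead of a silent return. Below is a minimal sketch of that guard as a hypothetical helper, assuming it lives in the same package as the handlers above so `CheckInitialLogin`, `presenter`, and `utils` resolve exactly as in the diff; it is illustrative, not the upstream code.

```go
// requireUpdatedPassword is a hypothetical helper capturing the guard pattern
// introduced by this commit. It returns false after writing a response when
// the request must not proceed.
func requireUpdatedPassword(c *gin.Context, service services.ApplicationService, uid string) bool {
	initialLogin, err := CheckInitialLogin(service, uid)
	if err != nil {
		// Server error: respond and stop, instead of falling through.
		c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
		return false
	}
	if initialLogin {
		// Default password still in place: report ErrPasswordNotUpdated explicitly.
		c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
		return false
	}
	return true
}
```

Each handler body would then reduce to `if !requireUpdatedPassword(c, service, uid) { return }` before its RBAC and business logic.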

View File

@ -136,13 +136,18 @@ func UpdateUser(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, uid)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
err = service.UpdateUser(&userRequest)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
return
}
c.JSON(http.StatusOK, gin.H{"message": "User details updated successfully"})
}
@ -554,8 +559,12 @@ func ResetPassword(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, uid)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
if userPasswordRequest.NewPassword != "" {
@ -610,8 +619,12 @@ func UpdateUserState(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, adminUser.ID)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
if entities.Role(userRole) != entities.RoleAdmin {
@ -689,8 +702,12 @@ func CreateApiToken(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, apiTokenRequest.UserID)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
// Checking if user exists
@ -785,8 +802,12 @@ func DeleteApiToken(service services.ApplicationService) gin.HandlerFunc {
initialLogin, err := CheckInitialLogin(service, deleteApiTokenRequest.UserID)
if err != nil {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrServerError))
} else if initialLogin {
return
}
if initialLogin {
c.JSON(utils.ErrorStatusCodes[utils.ErrServerError], presenter.CreateErrorResponse(utils.ErrPasswordNotUpdated))
return
}
token := deleteApiTokenRequest.Token

View File

@ -12,7 +12,6 @@ import (
fuzz "github.com/AdaLogics/go-fuzz-headers"
store "github.com/litmuschaos/litmus/chaoscenter/graphql/server/pkg/data-store"
"github.com/litmuschaos/litmus/chaoscenter/graphql/server/pkg/database/mongodb"
"github.com/stretchr/testify/mock"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
@ -47,39 +46,6 @@ func NewMockServices() *MockServices {
}
}
func FuzzProcessExperimentRunDelete(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
fuzzConsumer := fuzz.NewConsumer(data)
targetStruct := &struct {
Query bson.D
WorkflowRunID *string
ExperimentRun dbChaosExperimentRun.ChaosExperimentRun
Workflow dbChaosExperiment.ChaosExperimentRequest
Username string
StoreStateData *store.StateData
}{}
err := fuzzConsumer.GenerateStruct(targetStruct)
if err != nil {
return
}
mockServices := NewMockServices()
mockServices.MongodbOperator.On("Update", mock.Anything, mongodb.ChaosExperimentRunsCollection, mock.Anything, mock.Anything, mock.Anything).Return(&mongo.UpdateResult{}, nil).Once()
err = mockServices.ChaosExperimentRunService.ProcessExperimentRunDelete(
context.Background(),
targetStruct.Query,
targetStruct.WorkflowRunID,
targetStruct.ExperimentRun,
targetStruct.Workflow,
targetStruct.Username,
targetStruct.StoreStateData,
)
if err != nil {
t.Errorf("ProcessExperimentRunDelete() error = %v", err)
}
})
}
func FuzzProcessExperimentRunStop(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
fuzzConsumer := fuzz.NewConsumer(data)

View File

@ -1222,7 +1222,7 @@ func (c *ChaosExperimentRunHandler) ChaosExperimentRunEvent(event model.Experime
err = c.chaosExperimentOperator.UpdateChaosExperiment(sessionContext, filter, update)
if err != nil {
logrus.Error("Failed to update experiment collection")
logrus.WithError(err).Error("Failed to update experiment collection")
return err
}
} else if experimentRunCount > 0 {
@ -1257,7 +1257,7 @@ func (c *ChaosExperimentRunHandler) ChaosExperimentRunEvent(event model.Experime
err = c.chaosExperimentOperator.UpdateChaosExperiment(sessionContext, filter, update)
if err != nil {
logrus.Error("Failed to update experiment collection")
logrus.WithError(err).Error("Failed to update experiment collection")
return err
}
}
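
Reviewer note: the two logging hunks above switch from a bare message to `logrus.WithError(err)`, which attaches the underlying error as a structured `error` field on the log entry. A small self-contained illustration of the difference:

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("write conflict")

	// Before: the message alone, with no hint of the cause.
	logrus.Error("Failed to update experiment collection")

	// After: the same message plus an "error" field carrying err,
	// so log aggregators can index and display the cause.
	logrus.WithError(err).Error("Failed to update experiment collection")
}
```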

View File

@ -41,51 +41,6 @@ func FuzzGetChartsPath(f *testing.F) {
})
}
func FuzzReadExperimentFile(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte, filename string) {
fuzzConsumer := fuzz.NewConsumer(data)
// Create a temporary directory
tmpDir, err := os.MkdirTemp("", "*-fuzztest")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir) // clean up
// Ensure the filename is valid and unique
safeFilename := filepath.Clean(filepath.Base(filename))
if isInvalidFilename(safeFilename) {
safeFilename = "test.yaml"
}
filePath := filepath.Join(tmpDir, safeFilename)
content := ChaosChart{}
err = fuzzConsumer.GenerateStruct(&content)
if err != nil {
return
}
jsonContent, _ := json.Marshal(content)
err = os.WriteFile(filePath, jsonContent, 0644)
if err != nil {
t.Fatal(err)
}
_, err = ReadExperimentFile(filePath)
if err != nil && !isInvalidYAML(jsonContent) {
t.Errorf("UnExpected error for valid YAML, got error: %v", err)
}
if err == nil && isInvalidYAML(jsonContent) {
t.Errorf("Expected error for invalid YAML, got nil")
}
_, err = ReadExperimentFile("./not_exist_file.yaml")
if err == nil {
t.Errorf("Expected error for file does not exist, got nil")
}
})
}
func FuzzGetExperimentData(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte, filename string) {
fuzzConsumer := fuzz.NewConsumer(data)

View File

@ -6,6 +6,7 @@ import (
"strings"
"github.com/litmuschaos/litmus/chaoscenter/graphql/server/graph/model"
"github.com/litmuschaos/litmus/chaoscenter/graphql/server/utils"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing"
@ -284,7 +285,7 @@ func (c ChaosHubConfig) generateAuthMethod() (transport.AuthMethod, error) {
var auth transport.AuthMethod
if c.AuthType == model.AuthTypeToken {
auth = &http.BasicAuth{
Username: "litmus", // this can be anything except an empty string
Username: utils.Config.GitUsername, // must be a non-empty string or 'x-token-auth' for Bitbucket
Password: *c.Token,
}
} else if c.AuthType == model.AuthTypeBasic {

View File

@ -11,6 +11,8 @@ import (
"strings"
"time"
"github.com/litmuschaos/litmus/chaoscenter/graphql/server/utils"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
@ -184,7 +186,7 @@ func (c GitConfig) getAuthMethod() (transport.AuthMethod, error) {
case model.AuthTypeToken:
return &http.BasicAuth{
Username: "litmus", // this can be anything except an empty string
Username: utils.Config.GitUsername, // must be a non-empty string or 'x-token-auth' for Bitbucket
Password: *c.Token,
}, nil
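
Reviewer note: both ChaosHub Git clients above replace the hardcoded `litmus` username with `utils.Config.GitUsername`, which matters for providers such as Bitbucket that expect `x-token-auth` alongside an access token. A minimal go-git sketch of token-based cloning with a configurable username follows; the environment variable names, clone path, and repository URL are placeholders, not the server's actual wiring.

```go
package main

import (
	"os"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/transport/http"
)

func main() {
	username := os.Getenv("GIT_USERNAME") // e.g. "litmus", or "x-token-auth" for Bitbucket
	token := os.Getenv("GIT_TOKEN")

	// Token auth over HTTPS: the username must be a non-empty string, but most
	// providers ignore its value when a personal access token is supplied.
	_, err := git.PlainClone("/tmp/hub", false, &git.CloneOptions{
		URL: "https://github.com/litmuschaos/chaos-charts", // placeholder hub URL
		Auth: &http.BasicAuth{
			Username: username,
			Password: token,
		},
	})
	if err != nil {
		panic(err)
	}
}
```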

View File

@ -30,6 +30,7 @@ type Configuration struct {
GrpcPort string `split_words:"true" default:"8000"`
InfraCompatibleVersions string `required:"true" split_words:"true"`
DefaultHubGitURL string `required:"true" default:"https://github.com/litmuschaos/chaos-charts"`
GitUsername string `required:"true" split_words:"true" default:"litmus"`
DefaultHubBranchName string `required:"true" split_words:"true"`
CustomChaosHubPath string `split_words:"true" default:"/tmp/"`
DefaultChaosHubPath string `split_words:"true" default:"/tmp/default/"`
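
Reviewer note: the new `GitUsername` field carries the same struct tags as its neighbours (`required`, `split_words`, `default`), so it is populated from a `GIT_USERNAME` environment variable and falls back to `litmus`. A short sketch of how such a struct is typically loaded, assuming the kelseyhightower/envconfig library implied by those tags; the struct here is a trimmed stand-in, not the full Configuration type.

```go
package main

import (
	"fmt"

	"github.com/kelseyhightower/envconfig"
)

// Trimmed stand-in for the server's Configuration struct.
type Configuration struct {
	GitUsername          string `required:"true" split_words:"true" default:"litmus"`
	DefaultHubBranchName string `required:"true" split_words:"true"`
}

func main() {
	var cfg Configuration
	// Reads GIT_USERNAME and DEFAULT_HUB_BRANCH_NAME from the environment;
	// GitUsername falls back to "litmus" when GIT_USERNAME is unset.
	if err := envconfig.Process("", &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.GitUsername, cfg.DefaultHubBranchName)
}
```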

View File

@ -10,7 +10,12 @@ export default function blankCanvasTemplate(
switch (infrastructureType) {
case InfrastructureType.KUBERNETES:
return kubernetesBlankCanvasTemplate(experimentName, experiment?.chaosInfrastructure?.namespace);
return kubernetesBlankCanvasTemplate(
experimentName,
experiment?.chaosInfrastructure?.namespace,
undefined,
experiment?.imageRegistry,
);
}
}

View File

@ -215,10 +215,10 @@ const ExperimentDashboardV2Table = ({
);
case ExperimentRunStatus.QUEUED:
return (
<RunExperimentButton
buttonProps={{ disabled: true }}
<StopExperimentButton
experimentID={data.experimentID}
refetchExperiments={refetchExperiments}
infrastructureType={InfrastructureType.KUBERNETES}
/>
);
default:

View File

@ -33,7 +33,9 @@ export const MenuCell = ({
onError: error => showError(error.message)
});
const lastExperimentRunStatus = data.recentExecutions[0]?.experimentRunStatus;
const isDeleteButtonEnabled = !data.recentExecutions.some(
execution => execution?.experimentRunStatus === ExperimentRunStatus.RUNNING
);
// <!-- confirmation dialog boxes -->
const confirmationDialogProps = {
@ -108,10 +110,7 @@ export const MenuCell = ({
icon="main-trash"
text={getString('deleteExperiment')}
onClick={openDeleteDialog}
disabled={
lastExperimentRunStatus === ExperimentRunStatus.RUNNING ||
lastExperimentRunStatus === ExperimentRunStatus.QUEUED
}
disabled={isDeleteButtonEnabled == false}
permission={PermissionGroup.OWNER}
/>
</Menu>

View File

@ -23,6 +23,8 @@ import { ChaosInfrastructureReferenceFieldProps, StudioErrorState, StudioTabs }
import experimentYamlService from 'services/experiment';
import KubernetesChaosInfrastructureReferenceFieldController from '@controllers/KubernetesChaosInfrastructureReferenceField';
import { InfrastructureType } from '@api/entities';
import { getImageRegistry } from '@api/core/ImageRegistry';
import { getScope } from '@utils';
import css from './StudioOverview.module.scss';
interface StudioOverviewViewProps {
@ -52,6 +54,20 @@ export default function StudioOverviewView({
const [currentExperiment, setCurrentExperiment] = React.useState<ExperimentMetadata | undefined>();
const scope = getScope();
// Fetch the image registry data using Apollo's useQuery hook
const { data: getImageRegistryData, loading: imageRegistryLoading } = getImageRegistry({
projectID: scope.projectID,
});
const imageRegistry = getImageRegistryData?.getImageRegistry ? {
name: getImageRegistryData.getImageRegistry.imageRegistryInfo.imageRegistryName,
repo: getImageRegistryData.getImageRegistry.imageRegistryInfo.imageRepoName,
secret: getImageRegistryData.getImageRegistry.imageRegistryInfo.secretName,
}
: undefined;
React.useEffect(() => {
experimentHandler?.getExperiment(experimentKey).then(experiment => {
delete experiment?.manifest;
@ -85,6 +101,9 @@ export default function StudioOverviewView({
})
})}
onSubmit={values => {
values.imageRegistry = imageRegistry
if (values.chaosInfrastructure.namespace === undefined) {
delete values.chaosInfrastructure.namespace;
}
@ -144,7 +163,13 @@ export default function StudioOverviewView({
text={getString('cancel')}
onClick={openDiscardDialog}
/>
<Button type="submit" intent="primary" text={getString('next')} rightIcon="chevron-right" />
<Button
type="submit"
intent="primary"
text={getString('next')}
rightIcon="chevron-right"
disabled={imageRegistryLoading}
/>
</Layout.Horizontal>
</Form>
</Layout.Vertical>

View File

@ -231,7 +231,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.10.x"
value: "v3.10.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -257,7 +257,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.10.x"
value: "v3.10.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -248,7 +248,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.10.x"
value: "v3.10.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -231,7 +231,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.11.x"
value: "v3.11.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -257,7 +257,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.11.x"
value: "v3.11.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -248,7 +248,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.11.x"
value: "v3.11.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -0,0 +1,414 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.12.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 default_server;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-auth-server-service:9003/";
}
location /api/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.12.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.12.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.12.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.12.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.12.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.12.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.12.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.12.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.12.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.12.0"]'
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "false"
- name: TLS_CERT_PATH
value: ""
- name: TLS_KEY_PATH
value: ""
- name: CA_CERT_TLS_PATH
value: ""
- name: REST_PORT
value: "8080"
- name: GRPC_PORT
value: "8000"
ports:
- containerPort: 8080
- containerPort: 8000
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server
port: 9002
targetPort: 8080
- name: graphql-rpc-server
port: 8000
targetPort: 8000
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
automountServiceAccountToken: false
containers:
- name: auth-server
image: litmuschaos/litmusportal-auth-server:3.12.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "false"
- name: TLS_CERT_PATH
value: ""
- name: TLS_KEY_PATH
value: ""
- name: CA_CERT_TLS_PATH
value: ""
- name: REST_PORT
value: "3000"
- name: GRPC_PORT
value: "3030"
ports:
- containerPort: 3000
- containerPort: 3030
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server
port: 9003
targetPort: 3000
- name: auth-rpc-server
port: 3030
targetPort: 3030
selector:
component: litmusportal-auth-server

View File

@ -0,0 +1,447 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.12.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 ssl;
ssl_certificate /etc/tls/tls.crt;
ssl_certificate_key /etc/tls/tls.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_client_certificate /etc/tls/ca.crt;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-auth-server-service:9005/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
location /api/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-server-service:9004/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.12.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- mountPath: /etc/tls
name: tls-secret
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
- name: tls-secret
secret:
secretName: tls-secret
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
- name: tls-secret
secret:
secretName: tls-secret
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.12.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
- mountPath: /etc/tls
name: tls-secret
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.12.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.12.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.12.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.12.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.12.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.12.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.12.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.12.0"]'
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?"
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "8081"
- name: GRPC_PORT
value: "8001"
ports:
- containerPort: 8081
- containerPort: 8001
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server-https
port: 9004
targetPort: 8081
- name: graphql-rpc-server-https
port: 8001
targetPort: 8001
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
volumes:
- name: tls-secret
secret:
secretName: tls-secret
automountServiceAccountToken: false
containers:
- name: auth-server
volumeMounts:
- mountPath: /etc/tls
name: tls-secret
image: litmuschaos/litmusportal-auth-server:3.12.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?" # your IP needs to be added here
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "3001"
- name: GRPC_PORT
value: "3031"
ports:
- containerPort: 3001
- containerPort: 3031
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server-https
port: 9005
targetPort: 3001
- name: auth-rpc-server-https
port: 3031
targetPort: 3031
selector:
component: litmusportal-auth-server

File diff suppressed because it is too large

View File

@ -0,0 +1,420 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.12.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 ssl;
ssl_certificate /etc/tls/tls.crt;
ssl_certificate_key /etc/tls/tls.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_client_certificate /etc/tls/ca.crt;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-auth-server-service:9005/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
location /api/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-server-service:9004/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.12.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- mountPath: /etc/tls
name: tls-secret
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
- name: tls-secret
secret:
secretName: tls-secret
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
- name: tls-secret
secret:
secretName: tls-secret
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.12.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
- mountPath: /etc/tls
name: tls-secret
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.12.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.12.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.12.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.12.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.12.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.12.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.12.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.12.0"]'
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "8081"
- name: GRPC_PORT
value: "8001"
ports:
- containerPort: 8081
- containerPort: 8001
imagePullPolicy: Always
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server-https
port: 9004
targetPort: 8081
- name: graphql-rpc-server-https
port: 8001
targetPort: 8001
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
volumes:
- name: tls-secret
secret:
secretName: tls-secret
automountServiceAccountToken: false
containers:
- name: auth-server
volumeMounts:
- mountPath: /etc/tls
name: tls-secret
image: litmuschaos/litmusportal-auth-server:3.12.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?" # your IP needs to be added here
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "3001"
- name: GRPC_PORT
value: "3031"
ports:
- containerPort: 3001
- containerPort: 3031
imagePullPolicy: Always
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server-https
port: 9005
targetPort: 3001
- name: auth-rpc-server-https
port: 3031
targetPort: 3031
selector:
component: litmusportal-auth-server

View File

@ -0,0 +1,414 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.13.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 default_server;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-auth-server-service:9003/";
}
location /api/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.13.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.13.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.13.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.13.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.13.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.13.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.13.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.13.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.13.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.13.0"]'
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "false"
- name: TLS_CERT_PATH
value: ""
- name: TLS_KEY_PATH
value: ""
- name: CA_CERT_TLS_PATH
value: ""
- name: REST_PORT
value: "8080"
- name: GRPC_PORT
value: "8000"
ports:
- containerPort: 8080
- containerPort: 8000
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server
port: 9002
targetPort: 8080
- name: graphql-rpc-server
port: 8000
targetPort: 8000
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
automountServiceAccountToken: false
containers:
- name: auth-server
image: litmuschaos/litmusportal-auth-server:3.13.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "false"
- name: TLS_CERT_PATH
value: ""
- name: TLS_KEY_PATH
value: ""
- name: CA_CERT_TLS_PATH
value: ""
- name: REST_PORT
value: "3000"
- name: GRPC_PORT
value: "3030"
ports:
- containerPort: 3000
- containerPort: 3030
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server
port: 9003
targetPort: 3000
- name: auth-rpc-server
port: 3030
targetPort: 3030
selector:
component: litmusportal-auth-server

View File

@ -0,0 +1,447 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.13.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 ssl;
ssl_certificate /etc/tls/tls.crt;
ssl_certificate_key /etc/tls/tls.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_client_certificate /etc/tls/ca.crt;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-auth-server-service:9005/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
location /api/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-server-service:9004/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.13.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- mountPath: /etc/tls
name: tls-secret
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
- name: tls-secret
secret:
secretName: tls-secret
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
- name: tls-secret
secret:
secretName: tls-secret
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.13.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
- mountPath: /etc/tls
name: tls-secret
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.13.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.13.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.13.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.13.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.13.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.13.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.13.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.13.0"]'
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?"
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "8081"
- name: GRPC_PORT
value: "8001"
ports:
- containerPort: 8081
- containerPort: 8001
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server-https
port: 9004
targetPort: 8081
- name: graphql-rpc-server-https
port: 8001
targetPort: 8001
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
volumes:
- name: tls-secret
secret:
secretName: tls-secret
automountServiceAccountToken: false
containers:
- name: auth-server
volumeMounts:
- mountPath: /etc/tls
name: tls-secret
image: litmuschaos/litmusportal-auth-server:3.13.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?" # your IP needs to be added here
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "3001"
- name: GRPC_PORT
value: "3031"
ports:
- containerPort: 3001
- containerPort: 3031
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server-https
port: 9005
targetPort: 3001
- name: auth-rpc-server-https
port: 3031
targetPort: 3031
selector:
component: litmusportal-auth-server

File diff suppressed because it is too large

View File

@ -0,0 +1,420 @@
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
DB_USER: "root"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
DB_SERVER: mongodb://my-release-mongodb-0.my-release-mongodb-headless:27017,my-release-mongodb-1.my-release-mongodb-headless:27017,my-release-mongodb-2.my-release-mongodb-headless:27017/admin
VERSION: "3.13.0"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
nginx.conf: |
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
listen 8185 ssl;
ssl_certificate /etc/tls/tls.crt;
ssl_certificate_key /etc/tls/tls.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_client_certificate /etc/tls/ca.crt;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
root /opt/chaos;
location /health {
return 200;
}
location / {
proxy_http_version 1.1;
add_header Cache-Control "no-cache";
try_files $uri /index.html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /auth/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-auth-server-service:9005/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
location /api/ {
proxy_ssl_verify off;
proxy_ssl_session_reuse on;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "https://litmusportal-server-service:9004/";
proxy_ssl_certificate /etc/tls/tls.crt;
proxy_ssl_certificate_key /etc/tls/tls.key;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.13.0
# securityContext:
# runAsUser: 2000
# allowPrivilegeEscalation: false
# runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8185
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- mountPath: /etc/tls
name: tls-secret
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
- name: tls-secret
secret:
secretName: tls-secret
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8185
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
automountServiceAccountToken: false
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
- name: tls-secret
secret:
secretName: tls-secret
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.13.0
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
- mountPath: /etc/tls
name: tls-secret
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: ENABLE_GQL_INTROSPECTION
value: "false"
- name: INFRA_DEPLOYMENTS
value: '["app=chaos-exporter", "name=chaos-operator", "app=workflow-controller", "app=event-tracker"]'
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.13.0"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.13.0"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.13.0"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.13.0"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.13.0"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "v3.13.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: "3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.13.0"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INFRA_COMPATIBLE_VERSIONS
value: '["3.13.0"]'
- name: ALLOWED_ORIGINS
value: ".*" # e.g. ^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "8081"
- name: GRPC_PORT
value: "8001"
ports:
- containerPort: 8081
- containerPort: 8001
imagePullPolicy: Always
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server-https
port: 9004
targetPort: 8081
- name: graphql-rpc-server-https
port: 8001
targetPort: 8001
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
volumes:
- name: tls-secret
secret:
secretName: tls-secret
automountServiceAccountToken: false
containers:
- name: auth-server
volumeMounts:
- mountPath: /etc/tls
name: tls-secret
image: litmuschaos/litmusportal-auth-server:3.13.0
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: "8000"
- name: ALLOWED_ORIGINS
value: "^(http://|https://|)litmuschaos.io(:[0-9]+|)?,^(http://|https://|)litmusportal-server-service(:[0-9]+|)?" # your IP needs to be added here
- name: ENABLE_INTERNAL_TLS
value: "true"
- name: TLS_CERT_PATH
value: "/etc/tls/tls.crt"
- name: TLS_KEY_PATH
value: "/etc/tls/tls.key"
- name: CA_CERT_TLS_PATH
value: "/etc/tls/ca.crt"
- name: REST_PORT
value: "3001"
- name: GRPC_PORT
value: "3031"
ports:
- containerPort: 3001
- containerPort: 3031
imagePullPolicy: Always
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
policyTypes:
- Ingress
podSelector:
matchLabels:
component: litmusportal-auth-server
ingress:
- from:
- podSelector:
matchLabels:
component: litmusportal-frontend
- from:
- podSelector:
matchLabels:
component: litmusportal-server
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server-https
port: 9005
targetPort: 3001
- name: auth-rpc-server-https
port: 3031
targetPort: 3031
selector:
component: litmusportal-auth-server

View File

@ -231,7 +231,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -257,7 +257,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -248,7 +248,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -231,7 +231,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -257,7 +257,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -248,7 +248,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -231,7 +231,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -257,7 +257,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

View File

@ -248,7 +248,7 @@ spec:
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: DEFAULT_HUB_BRANCH_NAME
value: "3.9.x"
value: "v3.9.x"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -14,6 +14,6 @@
<tr>
<td>GraphQL Server</td>
<td>Contains GraphQL Server API documentation</td>
<td><a target="_" href="/litmus/graphql/v2.0.0/api.html">GraphQL Server</a></td>
<td><a target="_" href="/litmus/graphql/v3.11.0/api.html">GraphQL Server</a></td>
</tr>
</table>

File diff suppressed because it is too large

View File

@ -0,0 +1,33 @@
spectaql:
targetDir: ./mkdocs/docs/graphql/v3.10.0
logoFile: ./mkdocs/docs/graphql/logo.png
faviconFile: ./mkdocs/docs/graphql/logo.png
displayAllServers: true
themeDir: ./mkdocs/docs/graphql/v3.10.0/custom-theme
introspection:
removeTrailingPeriodFromDescriptions: false
schemaFile: ./chaoscenter/graphql/definitions/shared/*.graphqls
queryNameStrategy: capitalizeFirst
fieldExpansionDepth: 2
spectaqlDirective:
enable: true
extensions:
graphqlScalarExamples: true
info:
title: ChaosCenter API Documentation
description: Litmus Portal provides a console and UI experience for managing, monitoring, and observing events around chaos workflows. Chaos workflows consist of a sequence of experiments run together to introduce some kind of fault into an application or the Kubernetes platform.
x-introItems:
- title: Common Error Response
file: ./mkdocs/docs/graphql/v3.10.0/error_response_guide.md
servers:
- url: http://localhost:8080
description: Dev
- url: http://localhost:8080/query
description: Prod
production: true

View File

@ -0,0 +1,24 @@
$line-height-heading: 1.2;
$font-size-large-heading: 1.7105263158rem;
$font-weight-large-heading: 700;
$content-padding: 20px;
$text-color: #535b60;
$text-weight: 400;
$text-size: 1.5789473684rem;
#spectaql {
h2 {
color: $text-color;
font-weight: $text-weight;
font-size: $text-size;
}
.doc-heading {
line-height: $line-height-heading;
font-size: $font-size-large-heading;
font-weight: $font-weight-large-heading;
color: #535b60
}
}

View File

@ -0,0 +1,29 @@
All error responses follow the structure outlined below.
```json
{
"errors": [
{
"message": "Error message",
"path": [
"Request path"
]
}
],
"data": {}
}
```
### Field Descriptions:
- **errors**: <br>
An array of error objects. Multiple errors can occur, and each error contains a message and a path field. <br>
**Type: `Array`** <br><br>
- **message**: <br>
A description of the error. <br>
**Type: `String`** <br><br>
- **path**: <br>
Indicates the GraphQL or API operation path where the error occurred. It helps in identifying which operation or resolver triggered the error. <br>
**Type: `Array of Strings`** <br><br>
- **data**: <br>
This field contains the data returned from the request. In the event of an error, the field corresponding to the failed operation will be null, while other successful operations (if any) will still return valid data. <br>
**Type: `Object`** <br>

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

View File

@ -0,0 +1 @@
function scrollSpy(){var l=5,e=document.querySelector("html"),c=(e&&(e=window.getComputedStyle(e).scrollPaddingTop)&&"string"==typeof e&&"auto"!==e&&e.endsWith("px")&&(l+=parseInt(e.split("px")[0])),"nav-scroll-active"),i=null,d=[];function t(){i=null;var e=document.querySelectorAll("[data-traverse-target]");Array.prototype.forEach.call(e,function(e){d.push({id:e.id,top:e.offsetTop})})}var n=debounce(function(){t(),o()},500),o=debounce(function(){var e,t,n,o,r=(e=>{for(var t=e+l,n=0;n<d.length;n++){var o=d[n+1];if(t>=d[n].top&&(!o||t<o.top))return n}return-1})(document.documentElement.scrollTop||document.body.scrollTop);r!==i&&(r=d[i=r],e=document.querySelector("."+c),t=s(r=r?document.querySelector('#nav a[href="#'+r.id+'"]'):null),n=(o=s(e))!==t,o&&n&&u(o,!1),t&&n&&u(t,!0),r&&(r.classList.add(c),r.scrollIntoViewIfNeeded?r.scrollIntoViewIfNeeded():!r.scrollIntoView||0<=(o=(o=r).getBoundingClientRect()).top&&0<=o.left&&o.bottom<=(window.innerHeight||document.documentElement.clientHeight)&&o.right<=(window.innerWidth||document.documentElement.clientWidth)||r.scrollIntoView({block:"center",inline:"start"})),e)&&e.classList.remove(c)},100);function u(e,t){for(var n=t?"add":"remove";e;)e.classList[n]("nav-scroll-expand"),e=s(e.parentNode)}function s(e){return e&&e.closest?e.closest(".nav-group-section"):null}setTimeout(function(){t(),o(),window.addEventListener("scroll",o),window.addEventListener("resize",n)},300)}function toggleMenu(){var t="drawer-open",e=document.querySelector("#spectaql .sidebar-open-button"),n=document.querySelector("#spectaql #sidebar .close-button"),o=document.querySelector("#spectaql .drawer-overlay");function r(){var e=document.querySelector("#spectaql #page");e.classList.contains(t)?e.classList.remove(t):e.classList.add(t)}e.addEventListener("click",r),n.addEventListener("click",r),o.addEventListener("click",r)}function debounce(e,t){var n=null;return function(){clearTimeout(n),n=setTimeout(function(){e.apply(null)},t)}}window.addEventListener("DOMContentLoaded",e=>{toggleMenu(),scrollSpy()});

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,33 @@
spectaql:
targetDir: ./mkdocs/docs/graphql/v3.10.x
logoFile: ./mkdocs/docs/graphql/logo.png
faviconFile: ./mkdocs/docs/graphql/logo.png
displayAllServers: true
themeDir: ./mkdocs/docs/graphql/v3.10.x/custom-theme
introspection:
removeTrailingPeriodFromDescriptions: false
schemaFile: ./chaoscenter/graphql/definitions/shared/*.graphqls
queryNameStrategy: capitalizeFirst
fieldExpansionDepth: 2
spectaqlDirective:
enable: true
extensions:
graphqlScalarExamples: true
info:
title: ChaosCenter API Documentation
description: Litmus Portal provides console and UI experience for managing, monitoring, and events around chaos workflows. Chaos workflows consist of a sequence of experiments run together to achieve the objective of introducing some kind of fault into an application or the Kubernetes platform.
x-introItems:
- title: Common Error Response
file: ./mkdocs/docs/graphql/v3.10.x/error_response_guide.md
servers:
- url: http://localhost:8080
description: Dev
- url: http://localhost:8080/query
description: Prod
production: true

View File

@ -0,0 +1,24 @@
$line-height-heading: 1.2;
$font-size-large-heading: 1.7105263158rem;
$font-weight-large-heading: 700;
$content-padding: 20px;
$text-color: #535b60;
$text-weight: 400;
$text-size: 1.5789473684rem;
#spectaql {
h2 {
color: $text-color;
font-weight: $text-weight;
font-size: $text-size;
}
.doc-heading {
line-height: $line-height-heading;
font-size: $font-size-large-heading;
font-weight: $font-weight-large-heading;
color: #535b60
}
}

View File

@ -0,0 +1,29 @@
All error responses follow the structure outlined below.
```json
{
"errors": [
{
"message": "Error message",
"path": [
"Request path"
]
}
],
"data": {}
}
```
### Field Descriptions:
- **errors**: <br>
An array of error objects. Multiple errors can occur, and each error contains a message and a path field. <br>
**Type: `Array`** <br><br>
- **message**: <br>
A description of the error. <br>
**Type: `String`** <br><br>
- **path**: <br>
Indicates the GraphQL or API operation path where the error occurred. It helps in identifying which operation or resolver triggered the error. <br>
**Type: `Array of Strings`** <br><br>
- **data**: <br>
This field contains the data returned from the request. In the event of an error, the field corresponding to the failed operation will be null, while other successful operations (if any) will still return valid data. <br>
**Type: `Object`** <br>

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

View File

@ -0,0 +1 @@
function scrollSpy(){var l=5,e=document.querySelector("html"),c=(e&&(e=window.getComputedStyle(e).scrollPaddingTop)&&"string"==typeof e&&"auto"!==e&&e.endsWith("px")&&(l+=parseInt(e.split("px")[0])),"nav-scroll-active"),i=null,d=[];function t(){i=null;var e=document.querySelectorAll("[data-traverse-target]");Array.prototype.forEach.call(e,function(e){d.push({id:e.id,top:e.offsetTop})})}var n=debounce(function(){t(),o()},500),o=debounce(function(){var e,t,n,o,r=(e=>{for(var t=e+l,n=0;n<d.length;n++){var o=d[n+1];if(t>=d[n].top&&(!o||t<o.top))return n}return-1})(document.documentElement.scrollTop||document.body.scrollTop);r!==i&&(r=d[i=r],e=document.querySelector("."+c),t=s(r=r?document.querySelector('#nav a[href="#'+r.id+'"]'):null),n=(o=s(e))!==t,o&&n&&u(o,!1),t&&n&&u(t,!0),r&&(r.classList.add(c),r.scrollIntoViewIfNeeded?r.scrollIntoViewIfNeeded():!r.scrollIntoView||0<=(o=(o=r).getBoundingClientRect()).top&&0<=o.left&&o.bottom<=(window.innerHeight||document.documentElement.clientHeight)&&o.right<=(window.innerWidth||document.documentElement.clientWidth)||r.scrollIntoView({block:"center",inline:"start"})),e)&&e.classList.remove(c)},100);function u(e,t){for(var n=t?"add":"remove";e;)e.classList[n]("nav-scroll-expand"),e=s(e.parentNode)}function s(e){return e&&e.closest?e.closest(".nav-group-section"):null}setTimeout(function(){t(),o(),window.addEventListener("scroll",o),window.addEventListener("resize",n)},300)}function toggleMenu(){var t="drawer-open",e=document.querySelector("#spectaql .sidebar-open-button"),n=document.querySelector("#spectaql #sidebar .close-button"),o=document.querySelector("#spectaql .drawer-overlay");function r(){var e=document.querySelector("#spectaql #page");e.classList.contains(t)?e.classList.remove(t):e.classList.add(t)}e.addEventListener("click",r),n.addEventListener("click",r),o.addEventListener("click",r)}function debounce(e,t){var n=null;return function(){clearTimeout(n),n=setTimeout(function(){e.apply(null)},t)}}window.addEventListener("DOMContentLoaded",e=>{toggleMenu(),scrollSpy()});

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,33 @@
spectaql:
targetDir: ./mkdocs/docs/graphql/v3.11.0
logoFile: ./mkdocs/docs/graphql/logo.png
faviconFile: ./mkdocs/docs/graphql/logo.png
displayAllServers: true
themeDir: ./mkdocs/docs/graphql/v3.11.0/custom-theme
introspection:
removeTrailingPeriodFromDescriptions: false
schemaFile: ./chaoscenter/graphql/definitions/shared/*.graphqls
queryNameStrategy: capitalizeFirst
fieldExpansionDepth: 2
spectaqlDirective:
enable: true
extensions:
graphqlScalarExamples: true
info:
title: ChaosCenter API Documentation
description: Litmus Portal provides console and UI experience for managing, monitoring, and events around chaos workflows. Chaos workflows consist of a sequence of experiments run together to achieve the objective of introducing some kind of fault into an application or the Kubernetes platform.
x-introItems:
- title: Common Error Response
file: ./mkdocs/docs/graphql/v3.11.0/error_response_guide.md
servers:
- url: http://localhost:8080
description: Dev
- url: http://localhost:8080/query
description: Prod
production: true

View File

@ -0,0 +1,24 @@
$line-height-heading: 1.2;
$font-size-large-heading: 1.7105263158rem;
$font-weight-large-heading: 700;
$content-padding: 20px;
$text-color: #535b60;
$text-weight: 400;
$text-size: 1.5789473684rem;
#spectaql {
h2 {
color: $text-color;
font-weight: $text-weight;
font-size: $text-size;
}
.doc-heading {
line-height: $line-height-heading;
font-size: $font-size-large-heading;
font-weight: $font-weight-large-heading;
color: #535b60
}
}

View File

@ -0,0 +1,29 @@
All error responses follow the structure outlined below.
```json
{
"errors": [
{
"message": "Error message",
"path": [
"Request path"
]
}
],
"data": {}
}
```
### Field Descriptions:
- **errors**: <br>
An array of error objects. Multiple errors can occur, and each error contains a message and a path field. <br>
**Type: `Array`** <br><br>
- **message**: <br>
A description of the error. <br>
**Type: `String`** <br><br>
- **path**: <br>
Indicates the GraphQL or API operation path where the error occurred. It helps in identifying which operation or resolver triggered the error. <br>
**Type: `Array of Strings`** <br><br>
- **data**: <br>
This field contains the data returned from the request. In the event of an error, the field corresponding to the failed operation will be null, while other successful operations (if any) will still return valid data. <br>
**Type: `Object`** <br>

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

View File

@ -0,0 +1 @@
function scrollSpy(){var l=5,e=document.querySelector("html"),c=(e&&(e=window.getComputedStyle(e).scrollPaddingTop)&&"string"==typeof e&&"auto"!==e&&e.endsWith("px")&&(l+=parseInt(e.split("px")[0])),"nav-scroll-active"),i=null,d=[];function t(){i=null;var e=document.querySelectorAll("[data-traverse-target]");Array.prototype.forEach.call(e,function(e){d.push({id:e.id,top:e.offsetTop})})}var n=debounce(function(){t(),o()},500),o=debounce(function(){var e,t,n,o,r=(e=>{for(var t=e+l,n=0;n<d.length;n++){var o=d[n+1];if(t>=d[n].top&&(!o||t<o.top))return n}return-1})(document.documentElement.scrollTop||document.body.scrollTop);r!==i&&(r=d[i=r],e=document.querySelector("."+c),t=s(r=r?document.querySelector('#nav a[href="#'+r.id+'"]'):null),n=(o=s(e))!==t,o&&n&&u(o,!1),t&&n&&u(t,!0),r&&(r.classList.add(c),r.scrollIntoViewIfNeeded?r.scrollIntoViewIfNeeded():!r.scrollIntoView||0<=(o=(o=r).getBoundingClientRect()).top&&0<=o.left&&o.bottom<=(window.innerHeight||document.documentElement.clientHeight)&&o.right<=(window.innerWidth||document.documentElement.clientWidth)||r.scrollIntoView({block:"center",inline:"start"})),e)&&e.classList.remove(c)},100);function u(e,t){for(var n=t?"add":"remove";e;)e.classList[n]("nav-scroll-expand"),e=s(e.parentNode)}function s(e){return e&&e.closest?e.closest(".nav-group-section"):null}setTimeout(function(){t(),o(),window.addEventListener("scroll",o),window.addEventListener("resize",n)},300)}function toggleMenu(){var t="drawer-open",e=document.querySelector("#spectaql .sidebar-open-button"),n=document.querySelector("#spectaql #sidebar .close-button"),o=document.querySelector("#spectaql .drawer-overlay");function r(){var e=document.querySelector("#spectaql #page");e.classList.contains(t)?e.classList.remove(t):e.classList.add(t)}e.addEventListener("click",r),n.addEventListener("click",r),o.addEventListener("click",r)}function debounce(e,t){var n=null;return function(){clearTimeout(n),n=setTimeout(function(){e.apply(null)},t)}}window.addEventListener("DOMContentLoaded",e=>{toggleMenu(),scrollSpy()});

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,33 @@
spectaql:
targetDir: ./mkdocs/docs/graphql/v3.11.x
logoFile: ./mkdocs/docs/graphql/logo.png
faviconFile: ./mkdocs/docs/graphql/logo.png
displayAllServers: true
themeDir: ./mkdocs/docs/graphql/v3.11.x/custom-theme
introspection:
removeTrailingPeriodFromDescriptions: false
schemaFile: ./chaoscenter/graphql/definitions/shared/*.graphqls
queryNameStrategy: capitalizeFirst
fieldExpansionDepth: 2
spectaqlDirective:
enable: true
extensions:
graphqlScalarExamples: true
info:
title: ChaosCenter API Documentation
description: Litmus Portal provides console and UI experience for managing, monitoring, and events around chaos workflows. Chaos workflows consist of a sequence of experiments run together to achieve the objective of introducing some kind of fault into an application or the Kubernetes platform.
x-introItems:
- title: Common Error Response
file: ./mkdocs/docs/graphql/v3.11.x/error_response_guide.md
servers:
- url: http://localhost:8080
description: Dev
- url: http://localhost:8080/query
description: Prod
production: true

View File

@ -0,0 +1,24 @@
$line-height-heading: 1.2;
$font-size-large-heading: 1.7105263158rem;
$font-weight-large-heading: 700;
$content-padding: 20px;
$text-color: #535b60;
$text-weight: 400;
$text-size: 1.5789473684rem;
#spectaql {
h2 {
color: $text-color;
font-weight: $text-weight;
font-size: $text-size;
}
.doc-heading {
line-height: $line-height-heading;
font-size: $font-size-large-heading;
font-weight: $font-weight-large-heading;
color: #535b60
}
}

View File

@ -0,0 +1,29 @@
All error responses follow the structure outlined below.
```json
{
"errors": [
{
"message": "Error message",
"path": [
"Request path"
]
}
],
"data": {}
}
```
### Field Descriptions:
- **errors**: <br>
An array of error objects. Multiple errors can occur, and each error contains a message and a path field. <br>
**Type: `Array`** <br><br>
- **message**: <br>
A description of the error. <br>
**Type: `String`** <br><br>
- **path**: <br>
Indicates the GraphQL or API operation path where the error occurred. It helps in identifying which operation or resolver triggered the error. <br>
**Type: `Array of Strings`** <br><br>
- **data**: <br>
This field contains the data returned from the request. In the event of an error, the field corresponding to the failed operation will be null, while other successful operations (if any) will still return valid data. <br>
**Type: `Object`** <br>

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

View File

@ -0,0 +1 @@
function scrollSpy(){var l=5,e=document.querySelector("html"),c=(e&&(e=window.getComputedStyle(e).scrollPaddingTop)&&"string"==typeof e&&"auto"!==e&&e.endsWith("px")&&(l+=parseInt(e.split("px")[0])),"nav-scroll-active"),i=null,d=[];function t(){i=null;var e=document.querySelectorAll("[data-traverse-target]");Array.prototype.forEach.call(e,function(e){d.push({id:e.id,top:e.offsetTop})})}var n=debounce(function(){t(),o()},500),o=debounce(function(){var e,t,n,o,r=(e=>{for(var t=e+l,n=0;n<d.length;n++){var o=d[n+1];if(t>=d[n].top&&(!o||t<o.top))return n}return-1})(document.documentElement.scrollTop||document.body.scrollTop);r!==i&&(r=d[i=r],e=document.querySelector("."+c),t=s(r=r?document.querySelector('#nav a[href="#'+r.id+'"]'):null),n=(o=s(e))!==t,o&&n&&u(o,!1),t&&n&&u(t,!0),r&&(r.classList.add(c),r.scrollIntoViewIfNeeded?r.scrollIntoViewIfNeeded():!r.scrollIntoView||0<=(o=(o=r).getBoundingClientRect()).top&&0<=o.left&&o.bottom<=(window.innerHeight||document.documentElement.clientHeight)&&o.right<=(window.innerWidth||document.documentElement.clientWidth)||r.scrollIntoView({block:"center",inline:"start"})),e)&&e.classList.remove(c)},100);function u(e,t){for(var n=t?"add":"remove";e;)e.classList[n]("nav-scroll-expand"),e=s(e.parentNode)}function s(e){return e&&e.closest?e.closest(".nav-group-section"):null}setTimeout(function(){t(),o(),window.addEventListener("scroll",o),window.addEventListener("resize",n)},300)}function toggleMenu(){var t="drawer-open",e=document.querySelector("#spectaql .sidebar-open-button"),n=document.querySelector("#spectaql #sidebar .close-button"),o=document.querySelector("#spectaql .drawer-overlay");function r(){var e=document.querySelector("#spectaql #page");e.classList.contains(t)?e.classList.remove(t):e.classList.add(t)}e.addEventListener("click",r),n.addEventListener("click",r),o.addEventListener("click",r)}function debounce(e,t){var n=null;return function(){clearTimeout(n),n=setTimeout(function(){e.apply(null)},t)}}window.addEventListener("DOMContentLoaded",e=>{toggleMenu(),scrollSpy()});

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,33 @@
spectaql:
targetDir: ./mkdocs/docs/graphql/v3.9.0
logoFile: ./mkdocs/docs/graphql/logo.png
faviconFile: ./mkdocs/docs/graphql/logo.png
displayAllServers: true
themeDir: ./mkdocs/docs/graphql/v3.9.0/custom-theme
introspection:
removeTrailingPeriodFromDescriptions: false
schemaFile: ./chaoscenter/graphql/definitions/shared/*.graphqls
queryNameStrategy: capitalizeFirst
fieldExpansionDepth: 2
spectaqlDirective:
enable: true
extensions:
graphqlScalarExamples: true
info:
title: ChaosCenter API Documentation
description: Litmus Portal provides console and UI experience for managing, monitoring, and events around chaos workflows. Chaos workflows consist of a sequence of experiments run together to achieve the objective of introducing some kind of fault into an application or the Kubernetes platform.
x-introItems:
- title: Common Error Response
file: ./mkdocs/docs/graphql/v3.9.0/error_response_guide.md
servers:
- url: http://localhost:8080
description: Dev
- url: http://localhost:8080/query
description: Prod
production: true

View File

@ -0,0 +1,24 @@
$line-height-heading: 1.2;
$font-size-large-heading: 1.7105263158rem;
$font-weight-large-heading: 700;
$content-padding: 20px;
$text-color: #535b60;
$text-weight: 400;
$text-size: 1.5789473684rem;
#spectaql {
h2 {
color: $text-color;
font-weight: $text-weight;
font-size: $text-size;
}
.doc-heading {
line-height: $line-height-heading;
font-size: $font-size-large-heading;
font-weight: $font-weight-large-heading;
color: #535b60
}
}

View File

@ -0,0 +1,29 @@
All error responses follow the structure outlined below.
```json
{
"errors": [
{
"message": "Error message",
"path": [
"Request path"
]
}
],
"data": {}
}
```
### Field Descriptions:
- **errors**: <br>
An array of error objects. Multiple errors can occur, and each error contains a message and a path field. <br>
**Type: `Array`** <br><br>
- **message**: <br>
A description of the error. <br>
**Type: `String`** <br><br>
- **path**: <br>
Indicates the GraphQL or API operation path where the error occurred. It helps in identifying which operation or resolver triggered the error. <br>
**Type: `Array of Strings`** <br><br>
- **data**: <br>
This field contains the data returned from the request. In the event of an error, the field corresponding to the failed operation will be null, while other successful operations (if any) will still return valid data. <br>
**Type: `Object`** <br>

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 450 B

View File

@ -0,0 +1 @@
function scrollSpy(){var l=5,e=document.querySelector("html"),c=(e&&(e=window.getComputedStyle(e).scrollPaddingTop)&&"string"==typeof e&&"auto"!==e&&e.endsWith("px")&&(l+=parseInt(e.split("px")[0])),"nav-scroll-active"),i=null,d=[];function t(){i=null;var e=document.querySelectorAll("[data-traverse-target]");Array.prototype.forEach.call(e,function(e){d.push({id:e.id,top:e.offsetTop})})}var n=debounce(function(){t(),o()},500),o=debounce(function(){var e,t,n,o,r=(e=>{for(var t=e+l,n=0;n<d.length;n++){var o=d[n+1];if(t>=d[n].top&&(!o||t<o.top))return n}return-1})(document.documentElement.scrollTop||document.body.scrollTop);r!==i&&(r=d[i=r],e=document.querySelector("."+c),t=s(r=r?document.querySelector('#nav a[href="#'+r.id+'"]'):null),n=(o=s(e))!==t,o&&n&&u(o,!1),t&&n&&u(t,!0),r&&(r.classList.add(c),r.scrollIntoViewIfNeeded?r.scrollIntoViewIfNeeded():!r.scrollIntoView||0<=(o=(o=r).getBoundingClientRect()).top&&0<=o.left&&o.bottom<=(window.innerHeight||document.documentElement.clientHeight)&&o.right<=(window.innerWidth||document.documentElement.clientWidth)||r.scrollIntoView({block:"center",inline:"start"})),e)&&e.classList.remove(c)},100);function u(e,t){for(var n=t?"add":"remove";e;)e.classList[n]("nav-scroll-expand"),e=s(e.parentNode)}function s(e){return e&&e.closest?e.closest(".nav-group-section"):null}setTimeout(function(){t(),o(),window.addEventListener("scroll",o),window.addEventListener("resize",n)},300)}function toggleMenu(){var t="drawer-open",e=document.querySelector("#spectaql .sidebar-open-button"),n=document.querySelector("#spectaql #sidebar .close-button"),o=document.querySelector("#spectaql .drawer-overlay");function r(){var e=document.querySelector("#spectaql #page");e.classList.contains(t)?e.classList.remove(t):e.classList.add(t)}e.addEventListener("click",r),n.addEventListener("click",r),o.addEventListener("click",r)}function debounce(e,t){var n=null;return function(){clearTimeout(n),n=setTimeout(function(){e.apply(null)},t)}}window.addEventListener("DOMContentLoaded",e=>{toggleMenu(),scrollSpy()});

File diff suppressed because one or more lines are too long

View File

@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-serviceaccount
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -22,7 +22,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-role
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -59,7 +59,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-rolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -81,7 +81,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -97,7 +97,7 @@ spec:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -106,13 +106,13 @@ spec:
serviceAccountName: litmus
containers:
- name: chaos-operator
image: litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.11.0
image: litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.13.0
command:
- chaos-operator
imagePullPolicy: Always
env:
- name: CHAOS_RUNNER_IMAGE
value: "litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.11.0"
value: "litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.13.0"
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:

View File

@ -16,7 +16,7 @@ spec:
containers:
- name: chaos-scheduler
# Replace this with the built image name
image: litmuschaos.docker.scarf.sh/litmuschaos/chaos-scheduler:3.11.0
image: litmuschaos.docker.scarf.sh/litmuschaos/chaos-scheduler:3.13.0
command:
- chaos-scheduler
imagePullPolicy: IfNotPresent

View File

@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-serviceaccount
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -22,7 +22,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-role
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -59,7 +59,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-rolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl

View File

@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-serviceaccount
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -22,7 +22,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-role
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
@ -62,7 +62,7 @@ metadata:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: v3.11.0
app.kubernetes.io/version: v3.13.0
app.kubernetes.io/component: operator-rolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,105 @@
| title | authors | creation-date | last-updated |
|-------------------------------------------|----------------------------------------------|---------------|--------------|
| Distributed tracing for chaos experiments | [@namkyu1999](https://github.com/namkyu1999) | 2024-06-01 | 2024-06-01 |
# Distributed tracing for chaos experiments
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Use Cases](#use-cases)
- [Implementation Details](#implementation-details)
- [Risks and Mitigations](#risks-and-mitigations)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [References](#references)
- [Implementation PRs](#implementation-prs)
## Summary
This proposal suggests adopting the OpenTelemetry SDK in `chaos-operator` and `chaos-runner` to measure (trace) the performance of chaos experiments.
## Motivation
The phrase `You can't manage what you don't measure` captures the idea behind this proposal. We already offer [monitoring metrics](https://github.com/litmuschaos/litmus/tree/master/monitoring) by exposing a `/metrics` endpoint, but metrics alone are not enough to measure the performance of chaos experiments. Many pods (e.g. argo, probes, runner, ...) are created and completed within a single chaos experiment, and we cannot tell which pod is causing a performance issue, which makes experiments hard to analyze. Distributed tracing helps pinpoint where failures occur and what causes poor performance, and is a key tool for debugging and understanding complex systems.
I was also inspired by [Tekton](https://tekton.dev/)'s [distributed tracing proposal](https://github.com/tektoncd/community/blob/main/teps/0124-distributed-tracing-for-tasks-and-pipelines.md).
### Goals
- Adopt the OpenTelemetry SDK in chaos-operator, chaos-runner, and all components that run as part of a chaos experiment.
- Implement OpenTelemetry tracing with Jaeger.
- Make chaos experiment steps visualizable in Jaeger.
- Add documentation to /monitoring and the Litmus docs.
### Non-Goals
- Not changing the existing chaos-experiment structure.
- Not changing the existing monitoring metrics.
- Not changing the existing API.
## Proposal
### Use Cases
#### Use case 1 - LitmusChaos user
As a user, I want to know what is happening in the chaos experiment so that I can trace the performance of chaos experiments.
#### Use case 2 - OSS Developer
As a developer, I want to know where the performance issue is happening in the chaos experiment so that I can debug and fix the issue.
### Implementation Details
I plan to use the OpenTelemetry SDK, but the following point needs to be considered.
In typical distributed tracing, components communicate over HTTP or gRPC and carry the trace context in the [request header](https://opentelemetry.io/docs/concepts/context-propagation/). In chaos experiments, however, we use the Kubernetes API to create resources, so we need a way to pass the trace context other than a request header.
I made a simple demo to show how to pass the trace context to a child container using environment variables. Here is the [demo](https://github.com/namkyu1999/async-trace).
![demo-arch](./images/distributed-tracing-demo-arch.png)
In this demo there are two containers: a parent container and a child container that the parent creates through the Docker client API. When the child container is created, the parent passes the trace context to it via an environment variable. The child reads the trace context from that variable and sends its spans to Jaeger. Both containers export their spans with the OpenTelemetry SDK, and OpenTelemetry treats the two contexts as a single trace.
So I will use the same approach in the chaos experiment: pass the trace context to the child container through an environment variable, and use the OpenTelemetry SDK to send the spans to the OpenTelemetry Collector.
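To make this concrete, here is a minimal Go sketch of env-var based context propagation with the OpenTelemetry SDK, under the assumption that a tracer provider and the W3C `TraceContext` propagator are registered globally; the `TRACE_*` environment variable names and the helper functions are illustrative, not part of the actual chaos-operator or chaos-runner code.
```go
// Sketch only: a parent process (e.g. chaos-operator) hands its trace context to a
// child pod through environment variables, and the child (e.g. chaos-runner) picks
// it up so that its spans join the same trace. Names are assumptions, not the design.
package main

import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
	corev1 "k8s.io/api/core/v1"
)

// injectTraceEnv serializes the current trace context (W3C traceparent/tracestate)
// into env vars on the container that the parent is about to create.
func injectTraceEnv(ctx context.Context, container *corev1.Container) {
	carrier := propagation.MapCarrier{}
	otel.GetTextMapPropagator().Inject(ctx, carrier)
	for k, v := range carrier {
		container.Env = append(container.Env, corev1.EnvVar{
			Name:  "TRACE_" + k, // e.g. TRACE_traceparent (illustrative naming)
			Value: v,
		})
	}
}

// extractTraceEnv rebuilds the parent's trace context inside the child process.
func extractTraceEnv(ctx context.Context) context.Context {
	carrier := propagation.MapCarrier{
		"traceparent": os.Getenv("TRACE_traceparent"),
		"tracestate":  os.Getenv("TRACE_tracestate"),
	}
	return otel.GetTextMapPropagator().Extract(ctx, carrier)
}

func main() {
	// Child side: continue the parent's trace. A tracer provider is assumed to be
	// registered globally elsewhere; without one, spans are no-ops.
	otel.SetTextMapPropagator(propagation.TraceContext{})
	ctx := extractTraceEnv(context.Background())
	_, span := otel.Tracer("chaos-runner").Start(ctx, "run-experiment")
	defer span.End()
	// ... create experiment pods, calling injectTraceEnv on each container spec ...
}
```
In this scheme the parent calls `injectTraceEnv` on every container it creates, and each child calls `extractTraceEnv` once at startup, so all spans emitted during one chaos experiment end up in a single trace.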
Here's an implementation plan:
- Add the OpenTelemetry SDK to chaos-operator.
- Add the OpenTelemetry SDK to all components running as part of a chaos experiment.
- Send the trace context to the OpenTelemetry Collector.
- Visualize the chaos experiment steps in Jaeger.
- Add documentation to /monitoring and litmus docs.
After the implementation, the chaos experiment steps will be visualized in Jaeger like this.
![result-example](./images/distributed-tracing-example.png)
The API remains unchanged. Enabling tracing is entirely optional for the end user. If tracing is disabled or not configured with the correct tracing backend URL, the reconcilers will function as usual. Therefore, we can categorize this as a non-breaking change.
## Risks and Mitigations
Because the OpenTelemetry SDK performs additional work, it can add latency, so the end user can disable the tracing feature.
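A minimal sketch of how tracing could be kept strictly opt-in, assuming an OTLP/gRPC collector and relying on the standard `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable; the package and function names are hypothetical, not the final wiring.
```go
// Sketch: tracing is only enabled when an OTLP endpoint is configured; otherwise
// the default global tracer provider stays in place and spans are no-ops, so the
// reconcilers behave exactly as before.
package tracing

import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// InitTracing registers a tracer provider only if OTEL_EXPORTER_OTLP_ENDPOINT is
// set. It returns a shutdown function that flushes spans on exit.
func InitTracing(ctx context.Context) (func(context.Context) error, error) {
	if os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT") == "" {
		// Tracing disabled: keep the default no-op behaviour, no extra latency.
		return func(context.Context) error { return nil }, nil
	}

	// The OTLP exporter reads the endpoint from OTEL_EXPORTER_OTLP_ENDPOINT.
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithInsecure(), // assuming a plaintext in-cluster collector
	)
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}
```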
## Upgrade / Downgrade Strategy
## Drawbacks
## Alternatives
## References
- [Environment Variables as Carrier for Inter-Process Propagation to transport context](https://github.com/open-telemetry/opentelemetry-specification/issues/740)
- [Tekton's distributed tracing proposal](https://github.com/tektoncd/community/blob/main/teps/0124-distributed-tracing-for-tasks-and-pipelines.md)
- [noop tracer](https://github.com/open-telemetry/opentelemetry-go/discussions/2659)
## Implementation PRs
| isMerged | PR |
|----------|--------------------------------------------------------------------------|
| N | [chaos-runner](https://github.com/litmuschaos/chaos-runner/pull/221) |
| N | [chaos-operator](https://github.com/litmuschaos/chaos-operator/pull/498) |
| N | [litmus-go](https://github.com/litmuschaos/litmus-go/pull/706) |
| N | [chaos center](https://github.com/litmuschaos/litmus/pull/4746) |

Binary file not shown.

After

Width:  |  Height:  |  Size: 49 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 347 KiB

View File

@ -0,0 +1,71 @@
| title | authors | creation-date | last-updated |
|--------------------------------------------------|------------------------------------------|---------------|--------------|
| Adding a New Chaos Fault - AWS RDS Instance Stop | [@jongwooo](https://github.com/jongwooo) | 2024-09-02 | 2024-09-02 |
# Adding a New Chaos Fault - AWS RDS Instance Stop
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Use Cases](#use-cases)
- [Implementation Details](#implementation-details)
- [Risks and Mitigations](#risks-and-mitigations)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [References](#references)
## Summary
[Amazon Relational Database Service (RDS)](https://aws.amazon.com/en/rds/) is a managed relational database service provided by AWS. It is a fully managed database service that makes it easy to set up, operate, and scale a relational database in the cloud.
So I want to add a new chaos fault (RDS instance stop) to Litmus ChaosHub.
## Motivation
Litmus ChaosHub has plenty of Chaos Faults, but there is no Chaos Fault for RDS, even though RDS is a widely used AWS service. Adding an 'rds instance stop' Chaos Fault to Litmus ChaosHub can help users build more resilient systems.
### Goals
- Add an 'rds instance stop' Chaos Fault to [Litmus ChaosHub](https://hub.litmuschaos.io/)
- Update the [litmus-go](https://github.com/litmuschaos/litmus-go) and [chaos-charts](https://github.com/litmuschaos/chaos-charts) code
### Non-Goals
- Changing Litmus code other than [litmus-go](https://github.com/litmuschaos/litmus-go) and [chaos-charts](https://github.com/litmuschaos/chaos-charts) is a non-goal
## Proposal
### Use Cases
#### Use case 1
In Chaos Studio, users can select the 'rds instance stop' Chaos Fault as part of a Chaos Experiment and compose it with other Chaos Faults.
### Implementation Details
Here's a Chaos Fault Scenario.
![rds-fault-scenario](./images/rds-fault-scenario.png)
#### Phase 1 - Add scenario to the litmus-go repository
I will use the `litmuschaos/go-runner` image, so I am going to add a new fault case to the litmus-go repository.
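As a rough, hypothetical sketch of the Phase 1 fault logic (not the final litmus-go code), the core injection and revert calls with the AWS SDK for Go could look like this; the package and function names are assumptions for illustration.
```go
// Sketch of the core 'rds instance stop' fault: stop the target DB instance to
// inject the fault, start it again to revert after the chaos duration.
package rds

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

// StopRDSInstance stops the target DB instance to inject the fault.
func StopRDSInstance(identifier, region string) error {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String(region)}))
	svc := rds.New(sess)
	_, err := svc.StopDBInstance(&rds.StopDBInstanceInput{
		DBInstanceIdentifier: aws.String(identifier),
	})
	return err
}

// StartRDSInstance reverts the fault once the chaos duration has elapsed.
func StartRDSInstance(identifier, region string) error {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String(region)}))
	svc := rds.New(sess)
	_, err := svc.StartDBInstance(&rds.StartDBInstanceInput{
		DBInstanceIdentifier: aws.String(identifier),
	})
	return err
}
```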
#### Phase 2 - Add a new Chaos Fault to the Litmus ChaosHub
After the Phase 1 PR is merged, I will raise a PR that adds the 'rds instance stop' Chaos Fault to the `chaos-charts` repository. Once both are done, users will be able to easily inject this fault into their AWS RDS instances.
## Risks and Mitigations
We need to grant proper RBAC permissions to the runner container. Granting overly broad permissions may affect other systems.
## Upgrade / Downgrade Strategy
## Drawbacks
## Alternatives
## References
- [Amazon Relational Database Service (RDS)](https://aws.amazon.com/en/rds/)

View File

@ -0,0 +1,38 @@
| title | authors | creation-date | last-updated |
|-------|------------------------------------------|---------------|--------------|
| Replace MongoDB features not supported by AWS | [@kwx4957](https://github.com/kwx4957) | | |
### Summary
Some operators that are supported by MongoDB are not supported by AWS DocumentDB. Replacing them with alternative operators or with application code would make LitmusChaos more developer-friendly on AWS.
### Goals
- Replace features not supported by the cloud environment (AWS DocumentDB) with alternative operators or with application code
### Implementation Details
Running the [AWS API compatibility scan](https://docs.aws.amazon.com/documentdb/latest/developerguide/migration-playbook.html) against the codebase shows that two operators we use, `$facet` and `$bucket`, are not supported by DocumentDB. They are used in the GraphQL server at the locations listed below; replacing them with supported operators or with application-side code (see the sketch after the scan output) would make it easier to run LitmusChaos on AWS.
```
The following 2 unsupported operators were found:
$facet | found 8 time(s)
$bucket | found 1 time(s)
Unsupported operators by filename and line number:
$facet | lines = found 8 time(s)
../../litmus/chaoscenter/graphql/server/pkg/chaos_infrastructure/service.go | lines = [664, 841]
../../litmus/chaoscenter/graphql/server/pkg/chaos_experiment/handler/handler.go | lines = [733, 955, 1131]
../../litmus/chaoscenter/graphql/server/pkg/chaos_experiment_run/handler/handler.go | lines = [531]
../../litmus/chaoscenter/graphql/server/pkg/environment/handler/handler.go | lines = [346]
../../litmus/chaoscenter/graphql/server/pkg/chaoshub/service.go | lines = [922]
$bucket | lines = found 1 time(s)
../../litmus/chaoscenter/graphql/server/pkg/chaos_experiment/handler/handler.go | lines = [1106]
```
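For illustration, here is a hypothetical sketch of the pagination pattern that `$facet` is commonly used for, rewritten as two DocumentDB-compatible queries with the official MongoDB Go driver; the collection and field names are assumptions, not the actual handler code.
```go
// Sketch: a typical `$match -> $facet{count, data}` pipeline returns a page of
// documents plus the total match count. The same result can be produced with a
// CountDocuments call and a paginated Find, both supported by DocumentDB.
package mongodb

import (
	"context"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

type Page struct {
	TotalCount int64
	Documents  []bson.M
}

// ListExperiments replaces the $facet-based pipeline with two plain queries.
func ListExperiments(ctx context.Context, coll *mongo.Collection, projectID string, skip, limit int64) (*Page, error) {
	filter := bson.D{{Key: "project_id", Value: projectID}}

	total, err := coll.CountDocuments(ctx, filter)
	if err != nil {
		return nil, err
	}

	opts := options.Find().
		SetSort(bson.D{{Key: "updated_at", Value: -1}}).
		SetSkip(skip).
		SetLimit(limit)

	cursor, err := coll.Find(ctx, filter, opts)
	if err != nil {
		return nil, err
	}
	var docs []bson.M
	if err := cursor.All(ctx, &docs); err != nil {
		return nil, err
	}
	return &Page{TotalCount: total, Documents: docs}, nil
}
```
The trade-off is one extra round trip per request, in exchange for staying within the operator set that DocumentDB supports.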
## References
[issue#4459](https://github.com/litmuschaos/litmus/issues/4459)
[AWS support api](https://docs.aws.amazon.com/documentdb/latest/developerguide/mongo-apis.html#w144aac17c19b5b3)
[Azure support api](https://learn.microsoft.com/es-es/azure/cosmos-db/mongodb/feature-support-36#aggregation-stages)

View File

@ -141,7 +141,7 @@ ChaosExperiment CR은 <a href="https://hub.litmuschaos.io" target="_blank">hub.l
(Organizations using Litmus for their chaos engineering practice are requested to send a PR to the page above.)
## License
## License
Litmus is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE) for the full license text. Some projects used by the Litmus project may be governed by different licenses; please refer to their respective licenses.