Adding multiple workloads and pod filtering (#3836)

* feat(selectors): Adding multiple workloads and pod filtering (#3780)

* feat(selectors): Adding multiple workloads and pod filtering

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating pod owner ref code

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* chore(selectors): adding frontend changes for kubeobject subscription (#3808)

* chore(selectors): adding frontend changes for kubeobject subscription

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating image tag

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* adding upgrade agent for 3.0-beta1 (#3826)

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

* adding upgrade agent for 3.0-beta1 (#3829)

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>
Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* fix import order lint

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* fix import order lint

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* Added installation manifests for 3.0-beta1 (#3830)

* feat(selectors): Adding multiple workloads and pod filtering (#3780)

* feat(selectors): Adding multiple workloads and pod filtering

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating pod owner ref code

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* chore(selectors): adding frontend changes for kubeobject subscription (#3808)

* chore(selectors): adding frontend changes for kubeobject subscription

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating image tag

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* adding upgrade agent for 3.0-beta1 (#3826)

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

* Added installation manifests for 3.0-beta1

Signed-off-by: Jonsy13 <vedant.shrotria@harness.io>

* Added hub branch for 3.0-beta1

Signed-off-by: Jonsy13 <vedant.shrotria@harness.io>

* Added hub branch for 3.0-beta1

Signed-off-by: Jonsy13 <vedant.shrotria@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>
Signed-off-by: Jonsy13 <vedant.shrotria@harness.io>
Co-authored-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
Co-authored-by: Adarshkumar14 <adarsh.kumar@harness.io>

* Updated manifest readme for 3.0-Beta1 (#3832)

* feat(selectors): Adding multiple workloads and pod filtering (#3780)

* feat(selectors): Adding multiple workloads and pod filtering

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating pod owner ref code

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* chore(selectors): adding frontend changes for kubeobject subscription (#3808)

* chore(selectors): adding frontend changes for kubeobject subscription

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* updating image tag

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

* adding upgrade agent for 3.0-beta1 (#3826)

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

* adding upgrade agent for 3.0-beta1 (#3829)

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>

* Updated manifest readme for 3.0-Beta1

Signed-off-by: Amit Kumar Das <amit.das@harness.io>

* Minor fix

Signed-off-by: Amit Kumar Das <amit.das@harness.io>

* Minor fix in readme

Signed-off-by: Amit Kumar Das <amit.das@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>
Signed-off-by: Amit Kumar Das <amit.das@harness.io>
Co-authored-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
Co-authored-by: Adarshkumar14 <adarsh.kumar@harness.io>

* fixing build

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>

Signed-off-by: Shubham Chaudhary <shubham.chaudhary@harness.io>
Signed-off-by: Adarsh kumar <adarsh.kumar@harness.io>
Signed-off-by: Jonsy13 <vedant.shrotria@harness.io>
Signed-off-by: Amit Kumar Das <amit.das@harness.io>
Co-authored-by: Adarshkumar14 <adarsh.kumar@harness.io>
Co-authored-by: Vedant Shrotria <vedant.shrotria@harness.io>
Co-authored-by: Amit Kumar Das <amit.das@harness.io>
Shubham Chaudhary 2022-11-21 18:31:23 +05:30 committed by GitHub
parent cb85d9250b
commit 44e039d285
21 changed files with 6437 additions and 28248 deletions

View File

@@ -22,23 +22,23 @@ ChaosCenter provides console and UI experience for managing, monitoring, and eve
#### Applying k8s manifest
> Litmus-2.14.0 (Stable) Cluster Scope manifest
> Litmus-3.0-beta1 Cluster Scope manifest
```bash
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/2.14.0/litmus-2.14.0.yaml
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/mkdocs/docs/3.0-beta1/litmus-3.0-beta1.yaml
```
Or
> Litmus-2.14.0 (Stable) Namespaced Scope manifest.
> Litmus-3.0-beta1 Namespaced Scope manifest.
```bash
#Create a namespace eg: litmus
kubectl create ns litmus
#Install CRDs, if SELF_AGENT env is set to TRUE
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/2.14.0/litmus-portal-crds-2.14.0.yml
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/mkdocs/docs/3.0-beta1/litmus-portal-crds-3.0-beta1.yml
#Install ChaosCenter
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/2.14.0/litmus-namespaced-2.14.0.yaml -n litmus
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/mkdocs/docs/3.0-beta1/litmus-namespaced-3.0-beta1.yaml -n litmus
```
Or
@@ -46,7 +46,7 @@ Or
> Master (Latest) Cluster scope. Install in litmus namespace by default.
```bash
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-portal/manifests/cluster-k8s-manifest.yml
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/litmus-portal/manifests/cluster-k8s-manifest.yml
```
Or
@@ -57,9 +57,9 @@ Or
#Create a namespace eg: litmus
kubectl create ns litmus
#Install CRDs, if SELF_AGENT env is set to TRUE
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-portal/manifests/litmus-portal-crds.yml
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/litmus-portal/manifests/litmus-portal-crds.yml
#Install ChaosCenter
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-portal/manifests/namespace-k8s-manifest.yml -n litmus
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/dev-3.x/litmus-portal/manifests/namespace-k8s-manifest.yml -n litmus
```
#### Configuration Options for Cluster scope.

View File

@@ -8,10 +8,10 @@ import (
"strings"
"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/graphql"
"github.com/sirupsen/logrus"
"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/types"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/workloads"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
@@ -33,49 +33,63 @@ func GetKubernetesObjects(request types.KubeObjRequest) ([]*types.KubeObject, er
return nil, err
}
resourceType := schema.GroupVersionResource{
Group: request.KubeGVRRequest.Group,
Version: request.KubeGVRRequest.Version,
Resource: request.KubeGVRRequest.Resource,
}
_, dynamicClient, err := GetDynamicAndDiscoveryClient()
if err != nil {
return nil, err
}
var ObjData []*types.KubeObject
if strings.ToLower(AgentScope) == "namespace" {
dataList, err := GetObjectDataByNamespace(AgentNamespace, dynamicClient, resourceType)
if len(request.Workloads) != 0 {
ObjData, err = getPodsFromWorkloads(request.Workloads, clientSet, dynamicClient)
if err != nil {
return nil, err
}
KubeObj := &types.KubeObject{
Namespace: AgentNamespace,
Data: dataList,
}
ObjData = append(ObjData, KubeObj)
} else {
namespace, err := clientSet.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return nil, err
}
var gvrList []schema.GroupVersionResource
if len(namespace.Items) > 0 {
for _, namespace := range namespace.Items {
podList, err := GetObjectDataByNamespace(namespace.GetName(), dynamicClient, resourceType)
if err != nil {
return nil, err
}
KubeObj := &types.KubeObject{
Namespace: namespace.GetName(),
Data: podList,
}
ObjData = append(ObjData, KubeObj)
for _, req := range request.KubeGVRRequest {
resourceType := schema.GroupVersionResource{
Group: req.Group,
Version: req.Version,
Resource: req.Resource,
}
} else {
return nil, errors.New("no namespace available")
gvrList = append(gvrList, resourceType)
}
if strings.ToLower(AgentScope) == "namespace" {
dataList, err := GetObjectDataByNamespace(AgentNamespace, dynamicClient, gvrList)
if err != nil {
return nil, err
}
KubeObj := &types.KubeObject{
Namespace: AgentNamespace,
Data: dataList,
}
ObjData = append(ObjData, KubeObj)
} else {
namespace, err := clientSet.CoreV1().Namespaces().List(context.TODO(), v1.ListOptions{})
if err != nil {
return nil, err
}
if len(namespace.Items) > 0 {
for _, namespace := range namespace.Items {
dataList, err := GetObjectDataByNamespace(namespace.GetName(), dynamicClient, gvrList)
if err != nil {
return nil, err
}
KubeObj := &types.KubeObject{
Namespace: namespace.GetName(),
Data: dataList,
}
ObjData = append(ObjData, KubeObj)
}
} else {
return nil, errors.New("no namespace available")
}
}
}
kubeData, _ := json.Marshal(ObjData)
var kubeObjects []*types.KubeObject
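For context, a minimal sketch of the extended request this function now accepts, built from the `types` definitions shown later in this diff (IDs and workload names are placeholders, not part of the commit):

```go
// Sketch: when Workloads is non-empty, GetKubernetesObjects derives the
// pods from the listed workloads and skips the GVR list; otherwise each
// entry in KubeGVRRequest is listed via the dynamic client.
func exampleRequest() types.KubeObjRequest {
	return types.KubeObjRequest{
		RequestID:  "req-123",     // placeholder
		ClusterID:  "cluster-abc", // placeholder
		ObjectType: "kubeobject",
		Workloads: []types.Workload{
			{Name: "carts", Kind: "deployment", Namespace: "sock-shop"},
			{Name: "orders-db", Kind: "statefulset", Namespace: "sock-shop"},
		},
		KubeGVRRequest: []*types.KubeGVRRequest{
			{Group: "", Version: "v1", Resource: "pods"},
			{Group: "apps", Version: "v1", Resource: "deployments"},
		},
	}
}
```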
@@ -86,24 +100,45 @@ func GetKubernetesObjects(request types.KubeObjRequest) ([]*types.KubeObject, er
return kubeObjects, nil
}
//GetObjectDataByNamespace uses dynamic client to fetch Kubernetes Objects data.
func GetObjectDataByNamespace(namespace string, dynamicClient dynamic.Interface, resourceType schema.GroupVersionResource) ([]types.ObjectData, error) {
list, err := dynamicClient.Resource(resourceType).Namespace(namespace).List(context.TODO(), metav1.ListOptions{})
var kubeObjects []types.ObjectData
func getPodsFromWorkloads(resources []types.Workload, k8sClient *kubernetes.Clientset, dynamicClient dynamic.Interface) ([]*types.KubeObject, error) {
var ObjData []*types.KubeObject
podNsMap, err := workloads.GetPodsFromWorkloads(resources, k8sClient, dynamicClient)
if err != nil {
return kubeObjects, nil
return nil, err
}
for _, list := range list.Items {
listInfo := types.ObjectData{
Name: list.GetName(),
UID: list.GetUID(),
Namespace: list.GetNamespace(),
APIVersion: list.GetAPIVersion(),
CreationTimestamp: list.GetCreationTimestamp(),
TerminationGracePeriods: list.GetDeletionGracePeriodSeconds(),
Labels: list.GetLabels(),
for ns, podList := range podNsMap {
var data []types.ObjectData
for _, pod := range podList {
data = append(data, types.ObjectData{
Name: pod,
Kind: "Pod",
})
}
ObjData = append(ObjData, &types.KubeObject{
Namespace: ns,
Data: data,
})
}
return ObjData, nil
}
//GetObjectDataByNamespace uses dynamic client to fetch Kubernetes Objects data.
func GetObjectDataByNamespace(namespace string, dynamicClient dynamic.Interface, gvrList []schema.GroupVersionResource) ([]types.ObjectData, error) {
var kubeObjects []types.ObjectData
for _, gvr := range gvrList {
list, err := dynamicClient.Resource(gvr).Namespace(namespace).List(context.TODO(), v1.ListOptions{})
if err != nil {
return kubeObjects, nil
}
for _, list := range list.Items {
listInfo := types.ObjectData{
Name: list.GetName(),
Kind: list.GetKind(),
Labels: list.GetLabels(),
}
kubeObjects = append(kubeObjects, listInfo)
}
kubeObjects = append(kubeObjects, listInfo)
}
return kubeObjects, nil
}
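A usage sketch for the reworked helper, assuming an initialized `dynamic.Interface` named `dynamicClient`:

```go
// Sketch: a single call now lists several resource kinds in one namespace.
// Note: per the loop above, a failed list for one GVR is skipped rather
// than returned as an error.
gvrList := []schema.GroupVersionResource{
	{Group: "", Version: "v1", Resource: "pods"},
	{Group: "apps", Version: "v1", Resource: "deployments"},
}
data, err := GetObjectDataByNamespace("litmus", dynamicClient, gvrList)
if err != nil {
	logrus.Fatal(err)
}
for _, obj := range data {
	fmt.Printf("%s/%s labels=%v\n", obj.Kind, obj.Name, obj.Labels)
}
```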
@@ -130,6 +165,17 @@ func SendKubeObjects(clusterData map[string]string, kubeObjectRequest types.Kube
payload, err := GenerateKubeObject(clusterData["CLUSTER_ID"], clusterData["ACCESS_KEY"], clusterData["VERSION"], kubeObjectRequest)
if err != nil {
logrus.WithError(err).Print("Error while getting KubeObject Data")
clusterID := `{clusterID: \"` + clusterData["CLUSTER_ID"] + `\", version: \"` + clusterData["VERSION"] + `\", accessKey: \"` + clusterData["ACCESS_KEY"] + `\"}`
mutation := `{ clusterID: ` + clusterID + `, requestID:\"` + kubeObjectRequest.RequestID + `\", kubeObj:\"` + "failed to get kubeobjects" + `\"}`
var payload = []byte(`{"query":"mutation { kubeObj(request:` + mutation + ` )}"}`)
body, reqErr := graphql.SendRequest(clusterData["SERVER_ADDR"], payload)
if reqErr != nil {
logrus.Print(reqErr.Error())
return reqErr
}
logrus.Println("Response", body)
return err
}
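The fallback mutation above is assembled by concatenating pre-escaped string fragments; as a sketch (not part of this commit), the same payload could be produced with `fmt.Sprintf`'s `%q` verb and `encoding/json`, which handle the escaping:

```go
// Sketch: identical fallback notification with library-managed escaping.
query := fmt.Sprintf(
	`mutation { kubeObj(request: { clusterID: { clusterID: %q, version: %q, accessKey: %q }, requestID: %q, kubeObj: %q }) }`,
	clusterData["CLUSTER_ID"], clusterData["VERSION"],
	clusterData["ACCESS_KEY"], kubeObjectRequest.RequestID,
	"failed to get kubeobjects",
)
payload, _ := json.Marshal(map[string]string{"query": query})
body, reqErr := graphql.SendRequest(clusterData["SERVER_ADDR"], payload)
```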

View File

@@ -1,16 +1,18 @@
package types
import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
)
type KubeObjRequest struct {
RequestID string
ClusterID string `json:"clusterID"`
ObjectType string `json:"objectType"`
KubeGVRRequest KubeGVRRequest `json:"kubeObjRequest"`
ClusterID string `json:"clusterID"`
ObjectType string `json:"objectType"`
Workloads []Workload `json:"workloads"`
KubeGVRRequest []*KubeGVRRequest `json:"kubeObjRequest"`
}
// Workload consists of workload details
type Workload struct {
Name string `json:"name"`
Kind string `json:"kind"`
Namespace string `json:"namespace"`
}
type KubeGVRRequest struct {
@@ -27,13 +29,7 @@ type KubeObject struct {
//ObjectData consists of Kubernetes Objects related details
type ObjectData struct {
Name string `json:"name"`
UID types.UID `json:"uid"`
Namespace string `json:"namespace"`
APIVersion string `json:"apiVersion"`
CreationTimestamp metav1.Time `json:"creationTimestamp"`
Containers []v1.Container `json:"containers"`
TerminationGracePeriods *int64 `json:"terminationGracePeriods"`
Volumes []v1.Volume `json:"volumes"`
Labels map[string]string `json:"labels"`
Name string `json:"name"`
Kind string `json:"kind"`
Labels map[string]string `json:"labels"`
}
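With the trimmed struct, each serialized entry now carries only name, kind, and labels; a quick sketch of the resulting JSON:

```go
// Sketch: the slimmed ObjectData on the wire.
b, _ := json.Marshal(types.ObjectData{
	Name:   "carts-6c56f7cb4-x2zwm", // hypothetical pod name
	Kind:   "Pod",
	Labels: map[string]string{"app": "carts"},
})
fmt.Println(string(b))
// {"name":"carts-6c56f7cb4-x2zwm","kind":"Pod","labels":{"app":"carts"}}
```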

View File

@@ -0,0 +1,199 @@
// Package workloads implements utility to derive the pods from the parent workloads
package workloads
import (
"context"
"fmt"
"strings"
"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/types"
kcorev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
)
var (
gvrrc = schema.GroupVersionResource{
Group: "",
Version: "v1",
Resource: "replicacontrollers",
}
gvrrs = schema.GroupVersionResource{
Group: "apps",
Version: "v1",
Resource: "replicasets",
}
)
// GetPodsFromWorkloads derives the pods from the parent workloads
func GetPodsFromWorkloads(workloads []types.Workload, client *kubernetes.Clientset, dynamicClient dynamic.Interface) (map[string][]string, error) {
workloadMap := aggregateWorkloadsByNamespace(workloads)
result := make(map[string][]string)
for ns, w := range workloadMap {
allPods, err := getAllPods(ns, client)
if err != nil {
return nil, err
}
pods, err := getPodsByAppKind(ns, w.WorkloadKindMap, allPods, client, dynamicClient)
if err != nil {
return nil, err
}
result[ns] = removeDuplicateItems(pods)
}
return result, nil
}
func getPodsByAppKind(ns string, wldMap map[string][]string, allPods *kcorev1.PodList, client *kubernetes.Clientset, dynamicClient dynamic.Interface) ([]string, error) {
podsFromWld, err := getPodsFromWorkload(wldMap, allPods, dynamicClient)
if err != nil {
return nil, err
}
podsFromSvc, err := getPodsFromServices(ns, wldMap["service"], client)
if err != nil {
return nil, err
}
return append(podsFromWld, podsFromSvc...), nil
}
func getPodsFromWorkload(wld map[string][]string, allPods *kcorev1.PodList, dynamicClient dynamic.Interface) ([]string, error) {
var pods []string
for _, r := range allPods.Items {
ownerType, ownerName, err := getPodOwnerTypeAndName(&r, dynamicClient)
if err != nil {
return nil, err
}
if ownerName == "" || ownerType == "" {
continue
}
if matchPodOwnerWithWorkloads(ownerName, ownerType, wld) {
pods = append(pods, r.Name)
}
}
return pods, nil
}
func getPodsFromServices(ns string, wld []string, client *kubernetes.Clientset) ([]string, error) {
var pods []string
for _, svcName := range wld {
svc, err := client.CoreV1().Services(ns).Get(context.Background(), svcName, v1.GetOptions{})
if err != nil {
return nil, err
}
if svc.Spec.Selector == nil {
// skip selector-less services instead of discarding pods collected so far
continue
}
var svcSelector string
for k, v := range svc.Spec.Selector {
if svcSelector == "" {
svcSelector += fmt.Sprintf("%s=%s", k, v)
continue
}
svcSelector += fmt.Sprintf(",%s=%s", k, v)
}
res, err := client.CoreV1().Pods(svc.Namespace).List(context.Background(), v1.ListOptions{LabelSelector: svcSelector})
if err != nil {
return nil, err
}
for _, p := range res.Items {
pods = append(pods, p.Name)
}
}
return pods, nil
}
func getPodOwnerTypeAndName(pod *kcorev1.Pod, dynamicClient dynamic.Interface) (parentType, parentName string, err error) {
for _, owner := range pod.GetOwnerReferences() {
parentName = owner.Name
if owner.Kind == "StatefulSet" || owner.Kind == "DaemonSet" {
return strings.ToLower(owner.Kind), parentName, nil
}
if owner.Kind == "ReplicaSet" && strings.HasSuffix(owner.Name, pod.Labels["pod-template-hash"]) {
return getParent(owner.Name, pod.Namespace, gvrrs, dynamicClient)
}
if owner.Kind == "ReplicaController" {
return getParent(owner.Name, pod.Namespace, gvrrc, dynamicClient)
}
}
return parentType, parentName, nil
}
func getParent(name, namespace string, gvr schema.GroupVersionResource, dynamicClient dynamic.Interface) (string, string, error) {
res, err := dynamicClient.Resource(gvr).Namespace(namespace).Get(context.Background(), name, v1.GetOptions{})
if err != nil {
return "", "", err
}
for _, v := range res.GetOwnerReferences() {
kind := strings.ToLower(v.Kind)
if kind == "deployment" || kind == "rollout" || kind == "deploymentconfig" {
return kind, v.Name, nil
}
}
return "", "", nil
}
func matchPodOwnerWithWorkloads(name, kind string, workloadMap map[string][]string) bool {
if val, ok := workloadMap[kind]; ok {
for _, v := range val {
if v == name {
return true
}
}
}
return false
}
func aggregateWorkloadsByNamespace(workloads []types.Workload) map[string]workload {
result := make(map[string]workload)
for _, w := range workloads {
data, ok := result[w.Namespace]
if !ok {
result[w.Namespace] = workload{
WorkloadKindMap: map[string][]string{
w.Kind: {w.Name},
},
}
continue
}
data.WorkloadKindMap[w.Kind] = append(data.WorkloadKindMap[w.Kind], w.Name)
result[w.Namespace] = data
}
return result
}
func getAllPods(namespace string, client *kubernetes.Clientset) (*kcorev1.PodList, error) {
return client.CoreV1().Pods(namespace).List(context.Background(), v1.ListOptions{})
}
type workload struct {
WorkloadKindMap map[string][]string
}
func removeDuplicateItems(slice []string) []string {
var unique []string
for _, v := range slice {
if !contains(v, unique) {
unique = append(unique, v)
}
}
return unique
}
func contains(val string, slice []string) bool {
for _, v := range slice {
if val == v {
return true
}
}
return false
}
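Putting the new package together, a minimal end-to-end usage sketch (assumes in-cluster credentials; workload names and namespace are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/types"
	"github.com/litmuschaos/litmus/litmus-portal/cluster-agents/subscriber/pkg/workloads"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the process runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	clientSet, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	dynClient, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Resolve the pods behind a deployment and a service (placeholder names).
	podsByNs, err := workloads.GetPodsFromWorkloads([]types.Workload{
		{Name: "carts", Kind: "deployment", Namespace: "sock-shop"},
		{Name: "front-end", Kind: "service", Namespace: "sock-shop"},
	}, clientSet, dynClient)
	if err != nil {
		log.Fatal(err)
	}
	for ns, pods := range podsByNs {
		fmt.Printf("%s -> %v\n", ns, pods)
	}
}
```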

File diff suppressed because it is too large

View File

@@ -40,7 +40,7 @@ export interface KubeObjRequest {
request: {
clusterID: string;
objectType: string;
kubeObjRequest: GVRRequest;
kubeObjRequest: GVRRequest[];
};
}

View File

@@ -132,11 +132,13 @@ const TargetApplication: React.FC<TargetApplicationProp> = ({ gotoStep }) => {
request: {
clusterID,
objectType: 'kubeobject',
kubeObjRequest: {
group: GVRObj.group,
version: GVRObj.version,
resource: GVRObj.resource,
},
kubeObjRequest: [
{
group: GVRObj.group,
version: GVRObj.version,
resource: GVRObj.resource,
},
],
},
},
fetchPolicy: 'network-only',

View File

@@ -200,11 +200,13 @@ const DashboardMetadataForm: React.FC<DashboardMetadataFormProps> = ({
request: {
clusterID: dashboardDetails.agentID ?? '',
objectType: 'kubeobject',
kubeObjRequest: {
group: kubeObjInput.group,
version: kubeObjInput.version,
resource: kubeObjInput.resource,
},
kubeObjRequest: [
{
group: kubeObjInput.group,
version: kubeObjInput.version,
resource: kubeObjInput.resource,
},
],
},
},
onSubscriptionComplete: () => {

View File

@@ -397,9 +397,11 @@ github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfc
github.com/coreos/prometheus-operator v0.34.0/go.mod h1:Li6rMllG/hYIyXfMuvUwhyC+hqwJVHdsDdP21hypT1M=
github.com/coreos/prometheus-operator v0.38.1-0.20200424145508-7e176fda06cc/go.mod h1:erio69w1R/aC14D5nfvAXSlE8FT8jt2Hnavc50Dp33A=
github.com/coreos/rkt v1.30.0/go.mod h1:O634mlH6U7qk87poQifK6M2rsFNt+FyUTWNMnP1hF1U=
github.com/cpuguy83/go-md2man v1.0.10 h1:BSKMNlYxDvnunlTymqtgONjNnaRV1sTpcovwwjF22jk=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1 h1:r/myEWzV9lfsM1tFLgDyu0atFtJ1fXn261LKYj/3DxU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
@@ -1609,8 +1611,10 @@ github.com/rubenv/sql-migrate v0.0.0-20191025130928-9355dd04f4b3/go.mod h1:WS0rl
github.com/rubenv/sql-migrate v0.0.0-20200212082348-64f95ea68aa3/go.mod h1:rtQlpHw+eR6UrqaS3kX1VYeaCxzCVdimDS7g5Ln4pPc=
github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
github.com/russross/blackfriday v0.0.0-20170610170232-067529f716f4/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
@@ -1756,8 +1760,10 @@ github.com/ultraware/funlen v0.0.1/go.mod h1:Dp4UiAus7Wdb9KUZsYWZEWiRzGuM2kXM1lP
github.com/ultraware/funlen v0.0.2/go.mod h1:Dp4UiAus7Wdb9KUZsYWZEWiRzGuM2kXM1lPbfaF6xhA=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.2 h1:gsqYFH8bb9ekPA12kRo0hfjngWQjkJPlN9R0N78BoUo=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.1.1/go.mod h1:SE9GqnLQmjVa0iPEY0f1w3ygNIYcIJ0OKPMoW2caLfQ=
github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
@@ -1997,6 +2003,7 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20170915142106-8351a756f30f/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -2400,6 +2407,7 @@ golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff/go.mod h1:YD9qOF0M9xpSpdWTBbzEl5e/RnCefISl8E5Noe10jFM=
golang.org/x/tools v0.1.8/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.11 h1:loJ25fNOEhSXfHrpoGj91eCUThwdNX6u24rO1xnNteY=
golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -4706,8 +4706,17 @@ type RegisterClusterResponse {
clusterName: String!
}
"""
Response received for fetching GQL server version
"""
type ServerVersionResponse {
"""
Returns server version key
"""
key: String!
"""
Returns server version value
"""
value: String!
}
@@ -4716,6 +4725,7 @@ extend type Query {
Returns version of gql server
"""
getServerVersion: ServerVersionResponse!
# CLUSTER OPERATIONS
"""
Returns clusters with a particular cluster type in the project
@@ -5111,7 +5121,8 @@ input KubeObjectRequest {
Type of the Kubernetes object to be fetched
"""
objectType: String!
kubeObjRequest: KubeGVRRequest!
kubeObjRequest: [KubeGVRRequest]
workloads: [Workload]
}
input KubeGVRRequest {
@@ -5119,6 +5130,12 @@ input KubeGVRRequest {
version: String!
resource: String!
}
input Workload {
name: String!
kind: String!
namespace: String!
}
`, BuiltIn: false},
&ast.Source{Name: "graph/myhub.graphqls", Input: `enum AuthType {
BASIC
@@ -25028,7 +25045,13 @@ func (ec *executionContext) unmarshalInputKubeObjectRequest(ctx context.Context,
}
case "kubeObjRequest":
var err error
it.KubeObjRequest, err = ec.unmarshalNKubeGVRRequest2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx, v)
it.KubeObjRequest, err = ec.unmarshalOKubeGVRRequest2ᚕᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx, v)
if err != nil {
return it, err
}
case "workloads":
var err error
it.Workloads, err = ec.unmarshalOWorkload2ᚕᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx, v)
if err != nil {
return it, err
}
@@ -26304,6 +26327,36 @@ func (ec *executionContext) unmarshalInputWorkflowSortInput(ctx context.Context,
return it, nil
}
func (ec *executionContext) unmarshalInputWorkload(ctx context.Context, obj interface{}) (model.Workload, error) {
var it model.Workload
var asMap = obj.(map[string]interface{})
for k, v := range asMap {
switch k {
case "name":
var err error
it.Name, err = ec.unmarshalNString2string(ctx, v)
if err != nil {
return it, err
}
case "kind":
var err error
it.Kind, err = ec.unmarshalNString2string(ctx, v)
if err != nil {
return it, err
}
case "namespace":
var err error
it.Namespace, err = ec.unmarshalNString2string(ctx, v)
if err != nil {
return it, err
}
}
}
return it, nil
}
// endregion **************************** input.gotpl *****************************
// region ************************** interface.gotpl ***************************
@@ -30570,18 +30623,6 @@ func (ec *executionContext) marshalNInt2int(ctx context.Context, sel ast.Selecti
return res
}
func (ec *executionContext) unmarshalNKubeGVRRequest2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx context.Context, v interface{}) (model.KubeGVRRequest, error) {
return ec.unmarshalInputKubeGVRRequest(ctx, v)
}
func (ec *executionContext) unmarshalNKubeGVRRequest2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx context.Context, v interface{}) (*model.KubeGVRRequest, error) {
if v == nil {
return nil, nil
}
res, err := ec.unmarshalNKubeGVRRequest2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx, v)
return &res, err
}
func (ec *executionContext) unmarshalNKubeObjectData2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeObjectData(ctx context.Context, v interface{}) (model.KubeObjectData, error) {
return ec.unmarshalInputKubeObjectData(ctx, v)
}
@@ -32432,6 +32473,38 @@ func (ec *executionContext) marshalOInt2ᚖint(ctx context.Context, sel ast.Sele
return ec.marshalOInt2int(ctx, sel, *v)
}
func (ec *executionContext) unmarshalOKubeGVRRequest2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx context.Context, v interface{}) (model.KubeGVRRequest, error) {
return ec.unmarshalInputKubeGVRRequest(ctx, v)
}
func (ec *executionContext) unmarshalOKubeGVRRequest2ᚕᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx context.Context, v interface{}) ([]*model.KubeGVRRequest, error) {
var vSlice []interface{}
if v != nil {
if tmp1, ok := v.([]interface{}); ok {
vSlice = tmp1
} else {
vSlice = []interface{}{v}
}
}
var err error
res := make([]*model.KubeGVRRequest, len(vSlice))
for i := range vSlice {
res[i], err = ec.unmarshalOKubeGVRRequest2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx, vSlice[i])
if err != nil {
return nil, err
}
}
return res, nil
}
func (ec *executionContext) unmarshalOKubeGVRRequest2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx context.Context, v interface{}) (*model.KubeGVRRequest, error) {
if v == nil {
return nil, nil
}
res, err := ec.unmarshalOKubeGVRRequest2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐKubeGVRRequest(ctx, v)
return &res, err
}
func (ec *executionContext) marshalOLabelValue2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐLabelValue(ctx context.Context, sel ast.SelectionSet, v model.LabelValue) graphql.Marshaler {
return ec._LabelValue(ctx, sel, &v)
}
@@ -33575,6 +33648,38 @@ func (ec *executionContext) marshalOWorkflowTemplate2ᚖgithubᚗcomᚋlitmuscha
return ec._WorkflowTemplate(ctx, sel, v)
}
func (ec *executionContext) unmarshalOWorkload2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx context.Context, v interface{}) (model.Workload, error) {
return ec.unmarshalInputWorkload(ctx, v)
}
func (ec *executionContext) unmarshalOWorkload2ᚕᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx context.Context, v interface{}) ([]*model.Workload, error) {
var vSlice []interface{}
if v != nil {
if tmp1, ok := v.([]interface{}); ok {
vSlice = tmp1
} else {
vSlice = []interface{}{v}
}
}
var err error
res := make([]*model.Workload, len(vSlice))
for i := range vSlice {
res[i], err = ec.unmarshalOWorkload2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx, vSlice[i])
if err != nil {
return nil, err
}
}
return res, nil
}
func (ec *executionContext) unmarshalOWorkload2ᚖgithubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx context.Context, v interface{}) (*model.Workload, error) {
if v == nil {
return nil, nil
}
res, err := ec.unmarshalOWorkload2githubᚗcomᚋlitmuschaosᚋlitmusᚋlitmusᚑportalᚋgraphqlᚑserverᚋgraphᚋmodelᚐWorkload(ctx, v)
return &res, err
}
func (ec *executionContext) marshalO__EnumValue2ᚕgithubᚗcomᚋ99designsᚋgqlgenᚋgraphqlᚋintrospectionᚐEnumValueᚄ(ctx context.Context, sel ast.SelectionSet, v []introspection.EnumValue) graphql.Marshaler {
if v == nil {
return graphql.Null

View File

@@ -43,7 +43,8 @@ input KubeObjectRequest {
Type of the Kubernetes object to be fetched
"""
objectType: String!
kubeObjRequest: KubeGVRRequest!
kubeObjRequest: [KubeGVRRequest]
workloads: [Workload]
}
input KubeGVRRequest {
@@ -51,3 +52,9 @@ input KubeGVRRequest {
version: String!
resource: String!
}
input Workload {
name: String!
kind: String!
namespace: String!
}

View File

@@ -533,8 +533,9 @@ type KubeObjectRequest struct {
// ID of the cluster in which the Kubernetes object is present
ClusterID string `json:"clusterID"`
// Type of the Kubernetes object to be fetched
ObjectType string `json:"objectType"`
KubeObjRequest *KubeGVRRequest `json:"kubeObjRequest"`
ObjectType string `json:"objectType"`
KubeObjRequest []*KubeGVRRequest `json:"kubeObjRequest"`
Workloads []*Workload `json:"workloads"`
}
// Response received for querying Kubernetes Object
@@ -931,8 +932,11 @@ type SSHKey struct {
PrivateKey string `json:"privateKey"`
}
// Response received for fetching GQL server version
type ServerVersionResponse struct {
Key string `json:"key"`
// Returns server version key
Key string `json:"key"`
// Returns server version value
Value string `json:"value"`
}
@@ -1291,6 +1295,12 @@ type WorkflowTemplate struct {
IsCustomWorkflow bool `json:"isCustomWorkflow"`
}
type Workload struct {
Name string `json:"name"`
Kind string `json:"kind"`
Namespace string `json:"namespace"`
}
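For illustration, a sketch of the wire-level JSON for the extended input (assuming the generated `KubeGVRRequest` model mirrors the schema's group/version/resource fields; values are placeholders):

```go
// Sketch: marshaling the extended request model.
req := model.KubeObjectRequest{
	ClusterID:  "cluster-abc",
	ObjectType: "kubeobject",
	KubeObjRequest: []*model.KubeGVRRequest{
		{Group: "apps", Version: "v1", Resource: "deployments"},
	},
	Workloads: []*model.Workload{
		{Name: "carts", Kind: "deployment", Namespace: "sock-shop"},
	},
}
b, _ := json.Marshal(req)
// b ≈ {"clusterID":"cluster-abc","objectType":"kubeobject",
//      "kubeObjRequest":[{"group":"apps","version":"v1","resource":"deployments"}],
//      "workloads":[{"name":"carts","kind":"deployment","namespace":"sock-shop"}]}
```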
type AuthType string
const (

View File

@@ -77,7 +77,7 @@ func init() {
logrus.Fatal(err)
}
// confirm version env is valid
if !strings.Contains(strings.ToLower(c.Version), cluster.CIVersion) {
if !strings.Contains(strings.ToLower(c.Version), cluster.CIVersion) && !strings.Contains(strings.ToLower(c.Version), "3.0-beta") {
splitCPVersion := strings.Split(c.Version, ".")
if len(splitCPVersion) != 3 {
logrus.Fatal("version doesn't follow semver semantic")

View File

@@ -11,7 +11,7 @@ require (
go.uber.org/zap v1.18.1
golang.org/x/crypto v0.0.0-20220315160706-3147a52a75dd // indirect
golang.org/x/lint v0.0.0-20200302205851-738671d3881b // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/tools v0.0.0-20210106214847-113979e3529a // indirect
golang.org/x/text v0.3.8 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

View File

@@ -84,7 +84,7 @@ github.com/xdg-go/stringprep v1.0.2 h1:6iq84/ryjjeRmMJwxutI51F2GIPlP5BfTvXHeYjyh
github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM=
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d h1:splanxYIlg+5LfHAM6xpdFEAYOk8iySO56hMFq6uLyA=
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.mongodb.org/mongo-driver v1.7.1 h1:jwqTeEM3x6L9xDXrCxN0Hbg7vdGfPBOTIkr0+/LYZDA=
go.mongodb.org/mongo-driver v1.7.1/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
@@ -102,25 +102,26 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220315160706-3147a52a75dd h1:XcWmESyNjXJMLahc3mqVQJcgSTDxFxhETVlfk9uGc38=
golang.org/x/crypto v0.0.0-20220315160706-3147a52a75dd/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b h1:Wh+f8QHJXR411sJR8/vRBTZ7YapZaRvUcLFFJhusH0k=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324VNlrF+0wMqRXT4St8ck=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -128,18 +129,20 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190419153524-e8e3143a4f4a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190531175056-4c3a928424d2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8 h1:nAL+RVCQ9uMn3vJZbV+MRnydTJFPf8qqY42YiA6MrqY=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190329151228-23e29df326fe/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
@@ -149,8 +152,8 @@ golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a h1:CB3a9Nez8M13wwlr/E2YtwoU+qYHKfC+JrDa45RXXoQ=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -113,8 +113,13 @@ func (m *UpgradeManager) getUpgradePath() map[string]UpgradeExecutor {
VersionManager: nil,
},
// latest version, no more upgrades available
"2.14.0": {
NextVersion: "3.0-beta1",
VersionManager: nil,
},
// latest version, no more upgrades available
"3.0-beta1": {
NextVersion: "",
VersionManager: nil,
},
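The upgrade path is a map of version to its successor; a minimal sketch (helper name hypothetical) of walking the chain from a current version to the terminal entry:

```go
// Sketch: follow NextVersion links until the empty string that marks
// the latest version; upgradePath mirrors getUpgradePath's return value.
func pendingUpgrades(current string, upgradePath map[string]UpgradeExecutor) []string {
	var chain []string
	for v := current; ; {
		next, ok := upgradePath[v]
		if !ok || next.NextVersion == "" {
			break
		}
		chain = append(chain, next.NextVersion)
		v = next.NextVersion
	}
	return chain
}

// pendingUpgrades("2.14.0", m) == []string{"3.0-beta1"}
```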

View File

@@ -0,0 +1,838 @@
### RBAC Manifests
## If SELF_AGENT="true" then these permissions are required to apply
## https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/1b_argo_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: argo-cr-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [pods, pods/exec]
verbs: [create, get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [configmaps]
verbs: [get, watch, list]
- apiGroups: [""]
resources: [persistentvolumeclaims]
verbs: [create, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers]
verbs: [get, list, watch, update, patch, delete, create]
- apiGroups: [argoproj.io]
resources: [workflowtemplates, workflowtemplates/finalizers, clusterworkflowtemplates, clusterworkflowtemplates/finalizers, workflowtasksets]
verbs: [get, list, watch]
- apiGroups: [argoproj.io]
resources: [workflowtaskresults]
verbs: [list, watch, deletecollection]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [get, list]
- apiGroups: [argoproj.io]
resources: [cronworkflows, cronworkflows/finalizers]
verbs: [get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [events]
verbs: [create, patch]
- apiGroups: [policy]
resources: [poddisruptionbudgets]
verbs: [create, get, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argo-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: argo-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
#these permissions are required to apply https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/2b_litmus_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-cluster-scope-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-clusterrole
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-cluster-scope-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [replicationcontrollers, secrets]
verbs: [get, list]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list]
- apiGroups: [batch]
resources: [jobs]
verbs: [get, list, deletecollection]
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [get, list]
- apiGroups: [""]
resources: [pods, configmaps, events, services]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: [apiextensions.k8s.io]
resources: [customresourcedefinitions]
verbs: [list, get]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines/finalizers"]
verbs: ["update"]
- apiGroups: [ "coordination.k8s.io" ]
resources: [ "leases" ]
verbs: [ "get","create","list","update","delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-cluster-scope-crb-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-clusterrolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-cluster-scope-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-cluster-scope-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
#these permissions are required to apply https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/3a_agents_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-admin-cr-for-litmusportal-server
labels:
name: litmus-admin-cr-for-litmusportal-server
rules:
# ***************************************************************************************
# Permissions needed by the chaos-runner to prepare and monitor the chaos resources
# ***************************************************************************************
# The chaos operator watches the chaosengine resource and orchestrates the chaos experiment..
## .. by creating the chaos-runner
# for creating and monitoring the chaos-runner pods
- apiGroups: [""]
resources: [pods,events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for fetching configmaps and secrets to inject into chaos-runner pod (if specified)
- apiGroups: [""]
resources: [secrets, configmaps]
verbs: [get, list]
# for tracking & getting logs of the pods created by chaos-runner to implement individual steps in the runner
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: [batch]
resources: [jobs]
verbs: [create, list, get, delete, deletecollection]
# ********************************************************************
# Permissions needed for creation and discovery of chaos experiments
# ********************************************************************
# The helper pods are created by experiment to perform the actual chaos injection ...
# ... for a period of chaos duration
# for creating and deleting the helper or target app pod and events by experiment
- apiGroups: [""]
resources: [pods]
verbs: [create, delete, deletecollection]
# for creating and monitoring the events for chaos operations
- apiGroups: [""]
resources: [events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for monitoring the helper and target app pod
- apiGroups: [""]
resources: [pods]
verbs: [get, list, patch, update]
# for creating and managing resources to execute commands inside the target container
- apiGroups: [""]
resources: [pods/exec, pods/eviction, replicationcontrollers]
verbs: [get,list,create]
# for tracking & getting logs of the pods created by experiment pod to implement individual steps in the experiment
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for creating and monitoring liveness services or monitoring target app services during chaos injection
- apiGroups: [""]
resources: [services]
verbs: [create, delete, get, list, deletecollection]
# for checking whether the app parent resources (deployments or statefulsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [list, get, patch, update, create, delete]
# for checking whether the app parent resources (replicasets) are eligible chaos candidates
- apiGroups: [apps]
resources: [replicasets]
verbs: [list, get]
# for checking whether the app parent resources (daemonsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [daemonsets]
verbs: [list, get, delete]
# for checking (openshift) app parent resources if they are eligible chaos candidates
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [list, get]
# for checking (argo) app parent resources if they are eligible chaos candidates
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [list, get]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [create, list, get, patch, update, delete]
# for the experiment to perform node status checks and other node-level operations like taint and drain
- apiGroups: [""]
resources: [nodes]
verbs: [patch, get, list, update]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-admin-crb-for-litmusportal-server
labels:
name: litmus-admin-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-admin-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-cr-for-litmusportal-server
rules:
# for managing the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods, services, namespaces]
verbs: [create, get, watch, patch, delete, list]
# for tracking & getting logs of the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods/log, secrets, configmaps]
verbs: [get, watch, create, delete, patch]
# for creation & deletion of application in predefined workflows
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [get, watch, patch, create, delete]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults, chaosschedules]
verbs: [create, list, get, patch, delete, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: subscriber-cr-for-litmusportal-server
namespace: litmus
labels:
name: subscriber-cr-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [configmaps, secrets]
verbs: [get, create, delete, update]
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
- apiGroups: [""]
resources: [pods, namespaces, nodes, services]
verbs: [get, list, watch]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosschedules, chaosresults]
verbs: [get, list, create, delete, update, watch]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers, workflowtemplates, workflowtemplates/finalizers, cronworkflows, cronworkflows/finalizers, clusterworkflowtemplates, clusterworkflowtemplates/finalizers, rollouts]
verbs: [get, list, create, delete, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: subscriber-crb-for-litmusportal-server
namespace: litmus
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
roleRef:
kind: ClusterRole
name: subscriber-cr-for-litmusportal-server
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: event-tracker-cr-for-litmusportal-server
rules:
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies]
verbs: [create, delete, get, list, patch, update, watch]
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies/status]
verbs: [get, patch, update]
- apiGroups: ["", extensions, apps]
resources: [deployments, daemonsets, statefulsets, pods, configmaps, secrets]
verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: event-tracker-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: event-tracker-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
# litmus-server-cr is used by the litmusportal-server
# If SELF_AGENT=false, then only litmus-server-cr and litmus-server-crb are required.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-server-cr
rules:
- apiGroups: [networking.k8s.io, extensions]
resources: [ingresses]
verbs: [get]
- apiGroups: [""]
resources: [services, nodes, pods/log]
verbs: [get, watch]
- apiGroups: [apiextensions.k8s.io]
resources: [customresourcedefinitions]
verbs: [create]
- apiGroups: [apps]
resources: [deployments]
verbs: [create]
- apiGroups: [""]
resources: [configmaps]
verbs: [get]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [create]
- apiGroups: [rbac.authorization.k8s.io]
resources: [rolebindings, roles, clusterrolebindings, clusterroles]
verbs: [create]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-server-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-server-cr
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
## Control plane manifests
---
apiVersion: v1
kind: Namespace
metadata:
name: litmus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-server-account
namespace: litmus
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
namespace: litmus
stringData:
JWT_SECRET: "litmus-portal@123"
DB_USER: "admin"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
namespace: litmus
data:
DB_SERVER: "mongodb://mongo-service:27017"
AGENT_SCOPE: cluster
AGENT_NAMESPACE: litmus
VERSION: "3.0-beta1"
SKIP_SSL_VERIFY: "false"
# Configurations if you are using dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
namespace: litmus
data:
default.conf: |
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_http_version 1.1;
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
location /auth/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-auth-server-service:9003/";
}
location /api/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
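# websocket endpoint; the Upgrade/Connection headers below use the
# $connection_upgrade map defined at the top of this file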
location /ws/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
namespace: litmus
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.0-beta1
imagePullPolicy: Always
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
ports:
- containerPort: 8080
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
namespace: litmus
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8080
selector:
component: litmusportal-frontend
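# a quick-access sketch once the service is up (the node port is assigned at
# creation time, so the value shown by kubectl will vary):
#   kubectl -n litmus get svc litmusportal-frontend-service
#   # then open http://<node-ip>:<node-port> in a browser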
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.0-beta1
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: SELF_AGENT
value: "true"
# if a self-signed certificate is used, pass the name of the k8s TLS secret created in the portal namespace to allow agents to use TLS for communication
- name: TLS_SECRET_NAME
value: ""
- name: LITMUS_PORTAL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CHAOS_CENTER_SCOPE
value: "cluster"
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.0-beta1"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.0-beta1"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.0-beta1"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.0-beta1"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.0-beta1"
- name: SERVER_SERVICE_NAME
value: "litmusportal-server-service"
- name: AGENT_DEPLOYMENTS
value: "[\"app=chaos-exporter\", \"name=chaos-operator\", \"app=event-tracker\", \"app=workflow-controller\"]"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SELF_AGENT_NODE_SELECTOR
value: ""
- name: SELF_AGENT_TOLERATIONS
value: ""
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: INGRESS
value: "false"
- name: INGRESS_NAME
value: "litmus-ingress"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: HUB_BRANCH_NAME
value: "v3.0-beta1"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service.litmus.svc.cluster.local"
- name: LITMUS_AUTH_GRPC_PORT
value: ":3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.0-beta1"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
ports:
- containerPort: 8080
- containerPort: 8000
imagePullPolicy: Always
serviceAccountName: litmus-server-account
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
namespace: litmus
spec:
type: NodePort
ports:
- name: graphql-server
port: 9002
targetPort: 8080
- name: graphql-rpc-server
port: 8000
targetPort: 8000
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
automountServiceAccountToken: false
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
containers:
- name: auth-server
image: litmuschaos/litmusportal-auth-server:3.0-beta1
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service.litmus.svc.cluster.local"
- name: LITMUS_GQL_GRPC_PORT
value: ":8000"
ports:
- containerPort: 3000
- containerPort: 3030
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
namespace: litmus
spec:
type: NodePort
ports:
- name: auth-server
port: 9003
targetPort: 3000
- name: auth-rpc-server
port: 3030
targetPort: 3030
selector:
component: litmusportal-auth-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
namespace: litmus
labels:
app: mongo
spec:
selector:
matchLabels:
component: database
serviceName: mongo-headless-service
replicas: 1
template:
metadata:
labels:
component: database
spec:
automountServiceAccountToken: false
containers:
- name: mongo
image: litmuschaos/mongo:4.2.8
securityContext:
# runAsUser: 2000
allowPrivilegeEscalation: false
# runAsNonRoot: true
args: ["--ipv6"]
ports:
- containerPort: 27017
imagePullPolicy: Always
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_USER
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_PASSWORD
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
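# the claim binds through the cluster's default StorageClass (assuming one exists);
# a quick check sketch: kubectl -n litmus get pvc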
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-service
namespace: litmus
spec:
ports:
- port: 27017
targetPort: 27017
selector:
component: database
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-headless-service
namespace: litmus
spec:
clusterIP: None
ports:
- port: 27017
targetPort: 27017
selector:
component: database

### RBAC Manifests
## If SELF_AGENT="true", then these permissions are required to apply
## https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/1b_argo_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: argo-cr-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [pods, pods/exec]
verbs: [create, get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [configmaps]
verbs: [get, watch, list]
- apiGroups: [""]
resources: [persistentvolumeclaims]
verbs: [create, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers]
verbs: [get, list, watch, update, patch, delete, create]
- apiGroups: [argoproj.io]
resources: [workflowtemplates, workflowtemplates/finalizers, clusterworkflowtemplates, clusterworkflowtemplates/finalizers, workflowtasksets]
verbs: [get, list, watch]
- apiGroups: [argoproj.io]
resources: [workflowtaskresults]
verbs: [list, watch, deletecollection]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [get, list]
- apiGroups: [argoproj.io]
resources: [cronworkflows, cronworkflows/finalizers]
verbs: [get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [events]
verbs: [create, patch]
- apiGroups: [policy]
resources: [poddisruptionbudgets]
verbs: [create, get, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argo-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: argo-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
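# a sketch for verifying the binding took effect, using standard kubectl impersonation:
#   kubectl auth can-i create workflows.argoproj.io -n litmus \
#     --as=system:serviceaccount:litmus:litmus-server-account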
# these permissions are required to apply https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/2b_litmus_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-cluster-scope-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-clusterrole
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-cluster-scope-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [replicationcontrollers, secrets]
verbs: [get, list]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list]
- apiGroups: [batch]
resources: [jobs]
verbs: [get, list, deletecollection]
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [get, list]
- apiGroups: [""]
resources: [pods, configmaps, events, services]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: [apiextensions.k8s.io]
resources: [customresourcedefinitions]
verbs: [list, get]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines/finalizers"]
verbs: ["update"]
- apiGroups: [ "coordination.k8s.io" ]
resources: [ "leases" ]
verbs: [ "get","create","list","update","delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-cluster-scope-crb-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-clusterrolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-cluster-scope-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-cluster-scope-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
# these permissions are required to apply https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/cluster/3a_agents_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-admin-cr-for-litmusportal-server
labels:
name: litmus-admin-cr-for-litmusportal-server
rules:
# ***************************************************************************************
# Permissions needed for preparing and monitoring the chaos resources by the chaos-runner
# ***************************************************************************************
# The chaos operator watches the chaosengine resource and orchestrates the chaos experiment
# by creating the chaos-runner
# for creating and monitoring the chaos-runner pods
- apiGroups: [""]
resources: [pods, events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for fetching configmaps and secrets to inject into chaos-runner pod (if specified)
- apiGroups: [""]
resources: [secrets, configmaps]
verbs: [get, list]
# for tracking & getting logs of the pods created by chaos-runner to implement individual steps in the runner
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: [batch]
resources: [jobs]
verbs: [create, list, get, delete, deletecollection]
# ********************************************************************
# Permissions needed for creation and discovery of chaos experiments
# ********************************************************************
# The helper pods are created by the experiment to perform the actual chaos injection
# for the duration of the chaos
# for creating and deleting the helper or target app pods and events by the experiment
- apiGroups: [""]
resources: [pods]
verbs: [create, delete, deletecollection]
# for creating and monitoring the events for chaos operations
- apiGroups: [""]
resources: [events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for monitoring the helper and target app pod
- apiGroups: [""]
resources: [pods]
verbs: [get, list, patch, update]
# for creating and managing the resources used to execute commands inside the target container
- apiGroups: [""]
resources: [pods/exec, pods/eviction, replicationcontrollers]
verbs: [get, list, create]
# for tracking & getting logs of the pods created by experiment pod to implement individual steps in the experiment
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for creating and monitoring liveness services or monitoring target app services during chaos injection
- apiGroups: [""]
resources: [services]
verbs: [create, delete, get, list, deletecollection]
# for checking whether the app parent resources (deployments or statefulsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [list, get, patch, update, create, delete]
# for checking whether the app parent resources (replicasets) are eligible chaos candidates
- apiGroups: [apps]
resources: [replicasets]
verbs: [list, get]
# for checking whether the app parent resources (daemonsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [daemonsets]
verbs: [list, get, delete]
# for checking (openshift) app parent resources if they are eligible chaos candidates
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [list, get]
# for checking (argo) app parent resources if they are eligible chaos candidates
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [list, get]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [create, list, get, patch, update, delete]
# for the experiment to perform node status checks and other node-level operations like taint and drain
- apiGroups: [""]
resources: [nodes]
verbs: [patch, get, list, update]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-admin-crb-for-litmusportal-server
labels:
name: litmus-admin-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-admin-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: chaos-cr-for-litmusportal-server
rules:
# for managing the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods, services, namespaces]
verbs: [create, get, watch, patch, delete, list]
# for tracking & getting logs of the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods/log, secrets, configmaps]
verbs: [get, watch, create, delete, patch]
# for creation & deletion of application in predefined workflows
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [get, watch, patch, create, delete]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults, chaosschedules]
verbs: [create, list, get, patch, delete, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: chaos-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: chaos-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: subscriber-cr-for-litmusportal-server
namespace: litmus
labels:
name: subscriber-cr-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [configmaps, secrets]
verbs: [get, create, delete, update]
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
- apiGroups: [""]
resources: [pods, namespaces, nodes, services]
verbs: [get, list, watch]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosschedules, chaosresults]
verbs: [get, list, create, delete, update, watch]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers, workflowtemplates, workflowtemplates/finalizers, cronworkflows, cronworkflows/finalizers, clusterworkflowtemplates, clusterworkflowtemplates/finalizers, rollouts]
verbs: [get, list, create, delete, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: subscriber-crb-for-litmusportal-server
namespace: litmus
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
roleRef:
kind: ClusterRole
name: subscriber-cr-for-litmusportal-server
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: event-tracker-cr-for-litmusportal-server
rules:
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies]
verbs: [create, delete, get, list, patch, update, watch]
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies/status]
verbs: [get, patch, update]
- apiGroups: ["", extensions, apps]
resources: [deployments, daemonsets, statefulsets, pods, configmaps, secrets]
verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: event-tracker-crb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: event-tracker-cr-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
# litmus-server-cr is used by the litmusportal-server
# If SELF_AGENT=false, then only litmus-server-cr and litmus-server-crb are required.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-server-cr
rules:
- apiGroups: [networking.k8s.io, extensions]
resources: [ingresses]
verbs: [get]
- apiGroups: [""]
resources: [services, nodes, pods/log]
verbs: [get, watch]
- apiGroups: [apiextensions.k8s.io]
resources: [customresourcedefinitions]
verbs: [create]
- apiGroups: [apps]
resources: [deployments]
verbs: [create]
- apiGroups: [""]
resources: [configmaps]
verbs: [get]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [create]
- apiGroups: [rbac.authorization.k8s.io]
resources: [rolebindings, roles, clusterrolebindings, clusterroles]
verbs: [create]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-server-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-server-cr
subjects:
- kind: ServiceAccount
name: litmus-server-account
namespace: litmus
## Control plane manifests
---
apiVersion: v1
kind: Namespace
metadata:
name: litmus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-server-account
namespace: litmus
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
namespace: litmus
stringData:
JWT_SECRET: "litmus-portal@123"
DB_USER: "admin"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
namespace: litmus
data:
DB_SERVER: "mongodb://mongo-service:27017"
AGENT_SCOPE: cluster
AGENT_NAMESPACE: litmus
VERSION: "3.0-beta1"
SKIP_SSL_VERIFY: "false"
# Configuration if you are using Dex for OAuth
DEX_ENABLED: "false"
OIDC_ISSUER: "http://<Your Domain>:32000"
DEX_OAUTH_CALLBACK_URL: "http://<litmus-portal frontend exposed URL>:8080/auth/dex/callback"
DEX_OAUTH_CLIENT_ID: "LitmusPortalAuthBackend"
DEX_OAUTH_CLIENT_SECRET: "ZXhhbXBsZS1hcHAtc2VjcmV0"
OAuthJwtSecret: "litmus-oauth@123"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
namespace: litmus
data:
default.conf: |
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_http_version 1.1;
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
location /auth/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-auth-server-service:9003/";
}
location /api/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
location /ws/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
namespace: litmus
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.0-beta1
imagePullPolicy: Always
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
ports:
- containerPort: 8080
resources:
requests:
memory: "150Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
namespace: litmus
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8080
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
namespace: litmus
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
resources:
requests:
memory: "150Mi"
cpu: "25m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "250m"
ephemeral-storage: "1Gi"
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.0-beta1
volumeMounts:
- mountPath: /tmp/
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: SELF_AGENT
value: "true"
# if a self-signed certificate is used, pass the name of the k8s TLS secret created in the portal namespace to allow agents to use TLS for communication
- name: TLS_SECRET_NAME
value: ""
- name: LITMUS_PORTAL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CHAOS_CENTER_SCOPE
value: "cluster"
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.0-beta1"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.0-beta1"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.0-beta1"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.0-beta1"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.0-beta1"
- name: SERVER_SERVICE_NAME
value: "litmusportal-server-service"
- name: AGENT_DEPLOYMENTS
value: "[\"app=chaos-exporter\", \"name=chaos-operator\", \"app=event-tracker\", \"app=workflow-controller\"]"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SELF_AGENT_NODE_SELECTOR
value: ""
- name: SELF_AGENT_TOLERATIONS
value: ""
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: INGRESS
value: "false"
- name: INGRESS_NAME
value: "litmus-ingress"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: HUB_BRANCH_NAME
value: "v3.0-beta1"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service.litmus.svc.cluster.local"
- name: LITMUS_AUTH_GRPC_PORT
value: ":3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.0-beta1"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
ports:
- containerPort: 8080
- containerPort: 8000
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
serviceAccountName: litmus-server-account
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
namespace: litmus
spec:
type: NodePort
ports:
- name: graphql-server
port: 9002
targetPort: 8080
- name: graphql-rpc-server
port: 8000
targetPort: 8000
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
namespace: litmus
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
automountServiceAccountToken: false
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
resources:
requests:
memory: "150Mi"
cpu: "25m"
ephemeral-storage: "500Mi"
limits:
memory: "225Mi"
cpu: "250m"
ephemeral-storage: "1Gi"
containers:
- name: auth-server
image: litmuschaos/litmusportal-auth-server:3.0-beta1
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service.litmus.svc.cluster.local"
- name: LITMUS_GQL_GRPC_PORT
value: ":8000"
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
ports:
- containerPort: 3000
- containerPort: 3030
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
namespace: litmus
spec:
type: NodePort
ports:
- name: auth-server
port: 9003
targetPort: 3000
- name: auth-rpc-server
port: 3030
targetPort: 3030
selector:
component: litmusportal-auth-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
namespace: litmus
labels:
app: mongo
spec:
selector:
matchLabels:
component: database
serviceName: mongo-headless-service
replicas: 1
template:
metadata:
labels:
component: database
spec:
automountServiceAccountToken: false
containers:
- name: mongo
image: litmuschaos/mongo:4.2.8
securityContext:
# runAsUser: 2000
allowPrivilegeEscalation: false
# runAsNonRoot: true
args: ["--ipv6"]
ports:
- containerPort: 27017
imagePullPolicy: Always
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
resources:
requests:
memory: "550Mi"
cpu: "225m"
ephemeral-storage: "1Gi"
limits:
memory: "1Gi"
cpu: "750m"
ephemeral-storage: "3Gi"
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_USER
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_PASSWORD
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-service
namespace: litmus
spec:
ports:
- port: 27017
targetPort: 27017
selector:
component: database
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-headless-service
namespace: litmus
spec:
clusterIP: None
ports:
- port: 27017
targetPort: 27017
selector:
component: database

### RBAC Manifests
## If SELF_AGENT="true", then these permissions are required to apply
## https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/namespace/1b_argo_rbac.yaml
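## The manifests below are the namespace-scoped variant: Roles/RoleBindings instead of
## cluster-wide RBAC. A sketch for applying them into a chosen namespace (file name
## hypothetical): kubectl apply -f litmus-portal-namespaced.yaml -n <target-namespace>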
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argo-role-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [pods, pods/exec]
verbs: [create, get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [configmaps]
verbs: [get, watch, list]
- apiGroups: [""]
resources: [persistentvolumeclaims]
verbs: [create, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers]
verbs: [get, list, watch, update, patch, delete, create]
- apiGroups: [argoproj.io]
resources: [workflowtemplates, workflowtemplates/finalizers, workflowtasksets]
verbs: [get, list, watch]
- apiGroups: [argoproj.io]
resources: [workflowtaskresults]
verbs: [list, watch, deletecollection]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [get, list]
- apiGroups: [""]
resources: [secrets]
verbs: [get]
- apiGroups: [argoproj.io]
resources: [cronworkflows, cronworkflows/finalizers]
verbs: [get, list, watch, update, patch, delete]
- apiGroups: [""]
resources: [events]
verbs: [create, patch]
- apiGroups: [policy]
resources: [poddisruptionbudgets]
verbs: [create, get, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: argo-rb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argo-role-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: litmus-namespace-scope-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-role
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-namespace-scope-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [replicationcontrollers, secrets]
verbs: [get, list]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list, update]
- apiGroups: [batch]
resources: [jobs]
verbs: [get, list, create, deletecollection]
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [get, list]
- apiGroups: [""]
resources: [pods, pods/exec, configmaps, events, services]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [get, create, update, patch, delete, list, watch, deletecollection]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines/finalizers"]
verbs: ["update"]
- apiGroups: [ "coordination.k8s.io" ]
resources: [ "leases" ]
verbs: [ "get","create","list","update","delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: litmus-namespace-scope-rb-for-litmusportal-server
labels:
app.kubernetes.io/name: litmus
# provide unique instance-id if applicable
# app.kubernetes.io/instance: litmus-abcxzy
app.kubernetes.io/version: 3.0-beta1
app.kubernetes.io/component: operator-rolebinding
app.kubernetes.io/part-of: litmus
app.kubernetes.io/managed-by: kubectl
name: litmus-namespace-scope-rb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: litmus-namespace-scope-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
# these permissions are required to apply https://github.com/litmuschaos/litmus/blob/master/litmus-portal/graphql-server/manifests/namespace/3a_agents_rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: subscriber-role-for-litmusportal-server
labels:
name: subscriber-role-for-litmusportal-server
rules:
- apiGroups: [""]
resources: [configmaps, secrets]
verbs: [get, create, delete, update]
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
- apiGroups: [""]
resources: [pods, services]
verbs: [get, list, watch]
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosschedules, chaosresults]
verbs: [get, list, create, delete, update, watch]
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [get, list]
- apiGroups: [apps]
resources: [deployments, daemonsets, replicasets, statefulsets]
verbs: [get, list, delete]
- apiGroups: [argoproj.io]
resources: [workflows, workflows/finalizers, workflowtemplates, workflowtemplates/finalizers, cronworkflows, cronworkflows/finalizers, rollouts]
verbs: [get, list, create, delete, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: subscriber-rb-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
roleRef:
kind: Role
name: subscriber-role-for-litmusportal-server
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: litmus-admin-role-for-litmusportal-server
labels:
name: litmus-admin-role-for-litmusportal-server
rules:
# ***************************************************************************************
# Permissions needed for preparing and monitoring the chaos resources by the chaos-runner
# ***************************************************************************************
# The chaos operator watches the chaosengine resource and orchestrates the chaos experiment
# by creating the chaos-runner
# for creating and monitoring the chaos-runner pods
- apiGroups: [""]
resources: [pods, events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for fetching configmaps and secrets to inject into chaos-runner pod (if specified)
- apiGroups: [""]
resources: [secrets, configmaps]
verbs: [get, list]
# for tracking & getting logs of the pods created by chaos-runner to implement individual steps in the runner
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for configuring and monitoring the experiment job by the chaos-runner pod
- apiGroups: [batch]
resources: [jobs]
verbs: [create, list, get, delete, deletecollection]
# ********************************************************************
# Permissions needed for creation and discovery of chaos experiments
# ********************************************************************
# The helper pods are created by the experiment to perform the actual chaos injection
# for the duration of the chaos
# for creating and deleting the helper or target app pods and events by the experiment
- apiGroups: [""]
resources: [pods]
verbs: [create, delete, deletecollection]
# for creating and monitoring the events for chaos operations
- apiGroups: [""]
resources: [events]
verbs: [create, delete, get, list, patch, update, deletecollection]
# for monitoring the helper and target app pod
- apiGroups: [""]
resources: [pods]
verbs: [get, list, patch, update]
# for creating and managing the resources used to execute commands inside the target container
- apiGroups: [""]
resources: [pods/exec, pods/eviction, replicationcontrollers]
verbs: [get, list, create]
# for tracking & getting logs of the pods created by experiment pod to implement individual steps in the experiment
- apiGroups: [""]
resources: [pods/log]
verbs: [get, list, watch]
# for creating and monitoring liveness services or monitoring target app services during chaos injection
- apiGroups: [""]
resources: [services]
verbs: [create, delete, get, list, deletecollection]
# for checking whether the app parent resources (deployments or statefulsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [list, get, patch, update, create, delete]
# for checking whether the app parent resources (replicasets) are eligible chaos candidates
- apiGroups: [apps]
resources: [replicasets]
verbs: [list, get]
# for checking whether the app parent resources (daemonsets) are eligible chaos candidates
- apiGroups: [apps]
resources: [daemonsets]
verbs: [list, get, delete]
# for checking (openshift) app parent resources if they are eligible chaos candidates
- apiGroups: [apps.openshift.io]
resources: [deploymentconfigs]
verbs: [list, get]
# for checking (argo) app parent resources if they are eligible chaos candidates
- apiGroups: [argoproj.io]
resources: [rollouts]
verbs: [list, get]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults]
verbs: [create, list, get, patch, update, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: litmus-admin-rb-for-litmusportal-server
labels:
name: litmus-admin-rb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: litmus-admin-role-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: chaos-role-for-litmusportal-server
rules:
# for managing the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods, services]
verbs: [create, get, watch, patch, delete, list]
# for tracking & getting logs of the pods created by workflow controller to implement individual steps in the workflow
- apiGroups: [""]
resources: [pods/log, secrets, configmaps]
verbs: [get, watch, create, delete, patch]
# for creation & deletion of application in predefined workflows
- apiGroups: [apps]
resources: [deployments, statefulsets]
verbs: [get, watch, patch, create, delete]
# for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
- apiGroups: [litmuschaos.io]
resources: [chaosengines, chaosexperiments, chaosresults, chaosschedules]
verbs: [create, list, get, patch, delete, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: chaos-rb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: chaos-role-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: event-tracker-role-for-litmusportal-server
rules:
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies]
verbs: [create, delete, get, list, patch, update, watch]
- apiGroups: [eventtracker.litmuschaos.io]
resources: [eventtrackerpolicies/status]
verbs: [get, patch, update]
- apiGroups: [""]
resources: [pods, configmaps, secrets]
verbs: [get, list, watch]
- apiGroups: [extensions, apps]
resources: [deployments, daemonsets, statefulsets]
verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: event-tracker-rb-for-litmusportal-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: event-tracker-role-for-litmusportal-server
subjects:
- kind: ServiceAccount
name: litmus-server-account
# litmus-server-role is used by the litmusportal-server
# If SELF_AGENT=false, then only litmus-server-role and litmus-server-rb are required.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: litmus-server-role
rules:
- apiGroups: [networking.k8s.io, extensions]
resources: [ingresses]
verbs: [get]
- apiGroups: [""]
resources: [services, pods/log]
verbs: [get, watch]
- apiGroups: [apps]
resources: [deployments]
verbs: [create]
- apiGroups: [""]
resources: [configmaps]
verbs: [get]
- apiGroups: [""]
resources: [serviceaccounts]
verbs: [create]
- apiGroups: [rbac.authorization.k8s.io]
resources: [rolebindings, roles]
verbs: [create]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: litmus-server-rb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: litmus-server-role
subjects:
- kind: ServiceAccount
name: litmus-server-account
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-server-account
---
apiVersion: v1
kind: Secret
metadata:
name: litmus-portal-admin-secret
stringData:
JWT_SECRET: "litmus-portal@123"
DB_USER: "admin"
DB_PASSWORD: "1234"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmus-portal-admin-config
data:
AGENT_SCOPE: namespace
DB_SERVER: "mongodb://mongo-service:27017"
VERSION: "3.0-beta1"
SKIP_SSL_VERIFY: "false"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: litmusportal-frontend-nginx-configuration
data:
default.conf: |
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_http_version 1.1;
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
location /auth/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-auth-server-service:9003/";
}
location /api/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
location /ws/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass "http://litmusportal-server-service:9002/";
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-frontend
labels:
component: litmusportal-frontend
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-frontend
template:
metadata:
labels:
component: litmusportal-frontend
spec:
automountServiceAccountToken: false
containers:
- name: litmusportal-frontend
image: litmuschaos/litmusportal-frontend:3.0-beta1
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
volumes:
- name: nginx-config
configMap:
name: litmusportal-frontend-nginx-configuration
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-frontend-service
spec:
type: NodePort
ports:
- name: http
port: 9091
targetPort: 8080
selector:
component: litmusportal-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-server
labels:
component: litmusportal-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-server
template:
metadata:
labels:
component: litmusportal-server
spec:
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
resources:
requests:
memory: "150Mi"
cpu: "25m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "250m"
ephemeral-storage: "1Gi"
volumes:
- name: gitops-storage
emptyDir: {}
- name: hub-storage
emptyDir: {}
containers:
- name: graphql-server
image: litmuschaos/litmusportal-server:3.0-beta1
volumeMounts:
- mountPath: /tmp/gitops
name: gitops-storage
- mountPath: /tmp/version
name: hub-storage
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: LITMUS_PORTAL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AGENT_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SELF_AGENT
value: "true"
- name: SELF_AGENT_NODE_SELECTOR
value: ""
- name: SELF_AGENT_TOLERATIONS
value: ""
# if a self-signed certificate is used, pass the base64-encoded TLS certificate to allow agents to use TLS for communication
- name: TLS_CERT_B64
value: ""
- name: CHAOS_CENTER_SCOPE
value: "namespace"
- name: AGENT_DEPLOYMENTS
value: "[\"app=chaos-exporter\", \"name=chaos-operator\", \"app=event-tracker\", \"app=workflow-controller\"]"
- name: SERVER_SERVICE_NAME
value: "litmusportal-server-service"
- name: CHAOS_CENTER_UI_ENDPOINT
value: ""
- name: SUBSCRIBER_IMAGE
value: "litmuschaos/litmusportal-subscriber:3.0-beta1"
- name: EVENT_TRACKER_IMAGE
value: "litmuschaos/litmusportal-event-tracker:3.0-beta1"
- name: ARGO_WORKFLOW_CONTROLLER_IMAGE
value: "litmuschaos/workflow-controller:v3.3.1"
- name: ARGO_WORKFLOW_EXECUTOR_IMAGE
value: "litmuschaos/argoexec:v3.3.1"
- name: LITMUS_CHAOS_OPERATOR_IMAGE
value: "litmuschaos/chaos-operator:3.0-beta1"
- name: LITMUS_CHAOS_RUNNER_IMAGE
value: "litmuschaos/chaos-runner:3.0-beta1"
- name: LITMUS_CHAOS_EXPORTER_IMAGE
value: "litmuschaos/chaos-exporter:3.0-beta1"
- name: CONTAINER_RUNTIME_EXECUTOR
value: "k8sapi"
- name: HUB_BRANCH_NAME
value: "v3.0-beta1"
- name: LITMUS_AUTH_GRPC_ENDPOINT
value: "litmusportal-auth-server-service"
- name: LITMUS_AUTH_GRPC_PORT
value: ":3030"
- name: WORKFLOW_HELPER_IMAGE_VERSION
value: "3.0-beta1"
- name: REMOTE_HUB_MAX_SIZE
value: "5000000"
- name: INGRESS
value: "false"
- name: INGRESS_NAME
value: "litmus-ingress"
ports:
- containerPort: 8080
- containerPort: 8000
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "225m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
serviceAccountName: litmus-server-account
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-server-service
spec:
type: NodePort
ports:
- name: graphql-server
port: 9002
targetPort: 8080
- name: graphql-rpc-server
port: 8000
targetPort: 8000
selector:
component: litmusportal-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: litmusportal-auth-server
labels:
component: litmusportal-auth-server
spec:
replicas: 1
selector:
matchLabels:
component: litmusportal-auth-server
template:
metadata:
labels:
component: litmusportal-auth-server
spec:
automountServiceAccountToken: false
initContainers:
- name: wait-for-mongodb
image: litmuschaos/curl:3.0-beta1
command: ["/bin/sh", "-c"]
args:
[
"while [[ $(curl -sw '%{http_code}' http://mongo-service:27017 -o /dev/null) -ne 200 ]]; do sleep 5; echo 'Waiting for the MongoDB to be ready...'; done; echo 'Connection with MongoDB established'",
]
resources:
requests:
memory: "150Mi"
cpu: "25m"
ephemeral-storage: "500Mi"
limits:
memory: "512Mi"
cpu: "250m"
ephemeral-storage: "1Gi"
containers:
- name: auth-server
image: litmuschaos/litmusportal-auth-server:3.0-beta1
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
env:
- name: STRICT_PASSWORD_POLICY
value: "false"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "litmus"
- name: LITMUS_GQL_GRPC_ENDPOINT
value: "litmusportal-server-service"
- name: LITMUS_GQL_GRPC_PORT
value: ":8000"
ports:
- containerPort: 3000
- containerPort: 3030
imagePullPolicy: Always
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "1Gi"
---
apiVersion: v1
kind: Service
metadata:
name: litmusportal-auth-server-service
spec:
type: NodePort
ports:
- name: auth-server
port: 9003
targetPort: 3000
- name: auth-rpc-server
port: 3030
targetPort: 3030
selector:
component: litmusportal-auth-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
component: database
serviceName: mongo-headless-service
replicas: 1
template:
metadata:
labels:
component: database
spec:
automountServiceAccountToken: false
containers:
- name: mongo
image: litmuschaos/mongo:4.2.8
securityContext:
# runAsUser: 2000
allowPrivilegeEscalation: false
args: ["--ipv6"]
ports:
- containerPort: 27017
imagePullPolicy: Always
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_USER
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: litmus-portal-admin-secret
key: DB_PASSWORD
resources:
requests:
memory: "250Mi"
cpu: "125m"
ephemeral-storage: "500Mi"
limits:
memory: "712Mi"
cpu: "550m"
ephemeral-storage: "3Gi"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-service
spec:
ports:
- port: 27017
targetPort: 27017
selector:
component: database
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-headless-service
spec:
clusterIP: None
ports:
- port: 27017
targetPort: 27017
selector:
component: database

---
apiVersion: batch/v1
kind: Job
metadata:
name: upgrade-agent
spec:
ttlSecondsAfterFinished: 60
backoffLimit: 0
template:
spec:
containers:
- name: upgrade-agent
image: litmuschaos/upgrade-agent-cp:3.0-beta1
envFrom:
- configMapRef:
name: litmus-portal-admin-config
- secretRef:
name: litmus-portal-admin-secret
imagePullPolicy: Always
restartPolicy: Never
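# a usage sketch (file name hypothetical); ttlSecondsAfterFinished removes the Job
# 60s after it completes:
#   kubectl -n litmus apply -f upgrade-agent.yaml
#   kubectl -n litmus logs job/upgrade-agent -f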