Compare commits

...

34 Commits

Author SHA1 Message Date
sharvarikhamkar1304 fe8cec4075
feat : Added helm chart for ingress management (#285)
* Add ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Move ingress-management chart under charts directory

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Create README.md

* Update README.md

* Update README.md

* Update README.md

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix: Update Chart.yaml metadata with repo and maintainer info

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix maintainer name

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* update

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Delete charts/ingress-management/etc directory

* Delete charts/ingress-management/LICENSE

* Update README.md

* Update README.md

* fix: update helm chart for ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Fix: update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

---------

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>
2025-08-05 13:41:19 +05:30
Abhishek Dubey 22a3df98c8
[COE][Add] Added helm chart for worker based deployment (#281)
* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-07-04 00:52:37 +05:30
Prashantdev780 9a6b440441
Helm fix for liveliness probe and readiness probe (#279) 2025-06-30 20:07:40 +05:30
Sandeep Rawat 6306426bed
Merge pull request #278 from OT-CONTAINER-KIT/helm_fix
Fixed secrets block in deployment.yml of microservice
2025-06-30 18:34:38 +05:30
Prashantdev780 ff645ddecd
Update Chart.yaml 2025-06-30 18:32:37 +05:30
Prashant Sharma 3619d89003 Fixed secrets block in deployment.yml of microservice 2025-06-30 17:13:05 +05:30
Abhishek Dubey 7706b2da4f
Added helm chart for web service deployment (#271)
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-02-10 16:51:35 +05:30
hiteshmakol1 eaa6fc3aa6
Karpenter 0.3.0 Modifications (#270)
* Added new code for Karpenter 0.3.0 including new chart version, interruptionQueue parameter, modified README, modified template yaml

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>

* Fixed Linting for Chart.yaml

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-14 12:08:56 +05:30
Abhishek Dubey d03ba9f362
Added base helm chart for k8s (#269)
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-13 13:10:37 +05:30
Abhishek Dubey 57fbc499a4
Fixed CI steps for testing chart (#268)
* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-13 12:31:44 +05:30
hiteshmakol1 d50ec64e5c
Karpenter 0.2.0 Modifications (#266)
* Added Comment in Chart, Changed example.yaml file , Added comments in values file

* Incorporated Review Comments
2025-01-08 11:55:00 +05:30
Abhishek Dubey 92971b2ab9
Update release.yaml 2025-01-07 15:44:49 +05:30
Abhishek Dubey d491b314b2
Update release.yaml 2025-01-07 15:42:45 +05:30
Abhishek Dubey c8fac9acd1
Update ct.yaml 2025-01-07 15:40:48 +05:30
hiteshmakol1 021f437068
Enhanced README for the Karpenter Helm chart (#264)
* Modifie README for karpenter helm chart

* Incorporated Review Comments
2025-01-07 15:12:15 +05:30
hiteshmakol1 44318da8ef
Addition of Readme File for Helm Chart (#263)
* ADDED README FILE

* Removed Extra nodepool yaml file, fixed spacing

* Added example field and comments in values.yaml

* Add comments in ReadMe File

* Modifed README file

* Added example in README file

* Added Example folder and modified README file
2025-01-02 15:02:52 +05:30
hiteshmakol1 905122a30a
Added fix for karpenter (#261)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments

* Added nodePool YAML template , modified values yaml

* Modified nodePool and values yaml files

* Added comments in values.yaml

---------

Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2024-12-31 13:24:34 +05:30
hiteshmakol1 088280e7c8
Added support for karpenter helm (#260)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments

* Added nodePool YAML template , modified values yaml

* Added functionality to loop over different node pools

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2024-12-30 23:42:29 +05:30
hiteshmakol1 b84aaf41ee
Karpenter Helm Chart (#256)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments
2024-12-27 13:29:25 +05:30
tarunsinghot 03d5a2dfcd
Added Percona MongoDB helm chart with backup support (#230)
* added helm chart for psmdb operator and db

* tested backup and restore

* updated doc

* added end of line in all files

* updated values file

* added templates for backup and restore files

* worked on feedbacks
2024-12-13 17:37:23 +05:30
Shubham Gupta 5dbad2900f
Remove (#246)
redis-operator, redis-cluster, redis-replication, redis-sentinel helm chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2024-12-13 16:58:36 +05:30
Ashwani Singh f105d47a46
Fix the parameter variable type 2024-10-15 12:50:29 +05:30
Ashwani Singh a1c4cd456a
Fix the variable name for msteams 2024-10-15 12:42:46 +05:30
Ashwani Singh bdc8a635d4
MS Team notification 2024-10-12 11:24:44 +05:30
Ashwani Singh 9ae246a8ad
Merge pull request #248 from OT-CONTAINER-KIT/fix-hardcoded-values
Remove hardcoded values
2024-09-29 22:16:40 +05:30
Ashwani Singh 150a3a3703 Bump helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh 2b71e438dc Remove extra line
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh 8ebc884cdc Remove hardcoded values
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh a524e4d67e
Merge pull request #247 from OT-CONTAINER-KIT/victoriametrics
Victoriametrics
2024-09-23 10:10:41 +05:30
Ashwani Singh 6970ab0047 Fix the helm lint
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh d12dafe780 New release for victoriametrics
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh 595310ad8f Tune the victoriametrics storage
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh b1addb8156 Create victoriametrics chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh a443cc1027 VictoriaMetrics
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
99 changed files with 2763 additions and 33974 deletions

View File

@@ -8,14 +8,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.5.4
version: v3.16.2
- uses: actions/setup-python@v4
with:

View File

@@ -25,3 +25,4 @@ jobs:
VALIDATE_YAML: false
DEFAULT_BRANCH: main
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FILTER_REGEX_EXCLUDE: .*(README\.md|NOTES.txt).*

View File

@@ -25,7 +25,7 @@ jobs:
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.5.4
version: v3.16.2
- uses: actions/setup-python@v4
with:
@@ -42,8 +42,8 @@ jobs:
- name: Update Helm Repositories
run: helm repo update
- name: Update Chart Dependencies for redis-operator
run: helm dependency update charts/redis-operator
- name: Update Chart Dependencies for karpenter
run: helm dependency update charts/karpenter
- name: List Changed Charts
id: list-changed

View File

@@ -8,34 +8,30 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1.5.0
uses: helm/kind-action@v1.8.0
with:
cluster_name: kind
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.5.4
version: v3.16.2
- name: Install yq
run: |
sudo snap install yq
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Install and test Redis Related Helm charts
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Install and test Helm charts
run: |
kubectl cluster-info --context kind-kind
chart_dirs=("redis-operator" "redis" "redis-cluster" "redis-replication" "redis-sentinel")
for dir in "${chart_dirs[@]}"
do
if [[ -f ./charts/$dir/Chart.yaml ]]; then
helm dependency update ./charts/$dir/
fi
chart_version=$(yq e .version ./charts/$dir/Chart.yaml)
echo "Installing $dir chart with version $chart_version..."
helm install $dir ./charts/$dir/
helm test $dir
done
echo "Listing installed Helm charts..."
changed=$(ct list-changed --config ct.yaml)
ct install --config ct.yaml || true

View File

@@ -14,14 +14,6 @@ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
You can then run `helm search repo ot-helm` to see the charts.
### Helm Charts List
Currently supported helm charts are:-
- [Redis Operator](./charts/redis-operator)
- [Redis Standalone](./charts/redis)
- [Redis Cluster](./charts/redis-cluster)
- [K8s Vault Webhook](./charts/k8s-vault-webhook)
### Pre-Requisities

21
charts/base/Chart.yaml Normal file
View File

@@ -0,0 +1,21 @@
---
apiVersion: v1
description: A base helm chart which will be used by different helm charts
engine: gotpl
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: base
sources:
- https://github.com/ot-container-kit/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

28
charts/base/README.md Normal file
View File

@@ -0,0 +1,28 @@
# base
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A base helm chart which will be used by different helm charts.
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|----------------------------|----------------------------------------|
| iamabhishek-dubey | abhishek.dubey@opstree.com | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Values
| Key | Type | Default | Description |
|----------------------------|--------|---------|--------------------------------------------------------------------------------|
| config | object | `{}` | ConfigMap key value pair to create configs |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the fullname template |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)

View File

@@ -0,0 +1,13 @@
{{- define "configmap" -}}
{{- if .Values.base.config -}}
{{- $top := . -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
data:
{{- toYaml .Values.base.config | nindent 2 -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,42 @@
{{/*
Create a default fully qualified app name.
We truncate service name aka .Release.Name at 59 chars because some Kubernetes name fields are limited to 63 (by the DNS naming spec).
We append 4 characters for chart type at the end which is -web or -crn or -wrk or -job or -sts.
*/}}
{{- define "base.fullname" -}}
{{- $name := .Release.Name | trunc 59 | trimSuffix "-" }}
{{- printf "%s-%s" $name .Chart.Name }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "base.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "base.labels" -}}
helm.sh/chart: {{ include "base.chart" . }}
{{ include "base.selectorLabels" . }}
{{- if .Release.Revision }}
app.kubernetes.io/version: {{ .Release.Revision | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "base.selectorLabels" -}}
app.kubernetes.io/name: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "base.serviceAccountName" -}}
{{- default (include "base.fullname" .) .Values.base.serviceAccount.name }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- define "serviceAccount" -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "base.serviceAccountName" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
{{- with .Values.base.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

13
charts/base/values.yaml Normal file
View File

@@ -0,0 +1,13 @@
# Default values for base template.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
serviceAccount:
# -- Annotations to add to the service account
annotations: {}
# -- The name of the service account to use.
# -- If not set and create is true, a name is generated using the fullname template
name: ""
# -- ConfigMap key value pair to create configs
config: {}
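
The helpers above read everything under `.Values.base.*` and expose the named templates `configmap` and `serviceAccount`, so a consuming chart is expected to declare `base` as a dependency and nest its overrides under a `base:` key. A minimal sketch of such a consumer (the chart name `my-service` and the config keys are purely illustrative):

```yaml
# Chart.yaml of a hypothetical consumer chart
apiVersion: v2
name: my-service
version: 0.1.0
dependencies:
  - name: base
    version: 0.1.0
    repository: https://ot-container-kit.github.io/helm-charts
---
# values.yaml of the consumer; the base helpers read .Values.base.*
base:
  serviceAccount:
    annotations: {}
    name: ""                 # empty, so base.serviceAccountName falls back to base.fullname
  config:
    APP_MODE: "production"   # key/value pairs rendered into the ConfigMap data block
---
# templates/configmap.yaml of the consumer simply pulls in the named template:
#   {{ include "configmap" . }}
```

Because the include is evaluated with the parent chart's context, the rendered ConfigMap and ServiceAccount names should come out as `<release-name>-my-service`.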

View File

@@ -0,0 +1,16 @@
apiVersion: v2
name: ingress-management
description: A Helm chart to manage Ingress traffic
version: 0.1.0
appVersion: "1.0"
home: https://github.com/ot-container-kit/helm-charts
maintainers:
- name: sharvarikhamkar1304
keywords:
- ingress
- kong
- httpRoute
- kubernetes
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg
sources:
- https://github.com/ot-container-kit/helm-charts

View File

@@ -0,0 +1,49 @@
# Ingress Management Helm Chart
A simple and reusable Helm chart to manage Kubernetes Gateway API HTTPRoutes for routing traffic to backend services.
This chart helps manage HTTPRoute resources to expose services using the Kubernetes Gateway API. You can customize host, path, service, and namespace via values.
## Homepage
[https://github.com/ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Maintainers
| Name | URL |
| ---------------- | --------------------------------------------- |
| sharvari-khamkar | [GitHub](https://github.com/sharvari-khamkar) |
## Source Code
[GitHub - ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Requirements
| Repository | Name | Version |
| ------------------------------------------------------------------------------------------------ | ---- | ------- |
| [https://ot-container-kit.github.io/helm-charts](https://ot-container-kit.github.io/helm-charts) | base | 0.1.0 |
## Values
| **Attribute** | **Scope** | **Example** | **Description** | **Default** |
|------------------|------------------|------------------------|------------------------------------------------------------------------|--------------|
| `name` | Global | `"my-app"` | Name of the HTTPRoute and backend service (the app name) | `""` |
| `namespace` | Global | `"default"` | Kubernetes namespace where resources such as the HTTPRoute are deployed | `""` |
| `host` | Routing | `"app.example.com"` | Hostname used to expose the app | `""` |
| `path` | Routing | `"/api"` | Path under the host | `""` |
| `service.name` | Service Config | `"my-backend-svc"` | Name of the backend service to which traffic is routed | `""` |
| `service.kind` | Service Config | `"Service"` | Kind of backend resource (`Service` by default) | `"Service"` |
| `service.port` | Service Config | `80` | Port on which the backend service listens | `80` |
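
Note that the table above is a high-level summary; the chart's template and default values (shown in the next files) actually consume `parentRefs`, `hostnames`, and `rules` lists rather than flat `host`/`path`/`service` keys. A minimal, hypothetical override file following that schema (hostnames and service names are placeholders):

```yaml
# my-values.yaml (illustrative only)
name: my-app                    # HTTPRoute name; required by the template
labels:
  app: my-app
parentRefs:
  - name: kong                  # Gateway the route attaches to
    namespace: default
hostnames:
  - app.example.com
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /api
    backendRefs:
      - name: my-backend-svc    # backend Service receiving the traffic
        kind: Service
        port: 80
```

Installing with something like `helm install my-app charts/ingress-management -f my-values.yaml` should then render a single HTTPRoute; a hand-derived example of the rendered output follows the chart's values.yaml below.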

View File

@@ -0,0 +1,46 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: {{ required "A valid 'name' is required!" .Values.name }}
{{- if .Values.labels }}
labels:
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
{{- if .Values.annotations }}
annotations:
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.parentRefs }}
parentRefs:
{{- range .Values.parentRefs }}
- name: {{ .name }}
{{- if .namespace }}
namespace: {{ .namespace }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.hostnames }}
hostnames:
{{- range .Values.hostnames }}
- "{{ . }}"
{{- end }}
{{- end }}
rules:
{{- range .Values.rules }}
- matches:
{{- range .matches }}
- path:
type: {{ .path.type }}
value: {{ .path.value | quote }}
{{- end }}
backendRefs:
{{- range .backendRefs }}
- name: {{ .name }}
kind: {{ .kind | default "Service" }}
port: {{ .port }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,60 @@
---
# charts/ingress-management/values.yaml
# -- Name of the HTTPRoute and backend service (typically the app name)
name: ""
# -- Labels to apply to the HTTPRoute metadata
labels:
app: ""
# -- Optional annotations to apply to the HTTPRoute resource
annotations: {}
# -- Reference to the Gateway (parentRefs)
parentRefs:
- name: ""
namespace: ""
# -- Hostnames to be matched in the HTTPRoute
hostnames:
- ""
# -- Routing rules for HTTPRoute
rules:
- matches:
- path:
type: PathPrefix
value: ""
backendRefs:
- name: ""
kind: Service
port: 80
# -----------------------------------------------------
# Example values.yaml File
# -----------------------------------------------------
# name: open-webui
# labels:
# app: open-webui
# annotations:
# konghq.com/protocols: https
# konghq.com/https-redirect-status-code: "301"
# parentRefs:
# - name: kong
# namespace: default
# hostnames:
# - bp-ai.opstree.dev
# rules:
# - matches:
# - path:
# type: PathPrefix
# value: /
# backendRefs:
# - name: open-webui
# kind: Service
# port: 80
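
For reference, rendering the httproute.yaml template with the commented `open-webui` example values above should produce roughly the following manifest (a hand-derived sketch, not captured `helm template` output):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: open-webui
  labels:
    app: open-webui
  annotations:
    konghq.com/protocols: https
    konghq.com/https-redirect-status-code: "301"
spec:
  parentRefs:
    - name: kong
      namespace: default
  hostnames:
    - "bp-ai.opstree.dev"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: "/"
      backendRefs:
        - name: open-webui
          kind: Service
          port: 80
```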

View File

@@ -0,0 +1,9 @@
apiVersion: v2
name: ot-karpenter
version: 0.3.0
maintainers:
- name: opstree
dependencies:
- name: karpenter
version: 1.1.1
repository: oci://public.ecr.aws/karpenter

View File

@@ -0,0 +1,78 @@
# Karpenter
Karpenter is an open-source Kubernetes cluster autoscaler built for efficiency and speed. This Helm chart installs Karpenter in your Kubernetes cluster and can be used to manage your node pools for dynamically scaling your infrastructure. This chart supports automated deployment of Karpenter, including the creation of NodePools, EC2NodeClasses, IAM roles, and other necessary resources.
To install Karpenter, use the following commands:
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install karpenter ot-helm/karpenter --namespace <namespace> --dependency-update --create-namespace
```
Adds the ot-helm repository to Helm, which contains the Karpenter Helm chart.
Installs the Karpenter chart from the ot-helm repository.
To upgrade the setup:
```shell
$ helm upgrade karpenter ot-helm/karpenter --install --namespace <namespace> --create-namespace
```
Upgrades an existing Karpenter release or installs it if it doesn't exist.
To uninstall the chart:
```shell
$ helm delete karpenter --namespace <namespace>
```
Deletes the Karpenter release from the specified namespace.
Replace <namespace> with the namespace where Karpenter is installed.
### Pre-Requisites
- Kubernetes => 1.18+
- Helm => 3.X
- Karpenter Operator => 0.1.0
- Open ID Connector (EKS) => https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
- IAM Roles for Karpenter
- Add tags to subnets and security groups
- Update aws-auth ConfigMap
### Parameters
| **Name** | **Value** | **Description** |
|--------------------------------------------------------------------|:-------------------------------|------------------------------------------------|
| `karpenter.settings.clusterName` | `my-cluster` | The name of your Kubernetes cluster |
| `karpenter.serviceAccount.annotations.eks.amazonaws.com/role-arn` | Required | IAM role ARN for Karpenter controller |
| `karpenter.controller.resources.requests.cpu` | `1` | CPU request for Karpenter controller |
| `karpenter.controller.resources.requests.memory` | `1Gi` | Memory request for Karpenter controller |
| `karpenter.controller.resources.limits.cpu` | `1` | CPU limit for Karpenter controller |
| `karpenter.controller.resources.limits.memory` | `1Gi` | Memory limit for Karpenter controller |
| `nodePools` | [] | List of NodePools to be created |
| `nodePools.name` | default-nodepool | Name of the NodePool |
| `nodePools.labels` | `{}` | Labels for the NodePool (optional, may be omitted) |
| `nodePools.annotations` | `{}` | Annotations for the NodePool (optional, may be omitted) |
| `nodePools.requirements` | `[]` | Scheduling requirements for the NodePool, such as architecture, OS, and instance types (may be empty) |
| `nodePools.taints` | `[]` | Taints for the NodePool (optional, may be omitted) |
| `nodePools.expireAfter` | `720h` | Expiration duration for idle NodePools |
| `nodePools.limits.cpu` | `"1000m"` | CPU limit for the NodePool (required) |
| `nodePools.limits.memory` | `"2Gi"` | Memory limit for the NodePool (optional, may be omitted) |
| `nodePools.disruption.consolidationPolicy` | `WhenEmptyOrUnderutilized` | Consolidation policy for underutilized nodes (required) |
| `nodePools.disruption.consolidateAfter` | `1m` | Time before consolidating underutilized nodes (required) |
### Notes:
- Refer to the example folder for an example values.yaml file.
- Karpenter automatically creates and manages NodePools as part of the installation process.
- Make sure to configure the IAM roles and other prerequisites Karpenter needs to interact with EC2 instances and manage resources.
- The chart will ensure the Karpenter controller and NodePools are deployed correctly with all required configurations.

View File

@@ -0,0 +1,82 @@
#This example below has 2 nodepools for reference
# Custom values for your chart
clusterName: "" # Name of the EKS cluster (for identification in the chart and Karpenter)
awsPartition: "" # AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsAccountId: 3333 # AWS account ID where the resources will be provisioned
# Karpenter chart overrides
karpenter:
settings:
clusterName: "" # Cluster name for the Karpenter controller to identify and manage nodes in this cluster
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks # IAM role for Karpenter controller's access to AWS services
controller:
resources:
requests:
cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
memory: "1Gi" # Memory resource request for the Karpenter controller
limits:
cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
memory: "1Gi" # Memory resource limit for the Karpenter controller
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool, used for identification
limits: # Required Field
cpu: "1000"
memory: "1000Gi"
disruption: # Required Field
consolidationPolicy: WhenEmptyOrUnderutilized
consolidateAfter: 1m
requirements: # Node pool requirements for instance types and other properties
- key: kubernetes.io/arch
operator: In # Specifies the architecture for nodes
values:
- "amd64"
- key: kubernetes.io/os
operator: In # Specifies the OS type for nodes
values:
- "linux" # The node pool requires Linux OS
- key: karpenter.sh/capacity-type
operator: In # Specifies the capacity type for nodes
values:
- "on-demand"
- key: karpenter.k8s.aws/instance-category
operator: In # Specifies allowed EC2 instance categories
values:
- "t" # Instance category t (e.g., T2, T3)
- "m"
- "r"
minValues: 2 # Minimum number of instances of each category
- key: karpenter.k8s.aws/instance-family
operator: Exists # Specifies that instances in the family must exist (e.g., m5, r5)
minValues: 5 # Minimum number of instances in the specified family
- key: karpenter.k8s.aws/instance-family
operator: In # Specifies that the instance family must match one of the listed values
values:
- "m5"
- "m5d"
- "c5"
- "c5d"
- "c4"
- "r4"
minValues: 3 # Minimum number of instances from these families
- key: node.kubernetes.io/instance-type
operator: Exists # Ensures that the node pool has specific instance types
minValues: 10 # Minimum number of instances of the specified types
- key: karpenter.k8s.aws/instance-generation
operator: Gt # Specifies that the instance generation must be greater than a particular value
values:
- "2" # Instance generation must be greater than 2 (i.e., newer generation)
nodeClass:
group: karpenter.k8s.aws # Node class group for Karpenter
kind: EC2NodeClass # Kind of node class, EC2NodeClass indicates AWS EC2 instances
name: default # The name of the node class (default for this pool)

View File

@@ -0,0 +1,33 @@
{{- range .Values.ec2NodeClasses }}
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
name: {{ .name }}
spec:
amiFamily: {{ .amiFamily | default "AL2" }}
role: {{ .role }}
{{- if .detailedMonitoring }}
detailedMonitoring: {{ .detailedMonitoring }}
{{- end }}
subnetSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
securityGroupSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
amiSelectorTerms:
- id: "{{ .amiSelector.arm }}"
- id: "{{ .amiSelector.amd }}"
{{- if .amiSelector.gpu }}
- id: "{{ .amiSelector.gpu }}"
{{- end }}
{{- if .amiSelector.name }}
- name: "{{ .amiSelector.name }}"
{{- end }}
{{- if .tags }}
tags:
{{- range $key, $value := .tags }}
{{ $key }}: "{{ $value }}"
{{- end }}
{{- end }}
{{- end }}
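
The template above renders `role` and the `arm`/`amd` AMI IDs unconditionally, so those fields need real values even though they are left blank in the chart's default values.yaml further down. A hypothetical filled-in entry (the cluster name, role name, and AMI IDs are placeholders, not real resources):

```yaml
clusterName: my-eks-cluster                  # feeds the karpenter.sh/discovery selector tags
ec2NodeClasses:
  - name: default
    amiFamily: AL2                           # Amazon Linux 2
    role: KarpenterNodeRole-my-eks-cluster   # node IAM role name (not the ARN)
    detailedMonitoring: true
    amiSelector:
      arm: ami-0aaaaaaaaaaaaaaaa             # placeholder arm64 AMI ID
      amd: ami-0bbbbbbbbbbbbbbbb             # placeholder x86_64 AMI ID
      # gpu and name are optional and only rendered when set
    tags:
      environment: production
      team: engineering
```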

View File

@@ -0,0 +1,73 @@
{{- range .Values.nodePools }}
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
name: {{ .name }}
spec:
template:
metadata:
labels:
{{- if .labels }}
{{- range $key, $value := .labels }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty labels object if no labels are defined
{{- end }}
annotations:
{{- if .annotations }}
{{- range $key, $value := .annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty annotations object if no annotations are defined
{{- end }}
spec:
requirements:
{{- if .requirements }}
{{- if gt (len .requirements) 0 }}
{{- range .requirements }}
- key: {{ .key }}
operator: {{ .operator }}
values:
{{ toYaml .values | indent 12 }}
{{- if .minValues }}
minValues: {{ .minValues }}
{{- end }}
{{- end }}
{{- else }}
[] # Render an empty array explicitly when no requirements are defined
{{- end }}
{{- else }}
[] # Ensure that an empty array is rendered even if the user does not specify requirements
{{- end }}
taints:
{{- if .taints }}
{{- range .taints }}
- key: {{ .key }}
{{- if .value }}
value: {{ .value }}
{{- end }}
effect: {{ .effect }}
{{- end }}
{{- else }}
[] # Empty taints array if no taints are defined
{{- end }}
nodeClassRef:
group: {{ .nodeClass.group | default "karpenter.k8s.aws" }}
kind: {{ .nodeClass.kind | default "EC2NodeClass" }}
name: {{ .nodeClass.name }}
expireAfter: {{ .expireAfter | default "720h" }}
limits:
{{- if .limits.cpu }}
cpu: {{ .limits.cpu }}
{{- end }}
{{- if .limits.memory }}
memory: {{ .limits.memory }}
{{- end }}
disruption:
consolidationPolicy: {{ .disruption.consolidationPolicy | default "WhenEmptyOrUnderutilized" }}
consolidateAfter: {{ .disruption.consolidateAfter | default "1m" }}
{{- end }}
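
As a sanity check, feeding the single default `nodePools` entry from the chart's values.yaml (shown in the next file) through this template should yield roughly the following manifest; this is a hand-derived sketch of the Karpenter v1 NodePool shape, not captured `helm template` output:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    metadata:
      labels:
        environment: production
        team: engineering
      annotations: {}            # rendered as an empty object when no annotations are set
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values:
            - "amd64"
        - key: kubernetes.io/os
          operator: In
          values:
            - "linux"
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - "on-demand"
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values:
            - "t"
            - "m"
            - "r"
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values:
            - "2"
      taints: []                 # empty array when no taints are defined
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h
  limits:
    cpu: 1000
    memory: 1Gi
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```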

View File

@@ -0,0 +1,110 @@
# Custom values for your chart
# Name of the EKS cluster (for identification in the chart and Karpenter)
clusterName: ""
# AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsPartition: ""
# AWS account ID where the resources will be provisioned
awsAccountId: 3333
# Karpenter chart overrides
karpenter:
settings:
# Cluster name for the Karpenter controller to identify and manage nodes in this cluster
clusterName: ""
# Name of SQS queue for handling EC2 instance interruptions
# interruptionQueue: ""
serviceAccount:
annotations:
# IAM role ARN for Karpenter controller's access to AWS services
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks
# Karpenter controller resources can be customized in this section below
# controller:
# resources:
# requests:
# cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
# memory: "1Gi" # Memory resource request for the Karpenter controller
# limits:
# cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
# memory: "1Gi" # Memory resource limit for the Karpenter controller
# EC2NodeClasses define the EC2 instance classes that Karpenter can use
ec2NodeClasses:
- name: default
# Amazon Linux 2 AMI family
amiFamily: AL2
# "KarpenterNodeRole-my-eks-cluster" # Name of karpenter Node Role ( NOT THE ARN )
role:
amiSelector:
# To get the AMI ID, run the commands below in the AWS CLI and replace the AMI ID in the values.yaml file
# ARM_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text)"
arm:
# AMD_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id --query Parameter.Value --output text)"
amd:
# GPU_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text)"
# gpu: ami-gpu-id
# amazon-eks-node-1.27-* # Optional: EKS Node AMI Name
# name:
# Optional, propagates tags to underlying EC2 resources
# tags:
# environment: production
# team: "engineering"
# owner: "admin@company.com"
# Enable detailed monitoring for the EC2 instance
# detailedMonitoring: true
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool, preset here is set to default nodepool
requirements: # List of node requirements for scheduling
- key: kubernetes.io/arch # Architecture requirement (e.g., amd64, arm64)
operator: In # Only nodes with the specified architecture will be selected
values:
- "amd64" # Specifies that the node should have an amd64 architecture
- key: kubernetes.io/os # OS requirement (e.g., linux, windows)
operator: In # Only nodes with the specified OS will be selected
values:
- "linux" # Specifies that the node should run Linux
- key: karpenter.sh/capacity-type # Defines the instance's capacity type
operator: In # Only nodes with the specified capacity type will be selected
values:
- "on-demand" # Specifies that the node should be an on-demand instance, can be "spot" as well
- key: karpenter.k8s.aws/instance-category # Defines the instance category (e.g., t, m, r)
operator: In # Only nodes with the specified instance category will be selected
values:
- "t" # These can be customized as per need
- "m"
- "r"
# - key: karpenter.k8s.aws/instance-family # Uncomment to define the instance family (e.g., t3, m5, r5)
# operator: In
# values:
# - "t3a"
- key: karpenter.k8s.aws/instance-generation # Instance generation requirement
operator: Gt # Greater than the specified value
values:
- "2" # Specifies that only instance generations greater than 2 are allowed
nodeClass: # Defines the node class, which is linked to EC2NodeClass
group: karpenter.k8s.aws # Group of the EC2NodeClass
kind: EC2NodeClass # Type of node class, which is EC2NodeClass in this case
name: default # Name of the EC2NodeClass to use for the node pool (name of the EC2 instance class)
expireAfter: 720h # Maximum lifetime of the node pool before it expires (720 hours = 30 days)
limits: # Resource limits for the node pool
cpu: "1000" # Maximum CPU limit for the node pool
memory: "1Gi"
disruption: # Policy for handling disruption in the node pool
consolidationPolicy: WhenEmptyOrUnderutilized # Consolidate nodes when they are empty or underutilized
consolidateAfter: 1m # Time after which consolidation will occur, in this case, 1 minute
# Uncomment Below annotations key ( next 3 Lines ) if you want to use annotations
# annotations: # Annotations are key-value pairs that provide additional metadata for the node pool
# example.com/owner: "my-team" # An example annotation that associates the node pool with a team
# example.com/maintainer: "admin@company.com" # Example annotation for the maintainer's contact information
# Uncomment below taint key ( next 4 Lines ) if you want to use taints
# taints: # Taints are used to control which pods can be scheduled on the node pool
# - key: "example.com/special-taint" # Taint key that identifies the taint
# value: "special-value" # Value associated with the taint
# effect: "NoExecute" # Effect of the taint. In this case, NoExecute means pods won't be scheduled on tainted nodes
# Comment out the labels key below if you don't want to use labels
labels: # Labels are key-value pairs used for categorizing the node pool
environment: production # Label indicating that this node pool is for production use
team: "engineering" # Label associating the node pool with the engineering team

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: microservice
description: Basic helm chart for deploying microservices on kubernetes with best practices
type: application
version: 0.1.6
version: 0.1.8
appVersion: "0.1.2"
maintainers:
- name: ashwani-opstree

View File

@@ -33,7 +33,7 @@ spec:
{{- end }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
@@ -67,11 +67,11 @@ spec:
value: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe.httpGet }}
livenessProbe:
{{- toYaml .Values.deployment.livenessProbe | nindent 12 }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe.httpGet }}
readinessProbe:
{{- toYaml .Values.deployment.readinessProbe | nindent 12 }}
{{- end }}
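
With the tightened guard above, a probe block is only rendered when the corresponding value actually defines an `httpGet` stanza (and `deployment.healthProbes.enabled` is true). A hypothetical values fragment that satisfies the new condition for both probes (paths and ports are placeholders):

```yaml
deployment:
  healthProbes:
    enabled: true
  livenessProbe:
    httpGet:
      path: /healthz            # placeholder liveness endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready              # placeholder readiness endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```

Both blocks are passed through `toYaml` verbatim, so any additional probe fields are carried into the rendered Deployment as-is.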

View File

@@ -0,0 +1,21 @@
apiVersion: v2
name: psmdb-operator-db
description: A Helm chart for Percona Operator and Percona Server for MongoDB
type: application
version: 1.0.0
appVersion: 1.0.0
dependencies:
- name: psmdb-operator
version: 1.18.0
repository: https://percona.github.io/percona-helm-charts/
alias: psmdb-operator
tags:
- psmdb-operator
condition: psmdb-operator.enabled
- name: psmdb-db
version: 1.18.0
repository: https://percona.github.io/percona-helm-charts/
alias: psmdb-db
tags:
- psmdb-db
condition: psmdb-db.enabled

View File

@@ -0,0 +1,2 @@
Backup and restore have been tested using the backup.yaml and restore.yaml files, respectively, with Azure Blob Storage.
To use cloud storage for backups, a Kubernetes secret needs to be created: https://docs.percona.com/percona-operator-for-mongodb/backup-tutorial.html#configure-backup-storage

View File

@@ -0,0 +1,266 @@
# Percona Server for MongoDB
This chart deploys the Percona Operator for MongoDB and a Percona Server for MongoDB cluster on Kubernetes, with the cluster managed by the Operator.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-server-mongodb-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html)
## Pre-requisites
* Kubernetes 1.26+
* Helm v3
# Chart Details
This chart will deploy the Operator Pod and Percona Server for MongoDB Cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of corresponding Kubernetes primitives: StatefulSets, Pods, Secrets, etc.
## Installing the Chart
To install the chart (here with the release name `my-db`) into a dedicated namespace (recommended):
```sh
helm dependency build
helm install my-db <path-to-chart> --namespace my-namespace
```
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | ------------------------------------------------------------------------------|---------------------------------------|
| `crVersion` | CR Cluster Manifest version | `1.16.2` |
| `pause` | Stop PSMDB Database safely | `false` |
| `unmanaged` | Start cluster and don't manage it (cross cluster replication) | `false` |
| `unsafeFlags.tls`               | Allows users to configure a cluster without TLS/SSL certificates | `false` |
| `unsafeFlags.replsetSize`       | Allows users to configure a cluster with unsafe parameters: starting it with fewer than 3 replica set instances or with an even number of replica set instances without an additional arbiter | `false` |
| `unsafeFlags.mongosSize`        | Allows users to configure a sharded cluster with fewer than 3 config server Pods or fewer than 2 mongos Pods | `false` |
| `unsafeFlags.terminationGracePeriod` | Allows users to configure a sharded cluster without a termination grace period for the replica set | `false` |
| `unsafeFlags.backupIfUnhealthy` | Allows running backup on a cluster with failed health checks | `false` |
| `clusterServiceDNSSuffix` | The (non-standard) cluster domain to be used as a suffix of the Service name | `""` |
| `clusterServiceDNSMode` | Mode for the cluster service dns (Internal/ServiceMesh) | `""` |
| `annotations` | PSMDB custom resource annotations | `{}` |
| `ignoreAnnotations` | The list of annotations to be ignored by the Operator | `[]` |
| `ignoreLabels` | The list of labels to be ignored by the Operator | `[]` |
| `multiCluster.enabled` | Enable Multi Cluster Services (MCS) cluster mode | `false` |
| `multiCluster.DNSSuffix` | The cluster domain to be used as a suffix for multi-cluster Services used by Kubernetes | `""` |
| `updateStrategy` | Regulates the way how PSMDB Cluster Pods will be updated after setting a new image | `SmartUpdate` |
| `upgradeOptions.versionServiceEndpoint` | Endpoint for actual PSMDB Versions provider | `https://check.percona.com/versions/` |
| `upgradeOptions.apply` | PSMDB image to apply from version service - recommended, latest, actual version like 4.4.2-4 | `disabled` |
| `upgradeOptions.schedule` | Cron formatted time to execute the update | `"0 2 * * *"` |
| `upgradeOptions.setFCV` | Set feature compatibility version on major upgrade | `false` |
| `finalizers:delete-psmdb-pvc` | Set this if you want to delete database persistent volumes on cluster deletion | `[]` |
| `finalizers:delete-psmdb-pods-in-order` | Set this if you want to delete PSMDB pods in order (primary last) | `[]` |
| `image.repository` | PSMDB Container image repository | `percona/percona-server-mongodb` |
| `image.tag` | PSMDB Container image tag | `6.0.9-7` |
| `imagePullPolicy` | The policy used to update images | `Always` |
| `imagePullSecrets` | PSMDB Container pull secret | `[]` |
| `initImage.repository` | Repository for custom init image | `""` |
| `initImage.tag` | Tag for custom init image | `""` |
| `initContainerSecurityContext` | A custom Kubernetes Security Context for a Container for the initImage | `{}` |
| `tls.mode` | Control usage of TLS (allowTLS, preferTLS, requireTLS, disabled) | `preferTLS` |
| `tls.certValidityDuration` | The validity duration of the external certificate for cert manager | `""` |
| `tls.allowInvalidCertificates` | If enabled the mongo shell will not attempt to validate the server certificates | `true` |
| `tls.issuerConf.name` | A cert-manager issuer name | `""` |
| `tls.issuerConf.kind` | A cert-manager issuer kind | `""` |
| `tls.issuerConf.group` | A cert-manager issuer group | `""` |
| `secrets.users` | The name of the Secrets object for the MongoDB users required to run the operator | `""` |
| `secrets.encryptionKey` | Set secret for data at rest encryption key | `""` |
| `secrets.vault` | Specifies a secret object to provide integration with HashiCorp Vault | `""` |
| `secrets.ldapSecret` | Specifies a secret object for LDAP over TLS connection between MongoDB and OpenLDAP server | `""` |
| `secrets.sse` | The name of the Secrets object for server side encryption credentials | `""` |
| `secrets.ssl` | A secret with TLS certificate generated for external communications | `""` |
| `secrets.sslInternal` | A secret with TLS certificate generated for internal communications | `""` |
| `pmm.enabled` | Enable integration with [Percona Monitoring and Management software](https://www.percona.com/blog/2020/07/23/using-percona-kubernetes-operators-with-percona-monitoring-and-management/) | `false` |
| `pmm.image.repository` | PMM Container image repository | `percona/pmm-client` |
| `pmm.image.tag` | PMM Container image tag | `2.41.2` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
||
| `replsets.rs0.name` | ReplicaSet name | `rs0` |
| `replsets.rs0.size` | ReplicaSet size (pod quantity) | `3` |
| `replsets.rs0.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean replica set Pods termination | `""` |
| `replsets.rs0.externalNodes` | ReplicaSet external nodes (cross cluster replication) | `[]` |
| `replsets.rs0.configuration` | Custom config for mongod in replica set | `""` |
| `replsets.rs0.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `replsets.rs0.serviceAccountName` | Run replicaset Containers under specified K8S SA | `""` |
| `replsets.rs0.affinity.antiAffinityTopologyKey` | ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `replsets.rs0.affinity.advanced` | ReplicaSet Pod advanced affinity | `{}` |
| `replsets.rs0.tolerations` | ReplicaSet Pod tolerations | `[]` |
| `replsets.rs0.priorityClass` | ReplicaSet Pod priorityClassName | `""` |
| `replsets.rs0.annotations` | ReplicaSet Pod annotations | `{}` |
| `replsets.rs0.labels` | ReplicaSet Pod labels | `{}` |
| `replsets.rs0.nodeSelector` | ReplicaSet Pod nodeSelector labels | `{}` |
| `replsets.rs0.livenessProbe` | ReplicaSet Pod livenessProbe structure | `{}` |
| `replsets.rs0.readinessProbe` | ReplicaSet Pod readinessProbe structure | `{}` |
| `replsets.rs0.storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `replsets.rs0.podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets.rs0.containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets.rs0.runtimeClass` | ReplicaSet Pod runtimeClassName | `""` |
| `replsets.rs0.sidecars` | ReplicaSet Pod sidecars | `{}` |
| `replsets.rs0.sidecarVolumes` | ReplicaSet Pod sidecar volumes | `[]` |
| `replsets.rs0.sidecarPVCs` | ReplicaSet Pod sidecar PVCs | `[]` |
| `replsets.rs0.podDisruptionBudget.maxUnavailable` | ReplicaSet failed Pods maximum quantity | `1` |
| `replsets.rs0.splitHorizons` | External URI for Split-horizon for replica set Pods of the exposed cluster | `{}` |
| `replsets.rs0.expose.enabled` | Allow access to replicaSet from outside of Kubernetes | `false` |
| `replsets.rs0.expose.exposeType` | Network service access point type | `ClusterIP` |
| `replsets.rs0.expose.loadBalancerSourceRanges` | Limit client IP's access to Load Balancer | `{}` |
| `replsets.rs0.expose.serviceAnnotations` | ReplicaSet service annotations | `{}` |
| `replsets.rs0.expose.serviceLabels` | ReplicaSet service labels | `{}` |
| `replsets.rs0.schedulerName` | ReplicaSet Pod schedulerName | `""` |
| `replsets.rs0.resources` | ReplicaSet Pods resource requests and limits | `{}` |
| `replsets.rs0.volumeSpec` | ReplicaSet Pods storage resources | `{}` |
| `replsets.rs0.volumeSpec.emptyDir` | ReplicaSet Pods emptyDir K8S storage | `{}` |
| `replsets.rs0.volumeSpec.hostPath` | ReplicaSet Pods hostPath K8S storage | |
| `replsets.rs0.volumeSpec.hostPath.path` | ReplicaSet Pods hostPath K8S storage path | `""` |
| `replsets.rs0.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `replsets.rs0.volumeSpec.pvc` | ReplicaSet Pods PVC request parameters | |
| `replsets.rs0.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.volumeSpec.pvc.storageClassName` | ReplicaSet Pods PVC target storageClass | `""` |
| `replsets.rs0.volumeSpec.pvc.accessModes` | ReplicaSet Pods PVC access policy | `[]` |
| `replsets.rs0.volumeSpec.pvc.resources.requests.storage` | ReplicaSet Pods PVC storage size | `3Gi` |
| `replsets.rs0.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| `replsets.rs0.nonvoting.enabled` | Add MongoDB nonvoting Pods | `false` |
| `replsets.rs0.nonvoting.podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets.rs0.nonvoting.containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets.rs0.nonvoting.size` | Number of nonvoting Pods | `1` |
| `replsets.rs0.nonvoting.configuration` | Custom config for mongod nonvoting member | `""` |
| `replsets.rs0.nonvoting.serviceAccountName` | Run replicaset nonvoting Container under specified K8S SA | `""` |
| `replsets.rs0.nonvoting.affinity.antiAffinityTopologyKey` | Nonvoting Pods affinity | `kubernetes.io/hostname` |
| `replsets.rs0.nonvoting.affinity.advanced` | Nonvoting Pods advanced affinity | `{}` |
| `replsets.rs0.nonvoting.tolerations` | Nonvoting Pod tolerations | `[]` |
| `replsets.rs0.nonvoting.priorityClass` | Nonvoting Pod priorityClassName | `""` |
| `replsets.rs0.nonvoting.annotations` | Nonvoting Pod annotations | `{}` |
| `replsets.rs0.nonvoting.labels` | Nonvoting Pod labels | `{}` |
| `replsets.rs0.nonvoting.nodeSelector` | Nonvoting Pod nodeSelector labels | `{}` |
| `replsets.rs0.nonvoting.podDisruptionBudget.maxUnavailable` | Nonvoting failed Pods maximum quantity | `1` |
| `replsets.rs0.nonvoting.resources` | Nonvoting Pods resource requests and limits | `{}` |
| `replsets.rs0.nonvoting.volumeSpec` | Nonvoting Pods storage resources | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.emptyDir` | Nonvoting Pods emptyDir K8S storage | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.hostPath` | Nonvoting Pods hostPath K8S storage | |
| `replsets.rs0.nonvoting.volumeSpec.hostPath.path` | Nonvoting Pods hostPath K8S storage path | `""` |
| `replsets.rs0.nonvoting.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `replsets.rs0.nonvoting.volumeSpec.pvc` | Nonvoting Pods PVC request parameters | |
| `replsets.rs0.nonvoting.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.storageClassName` | Nonvoting Pods PVC target storageClass | `""` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.accessModes` | Nonvoting Pods PVC access policy | `[]` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.resources.requests.storage` | Nonvoting Pods PVC storage size | `3Gi` |
| `replsets.rs0.arbiter.enabled` | Create MongoDB arbiter service | `false` |
| `replsets.rs0.arbiter.size` | MongoDB arbiter Pod quantity | `1` |
| `replsets.rs0.arbiter.serviceAccountName` | Run replicaset arbiter Container under specified K8S SA | `""` |
| `replsets.rs0.arbiter.affinity.antiAffinityTopologyKey` | MongoDB arbiter Pod affinity | `kubernetes.io/hostname` |
| `replsets.rs0.arbiter.affinity.advanced` | MongoDB arbiter Pod advanced affinity | `{}` |
| `replsets.rs0.arbiter.tolerations` | MongoDB arbiter Pod tolerations | `[]` |
| `replsets.rs0.arbiter.priorityClass` | MongoDB arbiter priorityClassName | `""` |
| `replsets.rs0.arbiter.annotations` | MongoDB arbiter Pod annotations | `{}` |
| `replsets.rs0.arbiter.labels` | MongoDB arbiter Pod labels | `{}` |
| `replsets.rs0.arbiter.nodeSelector` | MongoDB arbiter Pod nodeSelector labels | `{}` |
| |
| `sharding.enabled` | Enable sharding setup | `true` |
| `sharding.balancer.enabled` | Enable/disable balancer | `true` |
| `sharding.configrs.size` | Config ReplicaSet size (pod quantity) | `3` |
| `sharding.configrs.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean replica set Pods termination | `""` |
| `sharding.configrs.externalNodes` | Config ReplicaSet external nodes (cross cluster replication) | `[]` |
| `sharding.configrs.configuration` | Custom config for mongod in config replica set | `""` |
| `sharding.configrs.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `sharding.configrs.serviceAccountName` | Run sharding configrs Containers under specified K8S SA | `""` |
| `sharding.configrs.affinity.antiAffinityTopologyKey` | Config ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `sharding.configrs.affinity.advanced` | Config ReplicaSet Pod advanced affinity | `{}` |
| `sharding.configrs.tolerations` | Config ReplicaSet Pod tolerations | `[]` |
| `sharding.configrs.priorityClass` | Config ReplicaSet Pod priorityClassName | `""` |
| `sharding.configrs.annotations` | Config ReplicaSet Pod annotations | `{}` |
| `sharding.configrs.labels` | Config ReplicaSet Pod labels | `{}` |
| `sharding.configrs.nodeSelector` | Config ReplicaSet Pod nodeSelector labels | `{}` |
| `sharding.configrs.livenessProbe` | Config ReplicaSet Pod livenessProbe structure | `{}` |
| `sharding.configrs.readinessProbe` | Config ReplicaSet Pod readinessProbe structure | `{}` |
| `sharding.configrs.storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `sharding.configrs.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.configrs.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.configrs.runtimeClass` | Config ReplicaSet Pod runtimeClassName | `""` |
| `sharding.configrs.sidecars` | Config ReplicaSet Pod sidecars | `{}` |
| `sharding.configrs.sidecarVolumes` | Config ReplicaSet Pod sidecar volumes | `[]` |
| `sharding.configrs.sidecarPVCs` | Config ReplicaSet Pod sidecar PVCs | `[]` |
| `sharding.configrs.podDisruptionBudget.maxUnavailable` | Config ReplicaSet failed Pods maximum quantity | `1` |
| `sharding.configrs.expose.enabled` | Allow access to cfg replica from outside of Kubernetes | `false` |
| `sharding.configrs.expose.exposeType` | Network service access point type | `ClusterIP` |
| `sharding.configrs.expose.loadBalancerSourceRanges` | Limit client IP's access to Load Balancer | `{}` |
| `sharding.configrs.expose.serviceAnnotations` | Config ReplicaSet service annotations | `{}` |
| `sharding.configrs.expose.serviceLabels` | Config ReplicaSet service labels | `{}` |
| `sharding.configrs.resources.limits.cpu` | Config ReplicaSet resource limits CPU | `300m` |
| `sharding.configrs.resources.limits.memory` | Config ReplicaSet resource limits memory | `0.5G` |
| `sharding.configrs.resources.requests.cpu` | Config ReplicaSet resource requests CPU | `300m` |
| `sharding.configrs.resources.requests.memory` | Config ReplicaSet resource requests memory | `0.5G` |
| `sharding.configrs.volumeSpec.hostPath` | Config ReplicaSet hostPath K8S storage | |
| `sharding.configrs.volumeSpec.hostPath.path` | Config ReplicaSet hostPath K8S storage path | `""` |
| `sharding.configrs.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `sharding.configrs.volumeSpec.emptyDir` | Config ReplicaSet Pods emptyDir K8S storage | |
| `sharding.configrs.volumeSpec.pvc` | Config ReplicaSet Pods PVC request parameters | |
| `sharding.configrs.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `sharding.configrs.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `sharding.configrs.volumeSpec.pvc.storageClassName` | Config ReplicaSet Pods PVC storageClass | `""` |
| `sharding.configrs.volumeSpec.pvc.accessModes` | Config ReplicaSet Pods PVC access policy | `[]` |
| `sharding.configrs.volumeSpec.pvc.resources.requests.storage` | Config ReplicaSet Pods PVC storage size | `3Gi` |
| `sharding.configrs.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| `sharding.mongos.size` | Mongos size (pod quantity) | `3` |
| `sharding.mongos.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean mongos Pods termination | `""` |
| `sharding.mongos.configuration` | Custom config for mongos | `""` |
| `sharding.mongos.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `sharding.mongos.serviceAccountName` | Run sharding mongos Containers under specified K8S SA | `""` |
| `sharding.mongos.affinity.antiAffinityTopologyKey` | Mongos Pods affinity | `kubernetes.io/hostname` |
| `sharding.mongos.affinity.advanced` | Mongos Pods advanced affinity | `{}` |
| `sharding.mongos.tolerations` | Mongos Pods tolerations | `[]` |
| `sharding.mongos.priorityClass` | Mongos Pods priorityClassName | `""` |
| `sharding.mongos.annotations` | Mongos Pods annotations | `{}` |
| `sharding.mongos.labels` | Mongos Pods labels | `{}` |
| `sharding.mongos.nodeSelector` | Mongos Pods nodeSelector labels | `{}` |
| `sharding.mongos.livenessProbe` | Mongos Pod livenessProbe structure | `{}` |
| `sharding.mongos.readinessProbe` | Mongos Pod readinessProbe structure | `{}` |
| `sharding.mongos.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.mongos.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.mongos.runtimeClass` | Mongos Pod runtimeClassName | `""` |
| `sharding.mongos.sidecars` | Mongos Pod sidecars | `{}` |
| `sharding.mongos.sidecarVolumes` | Mongos Pod sidecar volumes | `[]` |
| `sharding.mongos.sidecarPVCs` | Mongos Pod sidecar PVCs | `[]` |
| `sharding.mongos.podDisruptionBudget.maxUnavailable` | Mongos failed Pods maximum quantity | `1` |
| `sharding.mongos.resources.limits.cpu` | Mongos Pods resource limits CPU | `300m` |
| `sharding.mongos.resources.limits.memory` | Mongos Pods resource limits memory | `0.5G` |
| `sharding.mongos.resources.requests.cpu` | Mongos Pods resource requests CPU | `300m` |
| `sharding.mongos.resources.requests.memory` | Mongos Pods resource requests memory | `0.5G` |
| `sharding.mongos.expose.exposeType` | Mongos service exposeType | `ClusterIP` |
| `sharding.mongos.expose.servicePerPod` | Create a separate ClusterIP Service for each mongos instance | `false` |
| `sharding.mongos.expose.loadBalancerSourceRanges` | Limit the client IP ranges that can access the Load Balancer | `{}` |
| `sharding.mongos.expose.serviceAnnotations` | Mongos service annotations | `{}` |
| `sharding.mongos.expose.serviceLabels` | Mongos service labels | `{}` |
| `sharding.mongos.expose.nodePort` | Custom port if exposing mongos via NodePort | `""` |
| `sharding.mongos.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| |
| `backup.enabled` | Enable backup PBM agent | `true` |
| `backup.annotations` | Backup job annotations | `{}` |
| `backup.podSecurityContext` | Set the security context for a Pod | `{}` |
| `backup.containerSecurityContext` | Set the security context for a Container | `{}` |
| `backup.restartOnFailure` | Backup Pods restart policy | `true` |
| `backup.image.repository` | PBM Container image repository | `percona/percona-backup-mongodb` |
| `backup.image.tag` | PBM Container image tag | `2.3.0` |
| `backup.storages` | Local/remote backup storages settings | `{}` |
| `backup.pitr.enabled` | Enable point in time recovery for backup | `false` |
| `backup.pitr.oplogOnly` | Start collecting oplogs even if a full logical backup doesn't exist | `false` |
| `backup.pitr.oplogSpanMin` | Number of minutes between the uploads of oplogs | `10` |
| `backup.pitr.compressionType` | The point-in-time-recovery chunks compression format | `""` |
| `backup.pitr.compressionLevel` | The point-in-time-recovery chunks compression level | `""` |
| `backup.configuration.backupOptions` | Custom configuration settings for backup | `{}` |
| `backup.configuration.restoreOptions` | Custom configuration settings for restore | `{}` |
| `backup.tasks` | Backup working schedule | `{}` |
| `users` | PSMDB essential users | `{}` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
Note that multiple replica sets can be used only with sharding enabled (see the examples below).
## Examples
### Deploy a replica set with disabled backups and no mongos pods
This is well suited to a development PSMDB/MongoDB cluster, since it skips the backup and sharding setup.
```bash
$ helm install dev --namespace psmdb . \
--set runUid=1001 --set "replsets.rs0.volumeSpec.pvc.resources.requests.storage=20Gi" \
--set backup.enabled=false --set sharding.enabled=false
```
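### Deploy a sharded cluster with two replica sets
Multiple replica sets require sharding, so this sketch keeps sharding enabled (the chart default) and adds a second replica set next to the default `rs0`; the `rs1` name, sizes, and storage size are illustrative, and the keys follow the `replsets` map from the parameters above.
```bash
$ helm install cluster1 --namespace psmdb . \
    --set "replsets.rs0.name=rs0" --set "replsets.rs0.size=3" \
    --set "replsets.rs1.name=rs1" --set "replsets.rs1.size=3" \
    --set "replsets.rs1.volumeSpec.pvc.resources.requests.storage=3Gi"
```
### Enable backups with point-in-time recovery to S3
A sketch that stores backups in S3 and turns on PITR; the `s3-us-west` storage name, the bucket, and the `my-cluster-name-backup-s3` credentials Secret are assumptions you must replace with your own values.
```bash
$ helm install prod --namespace psmdb . \
    --set backup.enabled=true --set backup.pitr.enabled=true \
    --set "backup.storages.s3-us-west.type=s3" \
    --set "backup.storages.s3-us-west.s3.bucket=MY-BACKUP-BUCKET" \
    --set "backup.storages.s3-us-west.s3.region=us-west-2" \
    --set "backup.storages.s3-us-west.s3.credentialsSecret=my-cluster-name-backup-s3"
```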

View File

@ -0,0 +1,18 @@
{{- if .Values.backup.enabled }}
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: {{ .Values.backup.name }}
  {{- if .Values.backup.annotations }}
  annotations:
{{ .Values.backup.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.backup.labels }}
  labels:
{{ .Values.backup.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  clusterName: {{ .Values.backup.clusterName }}
  storageName: {{ .Values.backup.storageName }}
  type: {{ .Values.backup.type }}
{{- end }}

View File

@ -0,0 +1,17 @@
{{- if .Values.restore.enabled }}
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: {{ .Values.restore.name }}
  {{- if .Values.restore.annotations }}
  annotations:
{{ .Values.restore.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.restore.labels }}
  labels:
{{ .Values.restore.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  clusterName: {{ .Values.restore.clusterName }}
  backupName: {{ .Values.restore.backupName }}
{{- end }}

View File

@ -0,0 +1,805 @@
psmdb-operator:
enabled: true
# Default values for psmdb-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: percona/percona-server-mongodb-operator
tag: 1.18.0
pullPolicy: IfNotPresent
# disableTelemetry: according to
# https://docs.percona.com/percona-operator-for-mongodb/telemetry.html
# this is how you can disable telemetry collection
# default is false which means telemetry will be collected
disableTelemetry: false
# set if you want to specify a namespace to watch
# defaults to `.Release.namespace` if left blank
# multiple namespaces can be specified and separated by comma
# watchNamespace:
# set if you want the watched namespaces to be created by helm
# createNamespace: false
# set if operator should be deployed in cluster wide mode. defaults to false
watchAllNamespaces: false
# rbac: settings for deployer RBAC creation
rbac:
# rbac.create: if false, RBAC resources must already be in place
create: true
# serviceAccount: settings for Service Accounts used by the deployer
serviceAccount:
# serviceAccount.create: Whether to create the Service Accounts or not
create: true
# annotations to add to the service account
annotations: {}
# annotations to add to the operator deployment
annotations: {}
# labels to add to the operator deployment
labels: {}
# annotations to add to the operator pod
podAnnotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "8080"
# labels to add to the operator pod
podLabels: {}
podSecurityContext: {}
# runAsNonRoot: true
# runAsUser: 2
# runAsGroup: 2
# fsGroup: 2
# fsGroupChangePolicy: "OnRootMismatch"
securityContext: {}
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# seccompProfile:
# type: RuntimeDefault
# set if you want to use a different operator name
# defaults to `percona-server-mongodb-operator`
# operatorName:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
resyncPeriod: 5s
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
logStructured: false
logLevel: "INFO"
psmdb-db:
enabled: true
# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Platform type: kubernetes, openshift
# platform: kubernetes
# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
# clusterServiceDNSMode: "Internal"
finalizers:
## Set this if you want the operator to delete the primary pod last
- percona.com/delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
# - percona.com/delete-psmdb-pvc
## Set this if you want to delete all pitr chunks on cluster deletion
# - percona.com/delete-pitr-chunks
nameOverride: ""
fullnameOverride: ""
crVersion: 1.18.0
pause: false
unmanaged: false
unsafeFlags:
tls: false
replsetSize: true
mongosSize: false
terminationGracePeriod: false
backupIfUnhealthy: false
enableVolumeExpansion: false
annotations: {}
# ignoreAnnotations:
# - service.beta.kubernetes.io/aws-load-balancer-backend-protocol
# ignoreLabels:
# - rack
multiCluster:
enabled: false
# DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
versionServiceEndpoint: https://check.percona.com
apply: disabled
schedule: "0 2 * * *"
setFCV: false
image:
repository: percona/percona-server-mongodb
tag: 7.0.14-8-multi
imagePullPolicy: Always
# imagePullSecrets: []
# initImage:
# repository: percona/percona-server-mongodb-operator
# tag: 1.18.0
# initContainerSecurityContext: {}
# tls:
# mode: preferTLS
# # 90 days in hours
# certValidityDuration: 2160h
# allowInvalidCertificates: true
# issuerConf:
# name: special-selfsigned-issuer
# kind: ClusterIssuer
# group: cert-manager.io
secrets: {}
# If you set users secret here the operator will use existing one or generate random values
# If not set the operator generates the default secret with name <cluster_name>-secrets
# users: my-cluster-name-secrets
# encryptionKey: my-cluster-name-mongodb-encryption-key
# keyFile: my-cluster-name-mongodb-keyfile
# vault: my-cluster-name-vault
# ldapSecret: my-ldap-secret
# sse: my-cluster-name-sse
pmm:
enabled: false
image:
repository: percona/pmm-client
tag: 2.43.2
serverHost: monitoring-service
# mongodParams: ""
# mongosParams: ""
# resources: {}
# containerSecurityContext: {}
replsets:
rs0:
name: rs0
size: 3
# terminationGracePeriodSeconds: 300
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
# serviceAccountName: percona-server-mongodb-operator
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-server-mongodb
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
# replsetOverrides:
# my-cluster-name-rs0-0:
# host: my-cluster-name-rs0-0.example.net:27017
# tags:
# key: value-0
# my-cluster-name-rs0-1:
# host: my-cluster-name-rs0-1.example.net:27017
# tags:
# key: value-1
# my-cluster-name-rs0-2:
# host: my-cluster-name-rs0-2.example.net:27017
# tags:
# key: value-2
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: security
# operator: In
# values:
# - S1
# topologyKey: failure-domain.beta.kubernetes.io/zone
# tolerations: []
# primaryPreferTagSelector:
# region: us-west-2
# zone: us-west-2c
# priorityClass: ""
# annotations: {}
# labels: {}
# podSecurityContext: {}
# containerSecurityContext: {}
# nodeSelector: {}
# livenessProbe:
# failureThreshold: 4
# initialDelaySeconds: 60
# periodSeconds: 30
# timeoutSeconds: 10
# startupDelaySeconds: 7200
# readinessProbe:
# failureThreshold: 8
# initialDelaySeconds: 10
# periodSeconds: 3
# successThreshold: 1
# timeoutSeconds: 2
# runtimeClassName: image-rc
# storage:
# engine: wiredTiger
# wiredTiger:
# engineConfig:
# cacheSizeRatio: 0.5
# directoryForIndexes: false
# journalCompressor: snappy
# collectionConfig:
# blockCompressor: snappy
# indexConfig:
# prefixCompression: true
# inMemory:
# engineConfig:
# inMemorySizeRatio: 0.5
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# - mountPath: /secret
# name: sidecar-secret
# - mountPath: /configmap
# name: sidecar-config
# sidecarVolumes:
# - name: sidecar-secret
# secret:
# secretName: mysecret
# - name: sidecar-config
# configMap:
# name: myconfigmap
# sidecarPVCs:
# - apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
# name: sidecar-volume-claim
# spec:
# resources:
# requests:
# storage: 1Gi
# volumeMode: Filesystem
# accessModes:
# - ReadWriteOnce
podDisruptionBudget:
maxUnavailable: 1
# splitHorizons:
# my-cluster-name-rs0-0:
# external: rs0-0.mycluster.xyz
# external-2: rs0-0.mycluster2.xyz
# my-cluster-name-rs0-1:
# external: rs0-1.mycluster.xyz
# external-2: rs0-1.mycluster2.xyz
# my-cluster-name-rs0-2:
# external: rs0-2.mycluster.xyz
# external-2: rs0-2.mycluster2.xyz
expose:
enabled: false
type: ClusterIP
# loadBalancerIP: 10.0.0.0
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# labels:
# some-label: some-key
# internalTrafficPolicy: Local
# schedulerName: ""
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
# type: Directory
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
# labels:
# rack: rack-22
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
# hostAliases:
# - ip: "10.10.0.2"
# hostnames:
# - "host1"
# - "host2"
nonvoting:
enabled: false
# podSecurityContext: {}
# containerSecurityContext: {}
size: 3
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
# serviceAccountName: percona-server-mongodb-operator
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: security
# operator: In
# values:
# - S1
# topologyKey: failure-domain.beta.kubernetes.io/zone
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
# type: Directory
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
# labels:
# rack: rack-22
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
arbiter:
enabled: false
size: 1
# serviceAccountName: percona-server-mongodb-operator
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: security
# operator: In
# values:
# - S1
# topologyKey: failure-domain.beta.kubernetes.io/zone
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# nodeSelector: {}
sharding:
enabled: true
balancer:
enabled: true
configrs:
size: 3
# terminationGracePeriodSeconds: 300
# externalNodes:
# - host: 34.124.76.90
# - host: 34.124.76.91
# port: 27017
# votes: 0
# priority: 0
# - host: 34.124.76.92
# configuration: |
# operationProfiling:
# mode: slowOp
# systemLog:
# verbosity: 1
# serviceAccountName: percona-server-mongodb-operator
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-server-mongodb
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: security
# operator: In
# values:
# - S1
# topologyKey: failure-domain.beta.kubernetes.io/zone
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# podSecurityContext: {}
# containerSecurityContext: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
type: ClusterIP
# loadBalancerIP: 10.0.0.0
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# labels:
# some-label: some-key
# internalTrafficPolicy: Local
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
# emptyDir: {}
# hostPath:
# path: /data
# type: Directory
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
# labels:
# rack: rack-22
# storageClassName: standard
# accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
# hostAliases:
# - ip: "10.10.0.2"
# hostnames:
# - "host1"
# - "host2"
mongos:
size: 3
# terminationGracePeriodSeconds: 300
# configuration: |
# systemLog:
# verbosity: 1
# serviceAccountName: percona-server-mongodb-operator
# topologySpreadConstraints:
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: percona-server-mongodb
# maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: DoNotSchedule
affinity:
antiAffinityTopologyKey: "kubernetes.io/hostname"
# advanced:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: security
# operator: In
# values:
# - S1
# topologyKey: failure-domain.beta.kubernetes.io/zone
# tolerations: []
# priorityClass: ""
# annotations: {}
# labels: {}
# podSecurityContext: {}
# containerSecurityContext: {}
# nodeSelector: {}
# livenessProbe: {}
# readinessProbe: {}
# runtimeClassName: image-rc
# sidecars:
# - image: busybox
# command: ["/bin/sh"]
# args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
# name: rs-sidecar-1
# volumeMounts:
# - mountPath: /volume1
# name: sidecar-volume-claim
# sidecarPVCs: []
# sidecarVolumes: []
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
expose:
enabled: false
type: ClusterIP
# loadBalancerIP: 10.0.0.0/8
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# labels:
# some-label: some-key
# internalTrafficPolicy: Local
# nodePort: 32017
# auditLog:
# destination: file
# format: BSON
# filter: '{}'
# hostAliases:
# - ip: "10.10.0.2"
# hostnames:
# - "host1"
# - "host2"
# users:
# - name: my-user
# db: admin
# passwordSecretRef:
# name: my-user-password
# key: my-user-password-key
# roles:
# - name: clusterAdmin
# db: admin
# - name: userAdminAnyDatabase
# db: admin
# - name: my-usr
# db: admin
# passwordSecretRef:
# name: my-user-pwd
# key: my-user-pwd-key
# roles:
# - name: dbOwner
# db: sometest
# roles:
# - role: myClusterwideAdmin
# db: admin
# privileges:
# - resource:
# cluster: true
# actions:
# - addShard
# - resource:
# db: config
# collection: ''
# actions:
# - find
# - update
# - insert
# - remove
# roles:
# - role: read
# db: admin
# - role: my-role
# db: myDb
# privileges:
# - resource:
# db: ''
# collection: ''
# actions:
# - find
# authenticationRestrictions:
# - clientSource:
# - 127.0.0.1
# serverAddress:
# - 127.0.0.1
backup:
enabled: false
image:
repository: percona/percona-backup-mongodb
tag: 2.7.0-multi
# annotations:
# iam.amazonaws.com/role: role-arn
# podSecurityContext: {}
# containerSecurityContext: {}
# resources:
# limits:
# cpu: "300m"
# memory: "1.2G"
# requests:
# cpu: "300m"
# memory: "1G"
storages:
# s3-us-west:
# type: s3
# s3:
# bucket: S3-BACKUP-BUCKET-NAME-HERE
# credentialsSecret: my-cluster-name-backup-s3
# serverSideEncryption:
# kmsKeyID: 1234abcd-12ab-34cd-56ef-1234567890ab
# sseAlgorithm: aws:kms
# sseCustomerAlgorithm: AES256
# sseCustomerKey: Y3VzdG9tZXIta2V5
# retryer:
# numMaxRetries: 3
# minRetryDelay: 30ms
# maxRetryDelay: 5m
# region: us-west-2
# prefix: ""
# uploadPartSize: 10485760
# maxUploadParts: 10000
# storageClass: STANDARD
# insecureSkipTLSVerify: false
# minio:
# type: s3
# s3:
# bucket: MINIO-BACKUP-BUCKET-NAME-HERE
# region: us-east-1
# credentialsSecret: my-cluster-name-backup-minio
# endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
# prefix: ""
# azure-blob:
# type: azure
# azure:
# container: percona-container
# prefix: backups
# endpointUrl: https://perconasa.blob.core.windows.net
# credentialsSecret: perconasasecret
pitr:
enabled: false
oplogOnly: false
# oplogSpanMin: 10
# compressionType: gzip
# compressionLevel: 6
# configuration:
# backupOptions:
# priority:
# "localhost:28019": 2.5
# "localhost:27018": 2.5
# timeouts:
# startingStatus: 33
# oplogSpanMin: 10
# restoreOptions:
# batchSize: 500
# numInsertionWorkers: 10
# numDownloadWorkers: 4
# maxDownloadBufferMb: 0
# downloadChunkMb: 32
# mongodLocation: /usr/bin/mongo
# mongodLocationMap:
# "node01:2017": /usr/bin/mongo
# "node03:27017": /usr/bin/mongo
tasks:
# - name: daily-s3-us-west
# enabled: true
# schedule: "0 0 * * *"
# keep: 3
# storageName: s3-us-west
# compressionType: gzip
# - name: weekly-s3-us-west
# enabled: false
# schedule: "0 0 * * 0"
# keep: 5
# storageName: s3-us-west
# compressionType: gzip
# - name: weekly-s3-us-west-physical
# enabled: false
# schedule: "0 5 * * 0"
# keep: 5
# type: physical
# storageName: s3-us-west
# compressionType: gzip
# compressionLevel: 6
# If you set systemUsers here the secret will be constructed by helm with these values
# systemUsers:
# MONGODB_BACKUP_USER: backup
# MONGODB_BACKUP_PASSWORD: backup123456
# MONGODB_DATABASE_ADMIN_USER: databaseAdmin
# MONGODB_DATABASE_ADMIN_PASSWORD: databaseAdmin123456
# MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
# MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
# MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
# MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
# MONGODB_USER_ADMIN_USER: userAdmin
# MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
# PMM_SERVER_API_KEY: apikey
# # PMM_SERVER_USER: admin
# # PMM_SERVER_PASSWORD: admin
backup:
enabled: true
annotations:
description: "test"
name: backup
labels:
app: mongo-backup
environment: testing
clusterName: mdb-db-psmdb-db
storageName: azure-blob
type: logical
restore:
enabled: true
annotations:
description: "test"
name: restore1
labels:
app: mongo-restore
environment: testing
clusterName: mdb-db-psmdb-db
backupName: backup

View File

@ -1,21 +0,0 @@
apiVersion: v2
name: redis-cluster
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
version: 0.16.0
appVersion: "0.16.0"
home: https://github.com/ot-container-kit/redis-operator
sources:
- https://github.com/ot-container-kit/redis-operator
maintainers:
- name: iamabhishek-dubey
- name: sandy724
- name: shubham-cmyk
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift
- redis-exporter
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg
type: application

View File

@ -1,66 +0,0 @@
# Redis Cluster
Redis is a key-value based distributed database. This Helm chart sets up a Redis cluster and requires the [Redis Operator](../redis-operator) to be installed in the Kubernetes cluster; the Redis cluster definition can be modified or changed through [values.yaml](./values.yaml).
```shell
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm install <my-release> ot-helm/redis-cluster \
--set redisCluster.clusterSize=3 --namespace <namespace>
```
The Redis setup can be upgraded using the `helm upgrade` command:
```shell
helm upgrade <my-release> ot-helm/redis-cluster --install \
--set redisCluster.clusterSize=5 --namespace <namespace>
```
To uninstall the chart:
```shell
helm delete <my-release> --namespace <namespace>
```
## Pre-requisites
- Kubernetes 1.15+
- Helm 3.X
- Redis Operator 0.7.0
## Parameters
| **Name** | **Default Value** | **Description** |
|------------------------------------|--------------------------------|-----------------------------------------------------------------------------------------------|
| `imagePullSecrets` | [] | List of image pull secrets, in case the redis image is pulled from a private registry |
| `redisCluster.clusterSize` | 3 | Size of the redis cluster leader and follower nodes |
| `redisCluster.clusterVersion` | v7 | Major version of Redis setup, values can be v6 or v7 |
| `redisCluster.persistenceEnabled` | true | Whether persistence should be enabled in the Redis cluster setup |
| `redisCluster.secretName` | redis-secret | Name of the existing secret in Kubernetes |
| `redisCluster.secretKey` | password | Name of the existing secret key in Kubernetes |
| `redisCluster.image` | quay.io/opstree/redis | Name of the redis image |
| `redisCluster.tag` | v6.2 | Tag of the redis image |
| `redisCluster.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis image |
| `redisCluster.leaderServiceType` | ClusterIP | Kubernetes service type for Redis Leader |
| `redisCluster.followerServiceType` | ClusterIP | Kubernetes service type for Redis Follower |
| `redisCluster.name` | "" | Overrides the name of the chart's resources instead of using the release name |
| `externalService.enabled` | false | If redis service needs to be exposed using LoadBalancer or NodePort |
| `externalService.annotations` | {} | Kubernetes service related annotations |
| `externalService.serviceType` | NodePort | Kubernetes service type for exposing service, values - ClusterIP, NodePort, and LoadBalancer |
| `externalService.port` | 6379 | Port number on which redis external service should be exposed |
| `serviceMonitor.enabled` | false | Servicemonitor to monitor redis with Prometheus |
| `serviceMonitor.interval` | 30s | Interval at which metrics should be scraped. |
| `serviceMonitor.scrapeTimeout` | 10s | Timeout after which the scrape is ended |
| `serviceMonitor.namespace` | monitoring | Namespace in which Prometheus operator is running |
| `redisExporter.enabled` | true | Whether the redis exporter should be deployed |
| `redisExporter.image` | quay.io/opstree/redis-exporter | Name of the redis exporter image |
| `redisExporter.tag` | v6.2 | Tag of the redis exporter image |
| `redisExporter.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis exporter image |
| `redisExporter.env` | [] | Extra environment variables to be added to the redis exporter |
| `sidecars` | [] | Sidecar for redis pods |
| `nodeSelector` | {} | NodeSelector for redis statefulset |
| `priorityClassName` | "" | Priority class name for the redis statefulset |
| `storageSpec` | {} | Storage configuration for redis setup |
| `securityContext` | {} | Security Context for redis pods for changing system or kernel level parameters |
| `affinity` | {} | Affinity for node and pods for redis statefulset |
| `tolerations` | [] | Tolerations for redis statefulset |
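For example, the parameters above can be combined to expose the cluster through a LoadBalancer and scrape it with Prometheus (a sketch; adjust the release name, namespace, and monitoring namespace to your environment):
```shell
helm upgrade <my-release> ot-helm/redis-cluster --install \
  --set redisCluster.clusterSize=3 \
  --set externalService.enabled=true \
  --set externalService.serviceType=LoadBalancer \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.namespace=monitoring \
  --namespace <namespace>
```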

View File

@ -1,90 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/* Define common labels */}}
{{- define "common.labels" -}}
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
{{- if .Values.labels }}
{{- range $labelkey, $labelvalue := .Values.labels }}
{{ $labelkey}}: {{ $labelvalue }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Helper for Redis Cluster (leader & follower) */}}
{{- define "redis.role" -}}
{{- if .affinity }}
affinity:
{{- toYaml .affinity | nindent 2 }}
{{- end }}
{{- if .tolerations }}
tolerations:
{{- toYaml .tolerations | nindent 2 }}
{{- end }}
{{- if .pdb.enabled }}
pdb:
enabled: {{ .pdb.enabled }}
maxUnavailable: {{ .pdb.maxUnavailable }}
minAvailable: {{ .pdb.minAvailable }}
{{- end }}
{{- if .nodeSelector }}
nodeSelector:
{{- toYaml .nodeSelector | nindent 2 }}
{{- end }}
{{- if .securityContext }}
securityContext:
{{- toYaml .securityContext | nindent 2 }}
{{- end }}
{{- end -}}
{{/* Generate sidecar properties */}}
{{- define "sidecar.properties" -}}
{{- with .Values.sidecars }}
name: {{ .name }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate init container properties */}}
{{- define "initContainer.properties" -}}
{{- with .Values.initContainer }}
{{- if .enabled }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- if .command }}
command:
{{ toYaml .command | nindent 2 }}
{{- end }}
{{- if .args }}
args:
{{ toYaml .args | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}

View File

@ -1,17 +0,0 @@
{{- if eq .Values.externalConfig.enabled true }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}-ext-config
labels:
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
data:
redis-additional.conf: |
{{ .Values.externalConfig.data | nindent 4 }}
{{- end }}

View File

@ -1,29 +0,0 @@
{{- if and (gt (int .Values.redisCluster.follower.replicas) 0) (eq .Values.externalService.enabled true) }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}-follower-external-service
{{- if .Values.externalService.annotations }}
annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
type: {{ .Values.externalService.serviceType }}
selector:
app: {{ .Values.redisCluster.name | default .Release.Name }}-follower
redis_setup_type: cluster
role: follower
ports:
- protocol: TCP
port: {{ .Values.externalService.port }}
targetPort: 6379
name: client
{{- end }}

View File

@ -1,27 +0,0 @@
{{- if and (eq .Values.serviceMonitor.enabled true) (gt (int .Values.redisCluster.follower.replicas) 0) }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}-follower-prometheus-monitoring
labels:
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
selector:
matchLabels:
app: {{ .Values.redisCluster.name | default .Release.Name }}-follower
redis_setup_type: cluster
role: follower
endpoints:
- port: redis-exporter
interval: {{ .Values.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
namespaceSelector:
matchNames:
- {{ .Values.serviceMonitor.namespace }}
{{- end }}

View File

@ -1,29 +0,0 @@
{{- if and (gt (int .Values.redisCluster.leader.replicas) 0) (eq .Values.externalService.enabled true) }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}-leader-external-service
{{- if .Values.externalService.annotations }}
annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
type: {{ .Values.externalService.serviceType }}
selector:
app: {{ .Values.redisCluster.name | default .Release.Name }}-leader
redis_setup_type: cluster
role: leader
ports:
- protocol: TCP
port: {{ .Values.externalService.port }}
targetPort: 6379
name: client
{{- end }}

View File

@ -1,27 +0,0 @@
{{- if and (eq .Values.serviceMonitor.enabled true) (gt (int .Values.redisCluster.leader.replicas) 0) }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}-leader-prometheus-monitoring
labels:
app.kubernetes.io/name: {{ .Values.redisCluster.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisCluster.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
selector:
matchLabels:
app: {{ .Values.redisCluster.name | default .Release.Name }}-leader
redis_setup_type: cluster
role: leader
endpoints:
- port: redis-exporter
interval: {{ .Values.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
namespaceSelector:
matchNames:
- {{ .Values.serviceMonitor.namespace }}
{{- end }}

View File

@ -1,85 +0,0 @@
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisCluster
metadata:
name: {{ .Values.redisCluster.name | default .Release.Name }}
labels: {{- include "common.labels" . | nindent 4 }}
spec:
clusterSize: {{ .Values.redisCluster.clusterSize }}
persistenceEnabled: {{ .Values.redisCluster.persistenceEnabled }}
clusterVersion: {{ .Values.redisCluster.clusterVersion }}
redisLeader: {{- include "redis.role" .Values.redisCluster.leader | nindent 4 }}
replicas: {{ .Values.redisCluster.leader.replicas }}
{{- if .Values.externalConfig.enabled }}
redisConfig:
additionalRedisConfig: "{{ .Values.redisCluster.name | default .Release.Name }}-ext-config"
{{- end }}
redisFollower: {{- include "redis.role" .Values.redisCluster.follower | nindent 4 }}
replicas: {{ .Values.redisCluster.follower.replicas }}
{{- if .Values.externalConfig.enabled }}
redisConfig:
additionalRedisConfig: "{{ .Values.redisCluster.name | default .Release.Name }}-ext-config"
{{- end }}
redisExporter:
enabled: {{ .Values.redisExporter.enabled }}
image: "{{ .Values.redisExporter.image }}:{{ .Values.redisExporter.tag }}"
imagePullPolicy: "{{ .Values.redisExporter.imagePullPolicy }}"
{{- if .Values.redisExporter.resources}}
resources: {{ toYaml .Values.redisExporter.resources | nindent 6 }}
{{- end }}
{{- if .Values.redisExporter.env }}
env: {{ toYaml .Values.redisExporter.env | nindent 6 }}
{{- end }}
kubernetesConfig:
image: "{{ .Values.redisCluster.image }}:{{ .Values.redisCluster.tag }}"
imagePullPolicy: "{{ .Values.redisCluster.imagePullPolicy }}"
{{- if .Values.redisCluster.imagePullSecrets}}
imagePullSecrets: {{ toYaml .Values.redisCluster.imagePullSecrets | nindent 4 }}
{{- end }}
{{- if .Values.redisCluster.resources}}
resources: {{ toYaml .Values.redisCluster.resources | nindent 6 }}
{{- end }}
{{- if and .Values.redisCluster.redisSecret.secretName .Values.redisCluster.redisSecret.secretKey }}
redisSecret:
name: {{ .Values.redisCluster.redisSecret.secretName | quote }}
key: {{ .Values.redisCluster.redisSecret.secretKey | quote }}
{{- end }}
{{- if .Values.storageSpec }}
storage: {{ toYaml .Values.storageSpec | nindent 4 }}
{{- end }}
{{- if and .Values.priorityClassName (ne .Values.priorityClassName "") }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if .Values.podSecurityContext }}
podSecurityContext: {{ toYaml .Values.podSecurityContext | nindent 4 }}
{{- end }}
{{- if and .Values.TLS.ca .Values.TLS.cert .Values.TLS.key .Values.TLS.secret.secretName }}
TLS:
ca: {{ .Values.TLS.ca | quote }}
cert: {{ .Values.TLS.cert | quote }}
key: {{ .Values.TLS.key | quote }}
secret:
secretName: {{ .Values.TLS.secret.secretName | quote }}
{{- end }}
{{- if and .Values.acl.secret (ne .Values.acl.secret.secretName "") }}
acl:
secret:
secretName: {{ .Values.acl.secret.secretName | quote }}
{{- end }}
{{- if and .Values.sidecars (ne .Values.sidecars.name "") (ne .Values.sidecars.image "") }}
sidecars: {{ include "sidecar.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.initContainer .Values.initContainer.enabled (ne .Values.initContainer.image "") }}
initContainer: {{ include "initContainer.properties" | nindent 4 }}
{{- end }}
{{- if .Values.env }}
env: {{ toYaml .Values.env | nindent 4 }}
{{- end }}
{{- if and .Values.serviceAccountName (ne .Values.serviceAccountName "") }}
serviceAccountName: "{{ .Values.serviceAccountName }}"
{{- end }}

View File

@ -1,184 +0,0 @@
---
redisCluster:
name: ""
clusterSize: 3
clusterVersion: v7
persistenceEnabled: true
image: quay.io/opstree/redis
tag: v7.0.12
imagePullPolicy: IfNotPresent
imagePullSecrets: {}
# - name: Secret with Registry credentials
redisSecret:
secretName: ""
secretKey: ""
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
leader:
replicas: 3
serviceType: ClusterIP
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
nodeSelector: null
# memory: medium
securityContext: {}
pdb:
enabled: false
maxUnavailable: 1
minAvailable: 1
follower:
replicas: 3
serviceType: ClusterIP
affinity: null
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
nodeSelector: null
# memory: medium
securityContext: {}
pdb:
enabled: false
maxUnavailable: 1
minAvailable: 1
labels: {}
# foo: bar
# test: echo
externalConfig:
enabled: false
data: |
tcp-keepalive 400
slowlog-max-len 158
stream-node-max-bytes 2048
externalService:
enabled: false
# annotations:
# foo: bar
serviceType: LoadBalancer
port: 6379
serviceMonitor:
enabled: false
interval: 30s
scrapeTimeout: 10s
namespace: monitoring
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter
tag: "v1.44.0"
imagePullPolicy: IfNotPresent
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
env: []
# - name: VAR_NAME
# value: "value1"
sidecars:
name: ""
image: ""
imagePullPolicy: "IfNotPresent"
resources:
limits:
cpu: "100m"
memory: "128Mi"
requests:
cpu: "50m"
memory: "64Mi"
env: {}
# - name: MY_ENV_VAR
# value: "my-env-var-value"
initContainer:
enabled: false
image: ""
imagePullPolicy: "IfNotPresent"
resources: {}
# requests:
# memory: "64Mi"
# cpu: "250m"
# limits:
# memory: "128Mi"
# cpu: "500m"
env: []
command: []
args: []
priorityClassName: ""
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: standard
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
nodeConfVolume: true
nodeConfVolumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
# selector: {}
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
# serviceAccountName: redis-sa
TLS:
ca: ca.key
cert: tls.crt
key: tls.key
secret:
secretName: ""
acl:
secret:
secretName: ""
env: []
# - name: VAR_NAME
# value: "value1"
serviceAccountName: ""

View File

@ -1 +0,0 @@
*.tgz

View File

@ -1,21 +0,0 @@
---
apiVersion: v2
version: 0.16.0
appVersion: "0.16.0"
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
engine: gotpl
maintainers:
- name: iamabhishek-dubey
- name: sandy724
- name: shubham-cmyk
name: redis-operator
sources:
- https://github.com/OT-CONTAINER-KIT/redis-operator
home: https://github.com/OT-CONTAINER-KIT/redis-operator
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,112 +0,0 @@
# Redis Operator Helm Chart
## Introduction
This Helm chart deploys the redis-operator into your Kubernetes cluster. The operator facilitates the deployment, scaling, and management of Redis clusters and other Redis resources provided by the OpsTree Solutions team.
## Pre-requisites
- Helm v3+
- Kubernetes v1.16+
- If you intend to use the cert-manager, ensure that the cert-manager CRDs are installed before deploying the redis-operator.
## Installation Steps
### 1. Add Helm Repository
```bash
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
```
### 2. Install Cert-Manager CRDs (if using cert-manager)
If you plan to use cert-manager with the redis-operator, you need to install the cert-manager CRDs before deploying the operator.
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.4/cert-manager.crds.yaml
```
### 3. Install Redis Operator
Replace `<YourCertSecretName>` and `<YourPrivateKey>` with your specific values.
```bash
helm install <redis-operator> ot-helm/redis-operator --version=0.15.5 --set certificate.secretName=<YourCertSecretName> --set certmanager.enabled=true --set redisOperator.webhook=true --namespace <redis-operator> --create-namespace
```
> Note: If `certificate.secretName` is not provided, the operator will generate a self-signed certificate and use it for the webhook server.
---
> Note: If you want to disable the webhook, pass `--set redisOperator.webhook=false` and `--set certmanager.enabled=false` while installing the redis-operator.
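For example, a webhook-less installation could look like this (a sketch; the release and namespace names are illustrative):
```bash
helm install redis-operator ot-helm/redis-operator \
  --set redisOperator.webhook=false --set certmanager.enabled=false \
  --namespace redis-operator --create-namespace
```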
### 4. Patch the CA Bundle (if using cert-manager)
Cert-manager injects the CA bundle into the webhook configuration.
```bash
kubectl patch crd redis.redis.redis.opstreelabs.in -p '{"metadata":{"annotations":{"cert-manager.io/inject-ca-from":"<redis-operator>/<serving-cert>"}}}'
kubectl patch crd redisclusters.redis.redis.opstreelabs.in -p '{"metadata":{"annotations":{"cert-manager.io/inject-ca-from":"<redis-operator>/<serving-cert>"}}}'
kubectl patch crd redisreplications.redis.redis.opstreelabs.in -p '{"metadata":{"annotations":{"cert-manager.io/inject-ca-from":"<redis-operator>/<serving-cert>"}}}'
kubectl patch crd redissentinels.redis.redis.opstreelabs.in -p '{"metadata":{"annotations":{"cert-manager.io/inject-ca-from":"<redis-operator>/<serving-cert>"}}}'
```
> Note: Replace `<redis-operator>` and `<serving-cert>` with your specific values, i.e. the release name and the certificate name.
#### You can verify the patch by running the following commands
```bash
kubectl get crd redis.redis.redis.opstreelabs.in -o=jsonpath='{.metadata.annotations}'
kubectl get crd redisclusters.redis.redis.opstreelabs.in -o=jsonpath='{.metadata.annotations}'
kubectl get crd redisreplications.redis.redis.opstreelabs.in -o=jsonpath='{.metadata.annotations}'
kubectl get crd redissentinels.redis.redis.opstreelabs.in -o=jsonpath='{.metadata.annotations}'
```
### How to generate a private key (optional)
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt
kubectl create secret tls <webhook-server-cert> --key tls.key --cert tls.crt -n <redis-operator>
```
> Note: This secret will be used as the webhook server certificate, so generate it before installing the redis-operator.
## Default Values
| Parameter | Description | Default |
|-------------------------------------|------------------------------------|--------------------------------------------------------------|
| `redisOperator.name` | Operator name | `redis-operator` |
| `redisOperator.imageName` | Image repository | `quay.io/opstree/redis-operator` |
| `redisOperator.imageTag` | Image tag | `{{appVersion}}` |
| `redisOperator.imagePullPolicy` | Image pull policy | `Always` |
| `redisOperator.podAnnotations` | Additional pod annotations | `{}` |
| `redisOperator.podLabels` | Additional Pod labels | `{}` |
| `redisOperator.extraArgs` | Additional arguments for the operator | `[]` |
| `redisOperator.watch_namespace` | Namespace for the operator to watch | `""` |
| `redisOperator.env` | Environment variables for the operator | `[]` |
| `redisOperator.webhook` | Enable webhook | `false` |
| `resources.limits.cpu` | CPU limit | `500m` |
| `resources.limits.memory` | Memory limit | `500Mi` |
| `resources.requests.cpu` | CPU request | `500m` |
| `resources.requests.memory` | Memory request | `500Mi` |
| `replicas` | Number of replicas | `1` |
| `serviceAccountName` | Service account name | `redis-operator` |
| `certificate.name` | Certificate name | `serving-cert` |
| `certificate.secretName` | Certificate secret name | `webhook-server-cert` |
| `issuer.type` | Issuer type | `selfSigned` |
| `issuer.name` | Issuer name | `redis-operator-issuer` |
| `issuer.email` | Issuer email | `shubham.gupta@opstree.com` |
| `issuer.server` | Issuer server URL | `https://acme-v02.api.letsencrypt.org/directory` |
| `issuer.privateKeySecretName` | Private key secret name | `letsencrypt-prod` |
| `certManager.enabled` | Enable cert-manager | `false` |
## Scheduling Parameters
| Parameter | Description | Default |
|-------------------------|--------------------------------------------|----------|
| `priorityClassName` | Priority class name for the pods | `""` |
| `nodeSelector` | Labels for pod assignment | `{}` |
| `tolerateAllTaints` | Whether to tolerate all node taints | `false` |
| `tolerations` | Taints to tolerate | `[]` |
| `affinity` | Affinity rules for pod assignment | `{}` |
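As an illustration of the scheduling parameters (a sketch; the `high-priority` class, the `disktype=ssd` node label, and the `dedicated=redis` taint are hypothetical and must already exist in your cluster):
```bash
helm upgrade redis-operator ot-helm/redis-operator --install \
  --set priorityClassName=high-priority \
  --set nodeSelector.disktype=ssd \
  --set "tolerations[0].key=dedicated" \
  --set "tolerations[0].operator=Equal" \
  --set "tolerations[0].value=redis" \
  --set "tolerations[0].effect=NoSchedule" \
  --namespace redis-operator --create-namespace
```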

View File

@ -1,34 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/* Define issuer spec based on the type */}}
{{- define "redis-operator.issuerSpec" -}}
{{- if eq .Values.issuer.type "acme" }}
acme:
email: {{ .Values.issuer.email }}
server: {{ .Values.issuer.server }}
privateKeySecretRef:
name: {{ .Values.issuer.privateKeySecretName }}
solvers:
- http01:
ingress:
class: {{ .Values.issuer.solver.ingressClass }}
{{- else }}
selfSigned: {}
{{- end }}
{{- end -}}
{{/* Common labels */}}
{{- define "redisOperator.labels" -}}
app.kubernetes.io/name: {{ .Values.redisOperator.name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: operator
app.kubernetes.io/part-of: {{ .Release.Name }}
{{- end }}
{{/* Selector labels */}}
{{- define "redisOperator.selectorLabels" -}}
name: {{ .Values.redisOperator.name }}
{{- end }}

View File

@ -1,43 +0,0 @@
{{ if .Values.certmanager.enabled }}
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ .Values.issuer.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ .Values.redisOperator.name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: issuer
app.kubernetes.io/part-of: {{ .Release.Name }}
spec:
{{- include "redis-operator.issuerSpec" . | nindent 2 }}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .Values.certificate.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ .Values.redisOperator.name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: certificate
app.kubernetes.io/part-of: {{ .Release.Name }}
spec:
dnsNames:
- {{ .Values.service.name }}.{{ .Values.service.namespace }}.svc
- {{ .Values.service.name }}.{{ .Values.service.namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{ .Values.issuer.name }}
secretName: {{ .Values.certificate.secretName }}
{{ end }}

View File

@ -1,76 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.redisOperator.name }}
namespace: {{ .Release.Namespace }}
labels: {{- include "redisOperator.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels: {{- include "redisOperator.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ .Values.certificate.name }}
{{- with .Values.redisOperator.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels: {{- include "redisOperator.selectorLabels" . | nindent 8 }}
{{- with .Values.redisOperator.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
containers:
- name: "{{ .Values.redisOperator.name }}"
image: "{{ .Values.redisOperator.imageName }}:{{ .Values.redisOperator.imageTag | default (printf "v%s" .Chart.AppVersion) }}"
imagePullPolicy: {{ .Values.redisOperator.imagePullPolicy }}
command:
- /manager
args:
- --leader-elect
{{- range $arg := .Values.redisOperator.extraArgs }}
- {{ $arg }}
{{- end }}
{{- if .Values.redisOperator.webhook }}
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
{{- end }}
env:
- name: ENABLE_WEBHOOKS
value: "{{ .Values.redisOperator.webhook | toString }}"
{{- if .WATCH_NAMESPACE }}
- name: WATCH_NAMESPACE
value: {{ .WATCH_NAMESPACE }}
{{- end }}
{{- range $env := .Values.redisOperator.env }}
- name: {{ $env.name }}
value: {{ $env.value | quote }}
{{- end }}
{{- if .Values.resources }}
resources: {{ toYaml .Values.resources | nindent 10 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName}}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: "{{ .Values.serviceAccountName }}"
serviceAccount: "{{ .Values.serviceAccountName }}"
{{- if .Values.redisOperator.webhook }}
volumes:
- name: cert
secret:
defaultMode: 420
secretName: {{ .Values.certificate.secretName }}
{{- end }}

View File

@ -1,21 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.redisOperator.name }}
labels:
app.kubernetes.io/name : {{ .Values.redisOperator.name }}
helm.sh/chart : {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by : {{ .Release.Service }}
app.kubernetes.io/instance : {{ .Release.Name }}
app.kubernetes.io/version : {{ .Chart.AppVersion }}
app.kubernetes.io/component: role-binding
app.kubernetes.io/part-of : {{ .Release.Name }}
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccountName }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ .Values.redisOperator.name }}
apiGroup: rbac.authorization.k8s.io

View File

@ -1,126 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.redisOperator.name }}
labels:
app.kubernetes.io/name : {{ .Values.redisOperator.name }}
helm.sh/chart : {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by : {{ .Release.Service }}
app.kubernetes.io/instance : {{ .Release.Name }}
app.kubernetes.io/version : {{ .Chart.AppVersion }}
app.kubernetes.io/component: role
app.kubernetes.io/part-of : {{ .Release.Name }}
rules:
- apiGroups:
- redis.redis.opstreelabs.in
resources:
- rediss
- redisclusters
- redisreplications
- redis
- rediscluster
- redissentinel
- redissentinels
- redisreplication
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- nonResourceURLs:
- '*'
verbs:
- get
- apiGroups:
- "apiextensions.k8s.io"
resources:
- "customresourcedefinitions"
verbs:
- "get"
- "list"
- "watch"
- apiGroups:
- redis.redis.opstreelabs.in
resources:
- redis/finalizers
- rediscluster/finalizers
- redisclusters/finalizers
- redissentinel/finalizers
- redissentinels/finalizers
- redisreplication/finalizers
- redisreplications/finalizers
verbs:
- update
- apiGroups:
- redis.redis.opstreelabs.in
resources:
- redis/status
- rediscluster/status
- redisclusters/status
- redissentinel/status
- redissentinels/status
- redisreplication/status
- redisreplications/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- secrets
- pods/exec
- pods
- services
- configmaps
- events
- persistentvolumeclaims
- namespace
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- "policy"
resources:
- poddisruptionbudgets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

View File

@ -1,14 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.redisOperator.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name : {{ .Values.redisOperator.name }}
helm.sh/chart : {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by : {{ .Release.Service }}
app.kubernetes.io/instance : {{ .Release.Name }}
app.kubernetes.io/version : {{ .Chart.AppVersion }}
app.kubernetes.io/component: service-account
app.kubernetes.io/part-of : {{ .Release.Name }}

View File

@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name : {{ .Values.redisOperator.name }}
helm.sh/chart : {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by : {{ .Release.Service }}
app.kubernetes.io/instance : {{ .Release.Name }}
app.kubernetes.io/version : {{ .Chart.AppVersion }}
app.kubernetes.io/component: webhook
app.kubernetes.io/part-of : {{ .Release.Name }}
name: {{ .Values.service.name }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
name: {{ .Values.redisOperator.name }}

View File

@ -1,60 +0,0 @@
---
redisOperator:
name: redis-operator
imageName: quay.io/opstree/redis-operator
# Overrides the image tag whose default is the chart appVersion.
imageTag: ""
imagePullPolicy: Always
# Additional pod annotations
podAnnotations: {}
# Additional Pod labels (e.g. for filtering Pod by custom labels)
podLabels: {}
# Additional arguments for redis-operator container
extraArgs: []
# - -zap-log-level=error
watch_namespace: ""
env: []
webhook: false
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 500m
memory: 500Mi
replicas: 1
serviceAccountName: redis-operator
service:
name: webhook-service
namespace: redis-operator
certificate:
name: serving-cert
secretName: webhook-server-cert
issuer:
type: selfSigned
name: redis-operator-issuer
email: shubham.gupta@opstree.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretName: letsencrypt-prod
solver:
enabled: true
ingressClass: nginx
certmanager:
enabled: false
priorityClassName: ""
nodeSelector: {}
tolerateAllTaints: false
tolerations: []
affinity: {}

View File

@ -1,22 +0,0 @@
apiVersion: v2
name: redis-replication
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
type: application
engine: gotpl
maintainers:
- name: iamabhishek-dubey
- name: sandy724
- name: shubham-cmyk
sources:
- https://github.com/ot-container-kit/redis-operator
version: 0.16.0
appVersion: "0.16.0"
home: https://github.com/ot-container-kit/redis-operator
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift
- redis-exporter
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg

View File

@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/* Define common labels */}}
{{- define "common.labels" -}}
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
{{- if .Values.labels }}
{{- range $labelkey, $labelvalue := .Values.labels }}
{{ $labelkey}}: {{ $labelvalue }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate init container properties */}}
{{- define "initContainer.properties" -}}
{{- with .Values.initContainer }}
{{- if .enabled }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- if .command }}
command:
{{ toYaml .command | nindent 2 }}
{{- end }}
{{- if .args }}
args:
{{ toYaml .args | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate sidecar properties */}}
{{- define "sidecar.properties" -}}
{{- with .Values.sidecars }}
name: {{ .name }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}

View File

@ -1,17 +0,0 @@
{{- if eq .Values.externalConfig.enabled true }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.redisReplication.name | default .Release.Name }}-ext-config
labels:
app.kubernetes.io/name: {{ .Values.redisReplication.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisReplication.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
data:
redis-additional.conf: |
{{ .Values.externalConfig.data | nindent 4 }}
{{- end }}

View File

@ -1,88 +0,0 @@
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: {{ .Values.redisReplication.name | default .Release.Name }}
labels: {{- include "common.labels" . | nindent 4 }}
spec:
clusterSize: {{ .Values.redisReplication.clusterSize }}
kubernetesConfig:
image: "{{ .Values.redisReplication.image }}:{{ .Values.redisReplication.tag }}"
imagePullPolicy: "{{ .Values.redisReplication.imagePullPolicy }}"
{{- if .Values.redisReplication.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.redisReplication.imagePullSecrets | nindent 4 }}
{{- end }}
{{- if .Values.redisReplication.resources}}
resources: {{ toYaml .Values.redisReplication.resources | nindent 6 }}
{{- end }}
{{- if and .Values.redisReplication.redisSecret.secretName .Values.redisReplication.redisSecret.secretKey }}
redisSecret:
name: {{ .Values.redisReplication.redisSecret.secretName | quote }}
key: {{ .Values.redisReplication.redisSecret.secretKey | quote }}
{{- end }}
{{- if .Values.redisReplication.ignoreAnnotations}}
ignoreAnnotations: {{ toYaml .Values.redisReplication.ignoreAnnotations | nindent 6 }}
{{- end }}
redisExporter:
enabled: {{ .Values.redisExporter.enabled }}
image: "{{ .Values.redisExporter.image }}:{{ .Values.redisExporter.tag }}"
imagePullPolicy: "{{ .Values.redisExporter.imagePullPolicy }}"
{{- if .Values.redisExporter.resources}}
resources: {{ toYaml .Values.redisExporter.resources | nindent 6 }}
{{- end }}
{{- if .Values.redisExporter.env }}
env: {{ toYaml .Values.redisExporter.env | nindent 6 }}
{{- end }}
{{- if .Values.externalConfig.enabled }}
redisConfig:
additionalRedisConfig: "{{ .Values.redisReplication.name | default .Release.Name }}-ext-config"
{{- end }}
{{- if .Values.storageSpec }}
storage: {{ toYaml .Values.storageSpec | nindent 4 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{ toYaml .Values.nodeSelector | nindent 4 }}
{{- end }}
{{- if .Values.podSecurityContext }}
podSecurityContext: {{ toYaml .Values.podSecurityContext | nindent 4 }}
{{- end }}
{{- if .Values.securityContext }}
securityContext: {{ toYaml .Values.securityContext | nindent 4 }}
{{- end }}
{{- if and .Values.priorityClassName (ne .Values.priorityClassName "") }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if .Values.affinity }}
affinity: {{ toYaml .Values.affinity | nindent 4 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{ toYaml .Values.tolerations | nindent 4 }}
{{- end }}
{{- if and .Values.TLS.ca .Values.TLS.cert .Values.TLS.key .Values.TLS.secret.secretName }}
TLS:
ca: {{ .Values.TLS.ca | quote }}
cert: {{ .Values.TLS.cert | quote }}
key: {{ .Values.TLS.key | quote }}
secret:
secretName: {{ .Values.TLS.secret.secretName | quote }}
{{- end }}
{{- if and .Values.acl.secret (ne .Values.acl.secret.secretName "") }}
acl:
secret:
secretName: {{ .Values.acl.secret.secretName | quote }}
{{- end }}
{{- if and .Values.initContainer .Values.initContainer.enabled (ne .Values.initContainer.image "") }}
initContainer: {{ include "initContainer.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.sidecars (ne .Values.sidecars.name "") (ne .Values.sidecars.image "") }}
sidecars: {{ include "sidecar.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.serviceAccountName (ne .Values.serviceAccountName "") }}
serviceAccountName: "{{ .Values.serviceAccountName }}"
{{- end }}
{{- if .Values.env }}
env: {{ toYaml .Values.env | nindent 4 }}
{{- end }}

View File

@ -1,29 +0,0 @@
{{- if eq .Values.externalService.enabled true }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.redisReplication.name | default .Release.Name }}-external-service
{{- if .Values.externalService.annotations }}
annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ .Values.redisReplication.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisReplication.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
type: {{ .Values.externalService.serviceType }}
selector:
app: {{ .Values.redisReplication.name | default .Release.Name }}
redis_setup_type: replication
role: replication
ports:
- protocol: TCP
port: {{ .Values.externalService.port }}
targetPort: 6379
name: client
{{- end }}

View File

@ -1,27 +0,0 @@
{{- if eq .Values.serviceMonitor.enabled true }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ .Values.redisReplication.name | default .Release.Name }}-prometheus-monitoring
labels:
app.kubernetes.io/name: {{ .Values.redisReplication.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisReplication.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
selector:
matchLabels:
app: {{ .Values.redisReplication.name | default .Release.Name }}
redis_setup_type: replication
role: replication
endpoints:
- port: redis-exporter
interval: {{ .Values.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
namespaceSelector:
matchNames:
- {{ .Values.serviceMonitor.namespace }}
{{- end }}

View File

@ -1,149 +0,0 @@
---
redisReplication:
name: ""
clusterSize: 3
image: quay.io/opstree/redis
tag: v7.0.12
imagePullPolicy: IfNotPresent
imagePullSecrets: []
# - name: Secret with Registry credentials
redisSecret:
secretName: ""
secretKey: ""
serviceType: ClusterIP
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
ignoreAnnotations: []
# - "redis.opstreelabs.in/ignore"
# Overwrite name for resources
# name: ""
labels: {}
# foo: bar
# test: echo
externalConfig:
enabled: false
data: |
tcp-keepalive 400
slowlog-max-len 158
stream-node-max-bytes 2048
externalService:
enabled: false
# annotations:
# foo: bar
serviceType: NodePort
port: 6379
serviceMonitor:
enabled: false
interval: 30s
scrapeTimeout: 10s
namespace: monitoring
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter
tag: "v1.44.0"
imagePullPolicy: IfNotPresent
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
env: []
# - name: VAR_NAME
# value: "value1"
initContainer:
enabled: false
image: ""
imagePullPolicy: "IfNotPresent"
resources: {}
# requests:
# memory: "64Mi"
# cpu: "250m"
# limits:
# memory: "128Mi"
# cpu: "500m"
env: []
command: []
args: []
sidecars:
name: ""
image: ""
imagePullPolicy: "IfNotPresent"
resources:
limits:
cpu: "100m"
memory: "128Mi"
requests:
cpu: "50m"
memory: "64Mi"
env: []
# - name: MY_ENV_VAR
# value: "my-env-var-value"
priorityClassName: ""
nodeSelector: {}
# memory: medium
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: standard
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
# selector: {}
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
securityContext: {}
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
serviceAccountName: ""
TLS:
ca: ca.key
cert: tls.crt
key: tls.key
secret:
secretName: ""
acl:
secret:
secretName: ""
env: []
# - name: VAR_NAME
# value: "value1"

View File

@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -1,21 +0,0 @@
apiVersion: v2
name: redis-sentinel
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
version: 0.16.0
appVersion: "0.16.0"
home: https://github.com/ot-container-kit/redis-operator
sources:
- https://github.com/ot-container-kit/redis-operator
maintainers:
- name: iamabhishek-dubey
- name: sandy724
- name: shubham-cmyk
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift
- redis-exporter
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg
type: application

View File

@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/* Define common labels */}}
{{- define "common.labels" -}}
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
{{- if .Values.labels }}
{{- range $labelkey, $labelvalue := .Values.labels }}
{{ $labelkey}}: {{ $labelvalue }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate init container properties */}}
{{- define "initContainer.properties" -}}
{{- with .Values.initContainer }}
{{- if .enabled }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- if .command }}
command:
{{ toYaml .command | nindent 2 }}
{{- end }}
{{- if .args }}
args:
{{ toYaml .args | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate sidecar properties */}}
{{- define "sidecar.properties" -}}
{{- with .Values.sidecars }}
name: {{ .name }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}

View File

@ -1,17 +0,0 @@
{{- if eq .Values.externalConfig.enabled true }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.redisSentinel.name | default .Release.Name }}-ext-config
labels:
app.kubernetes.io/name: {{ .Values.redisSentinel.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisSentinel.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
data:
redis-sentinel-additional.conf: |
{{ .Values.externalConfig.data | nindent 4 }}
{{- end }}

View File

@ -1,97 +0,0 @@
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisSentinel
metadata:
name: {{ .Values.redisSentinel.name | default .Release.Name }}
labels: {{- include "common.labels" . | nindent 4 }}
spec:
clusterSize: {{ .Values.redisSentinel.clusterSize }}
# Sentinel Config
redisSentinelConfig:
redisReplicationName: {{ .Values.redisSentinelConfig.redisReplicationName}}
masterGroupName : {{ .Values.redisSentinelConfig.masterGroupName | default "myMaster" | quote}}
redisPort: {{ .Values.redisSentinelConfig.redisPort | default "6379" | quote}}
quorum: {{ .Values.redisSentinelConfig.quorum | default "2" | quote}}
parallelSyncs: {{ .Values.redisSentinelConfig.parallelSyncs | default "1" | quote}}
failoverTimeout: {{ .Values.redisSentinelConfig.failoverTimeout | default "180000" | quote}}
downAfterMilliseconds: {{ .Values.redisSentinelConfig.downAfterMilliseconds | default "30000" | quote}}
{{- if eq .Values.externalConfig.enabled true }}
additionalSentinelConfig: {{ .Values.redisSentinel.name | default .Release.Name }}-ext-config
{{- end }}
kubernetesConfig:
image: "{{ .Values.redisSentinel.image }}:{{ .Values.redisSentinel.tag }}"
imagePullPolicy: "{{ .Values.redisSentinel.imagePullPolicy }}"
{{- if .Values.redisSentinel.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.redisSentinel.imagePullSecrets | nindent 4 }}
{{- end }}
{{- if .Values.redisSentinel.resources}}
resources: {{ toYaml .Values.redisSentinel.resources | nindent 6 }}
{{- end }}
{{- if and .Values.redisSentinel.redisSecret.secretName .Values.redisSentinel.redisSecret.secretKey }}
redisSecret:
name: {{ .Values.redisSentinel.redisSecret.secretName | quote }}
key: {{ .Values.redisSentinel.redisSecret.secretKey | quote }}
{{- end }}
{{- if .Values.redisSentinel.ignoreAnnotations}}
ignoreAnnotations: {{ toYaml .Values.redisSentinel.ignoreAnnotations | nindent 6 }}
{{- end }}
redisExporter:
enabled: {{ .Values.redisExporter.enabled }}
image: "{{ .Values.redisExporter.image }}:{{ .Values.redisExporter.tag }}"
imagePullPolicy: "{{ .Values.redisExporter.imagePullPolicy }}"
{{- if .Values.redisExporter.resources}}
resources: {{ toYaml .Values.redisExporter.resources | nindent 6 }}
{{- end }}
{{- if .Values.redisExporter.env }}
env: {{ toYaml .Values.redisExporter.env | nindent 6 }}
{{- end }}
{{- if .Values.externalConfig.enabled }}
redisConfig:
additionalRedisConfig: "{{ .Values.redisSentinel.name | default .Release.Name }}-ext-config"
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{ toYaml .Values.nodeSelector | nindent 4 }}
{{- end }}
{{- if .Values.podSecurityContext }}
podSecurityContext: {{ toYaml .Values.podSecurityContext | nindent 4 }}
{{- end }}
{{- if .Values.securityContext }}
securityContext: {{ toYaml .Values.securityContext | nindent 4 }}
{{- end }}
{{- if and .Values.priorityClassName (ne .Values.priorityClassName "") }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if .Values.affinity }}
affinity: {{ toYaml .Values.affinity | nindent 4 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{ toYaml .Values.tolerations | nindent 4 }}
{{- end }}
{{- if and .Values.TLS.ca .Values.TLS.cert .Values.TLS.key .Values.TLS.secret.secretName }}
TLS:
ca: {{ .Values.TLS.ca | quote }}
cert: {{ .Values.TLS.cert | quote }}
key: {{ .Values.TLS.key | quote }}
secret:
secretName: {{ .Values.TLS.secret.secretName | quote }}
{{- end }}
{{- if and .Values.acl.secret (ne .Values.acl.secret.secretName "") }}
acl:
secret:
secretName: {{ .Values.acl.secret.secretName | quote }}
{{- end }}
{{- if and .Values.initContainer .Values.initContainer.enabled (ne .Values.initContainer.image "") }}
initContainer: {{ include "initContainer.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.sidecars (ne .Values.sidecars.name "") (ne .Values.sidecars.image "") }}
sidecars: {{ include "sidecar.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.serviceAccountName (ne .Values.serviceAccountName "") }}
serviceAccountName: "{{ .Values.serviceAccountName }}"
{{- end }}
{{- if .Values.env }}
env: {{ toYaml .Values.env | nindent 4 }}
{{- end }}

View File

@ -1,29 +0,0 @@
{{- if eq .Values.externalService.enabled true }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.redisSentinel.name | default .Release.Name }}-external-service
{{- if .Values.externalService.annotations }}
annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ .Values.redisSentinel.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisSentinel.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
type: {{ .Values.externalService.serviceType }}
selector:
app: {{ .Values.redisSentinel.name | default .Release.Name }}
redis_setup_type: sentinel
role: sentinel
ports:
- protocol: TCP
port: {{ .Values.externalService.port }}
targetPort: 26379
name: client
{{- end }}

View File

@ -1,148 +0,0 @@
---
redisSentinel:
name: ""
clusterSize: 3
image: quay.io/opstree/redis-sentinel
tag: v7.0.12
imagePullPolicy: IfNotPresent
imagePullSecrets: []
# - name: Secret with Registry credentials
redisSecret:
secretName: ""
secretKey: ""
serviceType: ClusterIP
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
ignoreAnnotations: []
# - "redis.opstreelabs.in/ignore"
# Overwrite name for resources
# name: ""
labels: {}
# foo: bar
# test: echo
redisSentinelConfig:
redisReplicationName: "redis-replication"
masterGroupName: ""
redisPort: ""
quorum: ""
parallelSyncs: ""
failoverTimeout: ""
downAfterMilliseconds: ""
externalConfig:
enabled: false
data: |
tcp-keepalive 400
slowlog-max-len 158
stream-node-max-bytes 2048
externalService:
enabled: false
# annotations:
# foo: bar
serviceType: NodePort
port: 26379
serviceMonitor:
enabled: false
interval: 30s
scrapeTimeout: 10s
namespace: monitoring
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter
tag: "v1.44.0"
imagePullPolicy: IfNotPresent
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
env: []
# - name: VAR_NAME
# value: "value1"
initContainer:
enabled: false
image: ""
imagePullPolicy: "IfNotPresent"
resources: {}
# requests:
# memory: "64Mi"
# cpu: "250m"
# limits:
# memory: "128Mi"
# cpu: "500m"
env: []
command: []
args: []
sidecars:
name: ""
image: ""
imagePullPolicy: "IfNotPresent"
resources:
limits:
cpu: "100m"
memory: "128Mi"
requests:
cpu: "50m"
memory: "64Mi"
env: []
# - name: MY_ENV_VAR
# value: "my-env-var-value"
priorityClassName: ""
nodeSelector: {}
# memory: medium
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
securityContext: {}
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
serviceAccountName: ""
TLS:
ca: ca.key
cert: tls.crt
key: tls.key
secret:
secretName: ""
acl:
secret:
secretName: ""
env: []
# - name: VAR_NAME
# value: "value1"

View File

@ -1,22 +0,0 @@
---
apiVersion: v2
name: redis
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
version: 0.16.0
appVersion: "0.16.0"
home: https://github.com/ot-container-kit/redis-operator
sources:
- https://github.com/ot-container-kit/redis-operator
maintainers:
- name: iamabhishek-dubey
- name: sandy724
- name: shubham-cmyk
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift
- redis-exporter
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg
type: application

View File

@ -1,60 +0,0 @@
# Redis
Redis is a key-value based distributed database; this helm chart covers only the standalone setup. It requires the [Redis Operator](../redis-operator) to be running inside the Kubernetes cluster. The redis definition can be modified or changed through [values.yaml](./values.yaml); an example override is shown after the parameters table below.
```shell
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm install <my-release> ot-helm/redis --namespace <namespace>
```
The Redis setup can be upgraded using the `helm upgrade` command:
```shell
helm upgrade <my-release> ot-helm/redis --install --namespace <namespace>
```
To uninstall the chart:
```shell
helm delete <my-release> --namespace <namespace>
```
## Pre-Requisites
- Kubernetes 1.15+
- Helm 3.X
- Redis Operator 0.7.0
## Parameters
| **Name** | **Value** | **Description** |
|-----------------------------------|--------------------------------|-----------------------------------------------------------------------------------------------|
| `imagePullSecrets` | [] | List of image pull secrets, in case the redis image is pulled from a private registry |
| `redisStandalone.secretName` | redis-secret | Name of the existing secret in Kubernetes |
| `redisStandalone.secretKey` | password | Name of the existing secret key in Kubernetes |
| `redisStandalone.image` | quay.io/opstree/redis | Name of the redis image |
| `redisStandalone.tag` | v6.2 | Tag of the redis image |
| `redisStandalone.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis image |
| `redisStandalone.serviceType` | ClusterIP | Kubernetes service type for Redis |
| `redisStandalone.resources` | {} | Request and limits for redis statefulset |
| `redisStandalone.name` | "" | Overrides the name of the chart's resources instead of using the release name |
| `externalService.enabled` | false | If redis service needs to be exposed using LoadBalancer or NodePort |
| `externalService.annotations` | {} | Kubernetes service related annotations |
| `externalService.serviceType` | NodePort | Kubernetes service type for exposing service, values - ClusterIP, NodePort, and LoadBalancer |
| `externalService.port` | 6379 | Port number on which redis external service should be exposed |
| `serviceMonitor.enabled` | false | Servicemonitor to monitor redis with Prometheus |
| `serviceMonitor.interval` | 30s | Interval at which metrics should be scraped. |
| `serviceMonitor.scrapeTimeout` | 10s | Timeout after which the scrape is ended |
| `serviceMonitor.namespace` | monitoring | Namespace in which Prometheus operator is running |
| `redisExporter.enabled` | true | Whether the redis exporter should be deployed or not |
| `redisExporter.image` | quay.io/opstree/redis-exporter | Name of the redis exporter image |
| `redisExporter.tag` | v6.2 | Tag of the redis exporter image |
| `redisExporter.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis exporter image |
| `redisExporter.env` | [] | Extra environment variables which need to be added to the redis exporter |
| `sidecars` | [] | Sidecar for redis pods |
| `nodeSelector` | {} | NodeSelector for redis statefulset |
| `priorityClassName` | "" | Priority class name for the redis statefulset |
| `storageSpec` | {} | Storage configuration for redis setup |
| `securityContext` | {} | Security Context for redis pods for changing system or kernel level parameters |
| `affinity` | {} | Affinity for node and pod for redis statefulset |
| `tolerations` | [] | Tolerations for redis statefulset |
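For illustration, a minimal override file for this chart could look like the sketch below; the keys mirror the chart's values.yaml, while the `custom-values.yaml` filename and the concrete resource and storage numbers are placeholders rather than recommended settings.
```yaml
# custom-values.yaml -- illustrative override for the standalone redis chart
redisStandalone:
  tag: v7.0.12
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi
redisExporter:
  enabled: true
externalService:
  enabled: true
  serviceType: NodePort
  port: 6379
storageSpec:
  volumeClaimTemplate:
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```
The file can then be applied with `helm upgrade <my-release> ot-helm/redis --install -f custom-values.yaml --namespace <namespace>`.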

View File

@ -1,63 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/* Define common labels */}}
{{- define "common.labels" -}}
app.kubernetes.io/name: {{ .Values.redisStandalone.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisStandalone.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
{{- if .Values.labels }}
{{- range $labelkey, $labelvalue := .Values.labels }}
{{ $labelkey}}: {{ $labelvalue }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate init container properties */}}
{{- define "initContainer.properties" -}}
{{- with .Values.initContainer }}
{{- if .enabled }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- if .command }}
command:
{{ toYaml .command | nindent 2 }}
{{- end }}
{{- if .args }}
args:
{{ toYaml .args | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{/* Generate sidecar properties */}}
{{- define "sidecar.properties" -}}
{{- with .Values.sidecars }}
name: {{ .name }}
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
{{- if .resources }}
resources:
{{ toYaml .resources | nindent 2 }}
{{- end }}
{{- if .env }}
env:
{{ toYaml .env | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}

View File

@ -1,17 +0,0 @@
{{- if eq .Values.externalConfig.enabled true }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.redisStandalone.name | default .Release.Name }}-ext-config
labels:
app.kubernetes.io/name: {{ .Values.redisStandalone.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisStandalone.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
data:
redis-additional.conf: |
{{ .Values.externalConfig.data | nindent 4 }}
{{- end }}

View File

@ -1,86 +0,0 @@
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: Redis
metadata:
name: {{ .Values.redisStandalone.name | default .Release.Name }}
labels: {{- include "common.labels" . | nindent 4 }}
spec:
kubernetesConfig:
image: "{{ .Values.redisStandalone.image }}:{{ .Values.redisStandalone.tag }}"
imagePullPolicy: "{{ .Values.redisStandalone.imagePullPolicy }}"
{{- if .Values.redisStandalone.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.redisStandalone.imagePullSecrets | nindent 4 }}
{{- end }}
{{- if .Values.redisStandalone.resources}}
resources: {{ toYaml .Values.redisStandalone.resources | nindent 6 }}
{{- end }}
{{- if and .Values.redisStandalone.redisSecret.secretName .Values.redisStandalone.redisSecret.secretKey }}
redisSecret:
name: {{ .Values.redisStandalone.redisSecret.secretName | quote }}
key: {{ .Values.redisStandalone.redisSecret.secretKey | quote }}
{{- end }}
{{- if .Values.redisStandalone.ignoreAnnotations}}
ignoreAnnotations: {{ toYaml .Values.redisStandalone.ignoreAnnotations | nindent 6 }}
{{- end }}
redisExporter:
enabled: {{ .Values.redisExporter.enabled }}
image: "{{ .Values.redisExporter.image }}:{{ .Values.redisExporter.tag }}"
imagePullPolicy: "{{ .Values.redisExporter.imagePullPolicy }}"
{{- if .Values.redisExporter.resources}}
resources: {{ toYaml .Values.redisExporter.resources | nindent 6 }}
{{- end }}
{{- if .Values.redisExporter.env }}
env: {{ toYaml .Values.redisExporter.env | nindent 6 }}
{{- end }}
{{- if .Values.externalConfig.enabled }}
redisConfig:
additionalRedisConfig: "{{ .Values.redisStandalone.name | default .Release.Name }}-ext-config"
{{- end }}
{{- if .Values.storageSpec }}
storage: {{ toYaml .Values.storageSpec | nindent 4 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{ toYaml .Values.nodeSelector | nindent 4 }}
{{- end }}
{{- if .Values.podSecurityContext }}
podSecurityContext: {{ toYaml .Values.podSecurityContext | nindent 4 }}
{{- end }}
{{- if .Values.securityContext }}
securityContext: {{ toYaml .Values.securityContext | nindent 4 }}
{{- end }}
{{- if and .Values.priorityClassName (ne .Values.priorityClassName "") }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if .Values.affinity }}
affinity: {{ toYaml .Values.affinity | nindent 4 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{ toYaml .Values.tolerations | nindent 4 }}
{{- end }}
{{- if and .Values.TLS.ca .Values.TLS.cert .Values.TLS.key .Values.TLS.secret.secretName }}
TLS:
ca: {{ .Values.TLS.ca | quote }}
cert: {{ .Values.TLS.cert | quote }}
key: {{ .Values.TLS.key | quote }}
secret:
secretName: {{ .Values.TLS.secret.secretName | quote }}
{{- end }}
{{- if and .Values.acl.secret (ne .Values.acl.secret.secretName "") }}
acl:
secret:
secretName: {{ .Values.acl.secret.secretName | quote }}
{{- end }}
{{- if and .Values.initContainer .Values.initContainer.enabled (ne .Values.initContainer.image "") }}
initContainer: {{ include "initContainer.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.sidecars (ne .Values.sidecars.name "") (ne .Values.sidecars.image "") }}
sidecars: {{ include "sidecar.properties" | nindent 4 }}
{{- end }}
{{- if and .Values.serviceAccountName (ne .Values.serviceAccountName "") }}
serviceAccountName: "{{ .Values.serviceAccountName }}"
{{- end }}
{{- if .Values.env }}
env: {{ toYaml .Values.env | nindent 4 }}
{{- end }}

View File

@ -1,29 +0,0 @@
{{- if eq .Values.externalService.enabled true }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.redisStandalone.name | default .Release.Name }}-external-service
{{- if .Values.externalService.annotations }}
annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
{{- end }}
labels:
app.kubernetes.io/name: {{ .Values.redisStandalone.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisStandalone.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
type: {{ .Values.externalService.serviceType }}
selector:
app: {{ .Values.redisStandalone.name | default .Release.Name }}
redis_setup_type: standalone
role: standalone
ports:
- protocol: TCP
port: {{ .Values.externalService.port }}
targetPort: 6379
name: client
{{- end }}

View File

@ -1,27 +0,0 @@
{{- if eq .Values.serviceMonitor.enabled true }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ .Values.redisStandalone.name | default .Release.Name }}-prometheus-monitoring
labels:
app.kubernetes.io/name: {{ .Values.redisStandalone.name | default .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Values.redisStandalone.name | default .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
selector:
matchLabels:
app: {{ .Values.redisStandalone.name | default .Release.Name }}
redis_setup_type: standalone
role: standalone
endpoints:
- port: redis-exporter
interval: {{ .Values.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
namespaceSelector:
matchNames:
- {{ .Values.serviceMonitor.namespace }}
{{- end }}

View File

@ -1,145 +0,0 @@
---
redisStandalone:
name: ""
image: quay.io/opstree/redis
tag: v7.0.12
imagePullPolicy: IfNotPresent
imagePullSecrets: []
# - name: Secret with Registry credentials
redisSecret:
secretName: ""
secretKey: ""
serviceType: ClusterIP
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
ignoreAnnotations: []
# - "redis.opstreelabs.in/ignore"
labels: {}
# foo: bar
# test: echo
externalConfig:
enabled: false
data: |
tcp-keepalive 400
slowlog-max-len 158
stream-node-max-bytes 2048
externalService:
enabled: false
# annotations:
# foo: bar
serviceType: NodePort
port: 6379
serviceMonitor:
enabled: false
interval: 30s
scrapeTimeout: 10s
namespace: monitoring
redisExporter:
enabled: false
image: quay.io/opstree/redis-exporter
tag: "v1.44.0"
imagePullPolicy: IfNotPresent
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
env: []
# - name: VAR_NAME
# value: "value1"
initContainer:
enabled: false
image: ""
imagePullPolicy: "IfNotPresent"
resources: {}
# requests:
# memory: "64Mi"
# cpu: "250m"
# limits:
# memory: "128Mi"
# cpu: "500m"
env: []
command: []
args: []
sidecars:
name: ""
image: ""
imagePullPolicy: "IfNotPresent"
resources:
limits:
cpu: "100m"
memory: "128Mi"
requests:
cpu: "50m"
memory: "64Mi"
env: []
# - name: MY_ENV_VAR
# value: "my-env-var-value"
priorityClassName: ""
nodeSelector: {}
# memory: medium
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: standard
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
# selector: {}
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
securityContext: {}
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
serviceAccountName: ""
TLS:
ca: ca.key
cert: tls.crt
key: tls.key
secret:
secretName: ""
acl:
secret:
secretName: ""
env: []
# - name: VAR_NAME
# value: "value1"

View File

@ -0,0 +1,31 @@
apiVersion: v2
name: vm
description: A Helm chart for victoriametrics
type: application
version: 0.0.3
appVersion: v0.0.3
maintainers:
- name: ashwani-opstree
dependencies:
- name: victoria-metrics-k8s-stack
version: 0.25.5
repository: https://victoriametrics.github.io/helm-charts/
alias: vm
tags:
- monitoring
condition: vm.enabled
- name: prometheus-blackbox-exporter
version: 8.17.0
repository: https://prometheus-community.github.io/helm-charts/
tags:
- blackbox
alias: blackbox
condition: blackbox.enabled
- name: prometheus-msteams
version: 1.3.4
repository: https://prometheus-msteams.github.io/prometheus-msteams/
tags:
- msteams
alias: msteams
condition: msteams.enabled

View File

@ -0,0 +1,233 @@
vm:
victoria-metrics-operator:
cleanupCRD: false
defaultDashboardsEnabled: false
experimentalDashboardsEnabled: false
prometheus-node-exporter:
enabled: true
node:
enabled: true
kubeStateMetrics:
enabled: true
grafana:
enabled: true
testFramework:
enabled: false
sidecar:
datasources:
defaultDatasourceEnabled: false
resources:
requests:
cpu: "0.5"
memory: 1Gi
limits:
cpu: 1
memory: 2Gi
persistence:
enabled: true
type: sts
storageClassName: buildpiper-storage
accessModes:
- ReadWriteOnce
size: 1Gi
finalizers:
- kubernetes.io/pvc-protection
# nodeSelector:
# node_group: o11y
# tolerations:
# - key: o11y
# operator: Equal
# value: "true"
# effect: NoSchedule
alertmanager:
enabled: true
config:
global:
resolve_timeout: 5m
templates:
- "/etc/vm/configs/**/*.tmpl"
route:
receiver: "blackhole"
receivers:
- name: blackhole
spec:
configNamespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
replicaCount: 2
retention: 240h
resources:
requests:
cpu: 250m
memory: 500Mi
limits:
cpu: 250m
memory: 500Mi
# nodeSelector:
# node_group: o11y
# tolerations:
# - key: o11y
# operator: Equal
# value: "true"
# effect: NoSchedule
storage:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
vmsingle:
enabled: false
defaultRules:
create: false
kubeApiServer:
enabled: false
kubeControllerManager:
enabled: false
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
crds:
enabled: false
dashboards:
node-exporter-full: false
vmcluster:
enabled: true
spec:
retentionPeriod: "14d"
replicationFactor: 1
vmstorage:
replicaCount: 1
extraArgs:
search.maxUniqueTimeseries: "10000000000000"
resources:
limits:
cpu: "0.5"
memory: 500Mi
requests:
cpu: "0.5"
memory: 500Mi
storage:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
resources:
requests:
storage: 20Gi
# nodeSelector: {}
# tolerations: {}
vmselect:
replicaCount: 1
extraArgs:
memory.allowedPercent: "75"
search.cacheTimestampOffset: 60m
search.maxLabelsAPISeries: "10000000000000"
search.maxMemoryPerQuery: 2GB
search.maxPointsPerTimeseries: "10000000000000"
search.maxQueryDuration: 10m
search.maxQueryLen: "10000000000000"
search.maxSeries: "10000000000000"
search.maxUniqueTimeseries: "10000000000000"
storage:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
resources:
requests:
storage: 2Gi
resources:
limits:
cpu: "1"
memory: "1Gi"
requests:
cpu: "1"
memory: "1Gi"
# nodeSelector: {}
# tolerations: {}
vminsert:
replicaCount: 1
extraArgs:
maxLabelsPerTimeseries: "100"
image:
tag: v1.103.0-cluster
resources:
limits:
cpu: "0.5"
memory: 500Mi
requests:
cpu: "0.5"
memory: "500Mi"
# nodeSelector: {}
# tolerations: {}
vmagent:
enabled: true
spec:
serviceScrapeNamespaceSelector:
matchLabels:
kubernetes.io/metadata.name: vm
extraArgs:
promscrape.maxScrapeSize: 200MB
promscrape.streamParse: "true"
promscrape.dropOriginalLabels: "true"
resources:
limits:
cpu: "0.5"
memory: 500Mi
requests:
cpu: "0.5"
memory: 500Mi
scrapeInterval: 30s
# nodeSelector: {}
# tolerations: {}
vmalert:
enabled: true
spec:
resources:
limits:
cpu: "0.5"
memory: 500Mi
requests:
cpu: "0.5"
memory: 500Mi
# nodeSelector: {}
# tolerations: {}
blackbox:
enabled: false
serviceMonitor:
enabled: false
config:
modules:
http_2xx:
prober: http
timeout: 5s
http:
valid_http_versions:
- "HTTP/1.0"
- "HTTP/1.1"
- "HTTP/2.0"
no_follow_redirects: false
preferred_ip_protocol: "ip4"
fail_if_ssl: false
fail_if_not_ssl: false
msteams:
enabled: false
image:
repository: quay.io/prometheusmsteams/prometheus-msteams
tag: 1.5.2
resources:
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 100m
memory: 250Mi
connectors: []

26
charts/web/Chart.yaml Normal file
View File

@ -0,0 +1,26 @@
---
apiVersion: v2
description: A deployment helm chart which will be used to deploy any type of stateless application
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: web
sources:
- https://github.com/ot-container-kit/helm-charts
dependencies:
- name: base
version: 0.1.0
repository: https://ot-container-kit.github.io/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
- microservice
- stateless
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

46
charts/web/README.md Normal file
View File

@ -0,0 +1,46 @@
# Web Deployment Helm Chart
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A deployment helm chart which will be used to deploy any type of stateless application
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|------------------------------|----------------------------------------|
| iamabhishek-dubey | <abhishek.dubey@opstree.com> | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Requirements
| Repository | Name | Version |
|------------------------------------------------|------|---------|
| https://ot-container-kit.github.io/helm-charts | base | 0.1.0 |
## Values
| Key | Type | Default | Description |
|------------------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| autoscaling | object | `{"enabled":false,"maxReplicas":50,"minReplicas":10,"targetCPUUtilizationPercentage":65,"targetMemoryUtilizationPercentage":65}` | Autoscaling properties with target CPU and Memory details |
| base | object | `{"image":{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}}` | Base block to define the inputs for image, secret and configmap env |
| base.image | object | `{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}` | Image block with all image details |
| base.image.command | list | `[]` | Additional command arguments which need to be passed |
| base.image.pullPolicy | string | `"IfNotPresent"` | Default image pull policy |
| base.image.pullSecrets | string | `""` | Image pull secrets for private repository authentication |
| base.image.repository | string | `"nginx"` | Default image repository |
| base.image.tag | string | `"latest"` | Default image tag |
| healthcheck | object | `{"enabled":true,"statusPath":"/"}` | Healthcheck details for readiness and liveness probe |
| healthcheck.enabled | bool | `true` | Healthcheck is enabled or not |
| healthcheck.statusPath | string | `"/"` | Healthcheck status path on which the status will be checked |
| ingress | object | `{"annotations":{"ingress.gcp.kubernetes.io/pre-shared-cert":"global-sign-opstree-com-2024","kubernetes.io/ingress.allow-http":"false"},"class":"nginx","enabled":false}` | Ingress details with class, annotations and rules |
| internalIngress | object | `{"annotations":{"kubernetes.io/ingress.allow-http":"true"},"class":"nginx-internal","enabled":false}` | Internal ingress details with class, annotations and rules |
| replicaCount | int | `2` | Number of replicas for the deployment; it will be overridden when autoscaling is enabled |
| resources | object | `{}` | Kubernetes resource in terms of requests and limits |
| service | object | `{"port":80,"type":"ClusterIP"}` | Service specification details with port and type |
| service.type | string | `"ClusterIP"` | Kubernetes service type; Prometheus metrics annotations can be enabled via `service.metrics.path` and `service.metrics.port` (e.g. path: /v1/metrics, port: 8081) |
| volumes | string | `nil` | Kubernetes volumes definition which needs to be mounted |
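As a rough sketch, a values override that enables the external ingress, the healthcheck probes and autoscaling might look like the following; the hostname and numbers are placeholders, and the rule format follows the commented examples in values.yaml (a path may be a single string or a list of strings).
```yaml
# my-values.yaml -- illustrative override for the web chart
base:
  image:
    repository: nginx
    tag: latest
healthcheck:
  enabled: true
  statusPath: /
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 65
  targetMemoryUtilizationPercentage: 65
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  class: nginx
  rules:
    - hostname: sample-service.opstree.com
      path: /api/sample
    - hostname: sample-service.opstree.com
      path:
        - /api
        - /api2
```
When `autoscaling.enabled` is true the deployment omits the fixed `replicas` field, so the replica count is managed entirely by the HorizontalPodAutoscaler between `minReplicas` and `maxReplicas`.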

View File

@ -0,0 +1,4 @@
{{ include "configmap" . }}
---
{{ include "serviceAccount" . }}
---

View File

@ -0,0 +1,18 @@
{{- if .Values.volumes }}
{{- if .Values.volumes.configMaps }}
{{ range $cm := .Values.volumes.configMaps}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ $cm.name }}
labels:
{{- include "base.labels" $ | nindent 4 }}
data:
{{- range $filename, $content := $cm.data }}
{{ $filename }}: |-
{{ $content | toString | indent 4}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,87 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
{{- with .Values.replicaCount }}
replicas: {{ . }}
{{- end }}
{{- end }}
selector:
matchLabels:
{{- include "base.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "base.selectorLabels" . | nindent 8 }}
spec:
{{- if .Values.base.image.pullSecrets }}
imagePullSecrets:
- name: {{ .Values.base.image.pullSecrets }}
{{- end }}
serviceAccountName: {{ include "base.serviceAccountName" . }}
terminationGracePeriodSeconds: 120
containers:
- name: {{ include "base.fullname" . }}
image: "{{ .Values.base.image.repository }}:{{ .Values.base.image.tag }}"
imagePullPolicy: {{ .Values.base.image.pullPolicy }}
{{- if .Values.base.image.command }}
command:
{{- toYaml .Values.base.image.command | nindent 12 }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
livenessProbe:
httpGet:
path: {{ .Values.healthcheck.statusPath }}
port: http
initialDelaySeconds: 30
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: {{ .Values.healthcheck.statusPath }}
port: http
initialDelaySeconds: 30
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
failureThreshold: 3
{{- if or .Values.base.config .Values.base.secret }}
envFrom:
{{- if .Values.base.config }}
- configMapRef:
name: {{ include "base.fullname" . }}
{{- end }}
{{- if .Values.base.secret }}
- secretRef:
name: {{ include "base.fullname" . }}
{{- end }}
{{- end }}
{{- if .Values.volumes }}
volumeMounts:
{{- if .Values.volumes.configMaps }}
{{- range $conf := .Values.volumes.configMaps }}
- mountPath: {{ $conf.mountPath }}
name: {{ $conf.name }}-volume
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.volumes }}
volumes:
{{- if .Values.volumes.configMaps }}
{{- range $conf := .Values.volumes.configMaps }}
- name: {{ $conf.name }}-volume
configMap:
name: {{ $conf.name }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.autoscaling.enabled }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "base.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,39 @@
---
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: {{ .Values.ingress.class }}
labels:
{{- include "base.labels" . | nindent 4 }}
name: {{ include "base.fullname" . }}-external-ing
spec:
ingressClassName: {{ .Values.ingress.class }}
rules:
{{- range $rule := .Values.ingress.rules }}
- host: {{ $rule.hostname }}
http:
paths:
{{- if (kindIs "slice" $rule.path) }}
{{- range $path := $rule.path }}
- path: {{ $path }}
pathType: Prefix
backend:
service:
name: {{ include "base.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- if (kindIs "string" $rule.path) }}
- path: {{ $rule.path }}
pathType: Prefix
backend:
service:
name: {{ include "base.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,41 @@
{{- if .Values.internalIngress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "base.fullname" . }}-internal-ing
annotations:
kubernetes.io/ingress.class: {{ .Values.internalIngress.class }}
{{- if .Values.internalIngress.annotations }}
{{ toYaml .Values.internalIngress.annotations | nindent 4 }}
{{- end }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
ingressClassName: {{ .Values.internalIngress.class }}
rules:
{{- range $rule := .Values.internalIngress.rules }}
- host: {{ $rule.hostname }}
http:
paths:
{{- if (kindIs "slice" $rule.path) }}
{{- range $path := $rule.path }}
- path: {{ $path }}
pathType: Prefix
backend:
service:
name: {{ include "base.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- if (kindIs "string" $rule.path) }}
- path: {{ $rule.path }}
pathType: Prefix
backend:
service:
name: {{ include "base.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,12 @@
{{- if .Values.base.secret -}}
{{- $top := . -}}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
stringData:
{{- toYaml .Values.base.secret | nindent 2 -}}
{{- end -}}

View File

@ -0,0 +1,22 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "base.fullname" . }}
annotations:
{{- if .Values.service.metrics }}
prometheus.io/path: "{{ .Values.service.metrics.path }}"
prometheus.io/port: "{{ .Values.service.metrics.port }}"
prometheus.io/scrape: "true"
{{- end }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "base.selectorLabels" . | nindent 4 }}

91
charts/web/values.yaml Normal file
View File

@ -0,0 +1,91 @@
# -- Base block to define the inputs for image, secret and configmap env
base:
# -- Image block with all image details
image:
# -- Default image pull policy
pullPolicy: "IfNotPresent"
# -- Additional command arguments which need to be passed
command: []
# -- Default image repository
repository: nginx
# -- Default image tag
tag: latest
# -- Image pull secrets for private repository authentication
pullSecrets: ""
# secret:
# FOO_SECRET: BAR
# config:
# FOO_CONFIG: BAR
# -- Healthcheck details for readiness and liveness probe
healthcheck:
# -- Healthcheck is enabled or not
enabled: true
# -- Healthcheck status path on which the status will be checked
statusPath: /
# -- Autoscaling properties with target CPU and Memory details
autoscaling:
enabled: false
targetCPUUtilizationPercentage: 65
targetMemoryUtilizationPercentage: 65
minReplicas: 10
maxReplicas: 50
# -- Service specification details with port and type
service:
port: 80
# -- Prometheus metrics to expose metrics on path with port
# metrics:
# path: /v1/metrics
# port: 8081
type: ClusterIP
# -- Kubernetes resource in terms of requests and limits
resources: {}
# -- Number of replicas for the deployment; it will be overridden when autoscaling is enabled
replicaCount: 2
# -- Kubernetes volumes definition which needs to be mounted
volumes:
# -- List of configmaps with mount path and data
# configMaps:
# - name: web
# mountPath: /test
# data:
# test.txt: |-
# Dummy text
# -- Ingress details with class, annotations and rules
ingress:
enabled: false
class: nginx
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: global-sign-opstree-com-2024
kubernetes.io/ingress.allow-http: "false"
# rules:
# - hostname: origin-sample-service.opstree.com
# path: /api/sample
# - hostname: origin-sample-service.opstree.com
# path:
# - /api
# - /api2
# - hostname: sample-service.opstree.com
# path: /api/sample
# -- Internal ingress details with class, annotations and rules
internalIngress:
enabled: false
class: nginx-internal
annotations:
kubernetes.io/ingress.allow-http: "true"
# rules:
# - hostname: origin-sample-service.opstree.in
# path: /api/sample
# - hostname: origin-sample-service.opstree.in
# path:
# - /api
# - /api2
# - hostname: sample-service.opstree.in
# path: /api/sample

26
charts/worker/Chart.yaml Normal file
View File

@ -0,0 +1,26 @@
---
apiVersion: v2
description: A deployment helm chart which will be used to deploy any type of stateless application
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: worker
sources:
- https://github.com/ot-container-kit/helm-charts
dependencies:
- name: base
version: 0.1.0
repository: https://ot-container-kit.github.io/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
- microservice
- stateless
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

39
charts/worker/README.md Normal file
View File

@ -0,0 +1,39 @@
# Worker Deployment Helm Chart
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A deployment helm chart which will be used to deploy any type of stateless application
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|------------------------------|----------------------------------------|
| iamabhishek-dubey | <abhishek.dubey@opstree.com> | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Requirements
| Repository | Name | Version |
|------------------------------------------------|------|---------|
| https://ot-container-kit.github.io/helm-charts | base | 0.1.0 |
## Values
| Key | Type | Default | Description |
|------------------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| autoscaling | object | `{"enabled":false,"maxReplicas":50,"minReplicas":10,"targetCPUUtilizationPercentage":65,"targetMemoryUtilizationPercentage":65}` | Autoscaling properties with target CPU and Memory details |
| base | object | `{"image":{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}}` | Base block to define the inputs for image, secret and configmap env |
| base.image | object | `{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}` | Image block with all image details |
| base.image.command | list | `[]` | Additional command arguments which need to be passed |
| base.image.pullPolicy | string | `"IfNotPresent"` | Default image pull policy |
| base.image.pullSecrets | string | `""` | Image pull secrets for private repository authentication |
| base.image.repository | string | `"nginx"` | Default image repository |
| base.image.tag | string | `"latest"` | Default image tag |
| replicaCount | int | `2` | Number of replicas for the deployment; it will be overridden when autoscaling is enabled |
| resources | object | `{}` | Kubernetes resource in terms of requests and limits |
| volumes | string | `nil` | Kubernetes volumes definition which needs to be mounted |
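A comparable sketch for this chart, which exposes no service, ingress or healthcheck block; the command, the environment key under `base.config` and the configmap name are hypothetical values used only for illustration, while the `volumes.configMaps` layout mirrors the commented example in values.yaml.
```yaml
# my-worker-values.yaml -- illustrative override for the worker chart
base:
  image:
    repository: nginx
    tag: latest
    command:
      - /bin/sh
      - -c
      - "while true; do echo processing; sleep 30; done"
  config:
    QUEUE_NAME: jobs
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
volumes:
  configMaps:
    - name: worker-config
      mountPath: /test
      data:
        test.txt: |-
          Dummy text
```
Because the worker template defines no container ports or probes, values under `base.config` are only injected as environment variables through `envFrom`, and each entry under `volumes.configMaps` is rendered into its own ConfigMap and mounted at the given `mountPath`.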

View File

@ -0,0 +1,4 @@
{{ include "configmap" . }}
---
{{ include "serviceAccount" . }}
---

View File

@ -0,0 +1,18 @@
{{- if .Values.volumes }}
{{- if .Values.volumes.configMaps }}
{{ range $cm := .Values.volumes.configMaps}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ $cm.name }}
labels:
{{- include "base.labels" $ | nindent 4 }}
data:
{{- range $filename, $content := $cm.data }}
{{ $filename }}: |-
{{ $content | toString | indent 4}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
{{- with .Values.replicaCount }}
replicas: {{ . }}
{{- end }}
{{- end }}
selector:
matchLabels:
{{- include "base.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "base.selectorLabels" . | nindent 8 }}
spec:
{{- if .Values.base.image.pullSecrets }}
imagePullSecrets:
- name: {{ .Values.base.image.pullSecrets }}
{{- end }}
serviceAccountName: {{ include "base.serviceAccountName" . }}
terminationGracePeriodSeconds: 120
containers:
- name: {{ include "base.fullname" . }}
image: "{{ .Values.base.image.repository }}:{{ .Values.base.image.tag }}"
imagePullPolicy: {{ .Values.base.image.pullPolicy }}
{{- if .Values.base.image.command }}
command:
{{- toYaml .Values.base.image.command | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if or .Values.base.config .Values.base.secret }}
envFrom:
{{- if .Values.base.config }}
- configMapRef:
name: {{ include "base.fullname" . }}
{{- end }}
{{- if .Values.base.secret }}
- secretRef:
name: {{ include "base.fullname" . }}
{{- end }}
{{- end }}
{{- if .Values.volumes }}
volumeMounts:
{{- if .Values.volumes.configMaps }}
{{- range $conf := .Values.volumes.configMaps }}
- mountPath: {{ $conf.mountPath }}
name: {{ $conf.name }}-volume
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.volumes }}
volumes:
{{- if .Values.volumes.configMaps }}
{{- range $conf := .Values.volumes.configMaps }}
- name: {{ $conf.name }}-volume
configMap:
name: {{ $conf.name }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,33 @@
{{- if .Values.autoscaling.enabled }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "base.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,12 @@
{{- if .Values.base.secret -}}
{{- $top := . -}}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
stringData:
{{- toYaml .Values.base.secret | nindent 2 -}}
{{- end -}}

43
charts/worker/values.yaml Normal file
View File

@ -0,0 +1,43 @@
# -- Base block to define the inputs for image, secret and configmap env
base:
# -- Image block with all image details
image:
# -- Default image pull policy
pullPolicy: "IfNotPresent"
# -- Additional command arguments which needs to be passed
command: []
# -- Default image repository
repository: nginx
# -- Default image tag
tag: latest
# -- Image pull secrets for private repository authentication
pullSecrets: ""
# secret:
# FOO_SECRET: BAR
# config:
# FOO_CONFIG: BAR
# -- Autoscaling properties with target CPU and Memory details
autoscaling:
enabled: false
targetCPUUtilizationPercentage: 65
targetMemoryUtilizationPercentage: 65
minReplicas: 10
maxReplicas: 50
# -- Kubernetes resource in terms of requests and limits
resources: {}
# -- Number of replicas for the deployment; it will be overridden when autoscaling is enabled
replicaCount: 2
# -- Kubernetes volumes definition which needs to be mounted
volumes:
# -- List of configmaps with mount path and data
# configMaps:
# - name: web
# mountPath: /test
# data:
# test.txt: |-
# Dummy text

View File

@ -8,3 +8,4 @@ chart-repos:
excluded-charts:
- mysql
- pga
- redis-operator