Compare commits

No commits in common. "main" and "v1.0" have entirely different histories.
main ... v1.0

175 changed files with 32 additions and 20166 deletions

@@ -1,29 +0,0 @@
---
name: Bug report
about: Create a bug report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
**Does this issue reproduce with the latest release?**
**What operating system and processor architecture are you using (`kubectl version`)?**
<details><summary><code>kubectl version</code> Output</summary><br><pre>
$ kubectl version
</pre></details>
**What did you do?**
<!--
If possible, provide a recipe for reproducing the error.
A detailed sequence of steps describing what to do to observe the issue is good.
A complete runnable bash shell script is best.
-->
**What did you expect to see?**
**What did you see instead?**

@@ -1,16 +0,0 @@
---
name: Documentation Update
about: Propose changes to the project's documentation
labels: documentation
---
<!-- Please answer these questions before submitting your documentation update. Thanks! -->
**Which document needs to be updated?**
<!-- Specify the document or section that needs an update. -->
**Expected changes**
<!-- Describe what you'd like to see updated. -->
**Additional context**
<!-- Add any other context or screenshots about the documentation update request here. -->

@@ -1,22 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: 'enhancement'
assignees: ''
---
<!-- Please answer these questions before submitting your feature request. Thanks! -->
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->

@@ -1,13 +0,0 @@
---
name: General Question
about: Ask a question or need support
labels: question
---
<!-- Please answer these questions before submitting your question. Thanks! -->
**Describe your question**
<!-- Provide details about your question or the support needed. -->
**Additional context**
<!-- Add any other context or screenshots about the question here. -->

.github/stale.yml

@@ -1,17 +0,0 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

@@ -1,38 +0,0 @@
---
name: Lint and Test Charts
on: pull_request
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.16.2
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config ct.yaml)
if [[ -n "$changed" ]]; then
echo "changed=true" >> $GITHUB_OUTPUT
fi
- name: Run chart-testing (lint)
run: |
ct lint --config ct.yaml
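Both `ct` commands above read a `ct.yaml` configuration from the repository root, which is not shown in this diff. A minimal sketch of what such a file might contain (all values are illustrative, not the repository's actual config):
```yaml
# hypothetical ct.yaml for chart-testing
remote: origin
target-branch: main          # branch to diff against when listing changed charts
chart-dirs:
  - charts                   # directory scanned for charts
chart-repos:
  - jetstack=https://charts.jetstack.io   # extra repo for chart dependencies
validate-maintainers: false
```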

@@ -1,28 +0,0 @@
---
name: Lint Code Base
# Documentation:
# https://help.github.com/en/articles/workflow-syntax-for-github-actions
on: pull_request
jobs:
build:
name: Lint Code Base
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
- name: Lint Code Base
uses: docker://github/super-linter:v3.12.0
env:
VALIDATE_ALL_CODEBASE: false
VALIDATE_BASH: false
VALIDATE_PYTHON: false
VALIDATE_PYTHON_FLAKE8: false
VALIDATE_PYTHON_BLACK: false
VALIDATE_KUBERNETES_KUBEVAL: false
VALIDATE_YAML: false
DEFAULT_BRANCH: main
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FILTER_REGEX_EXCLUDE: .*(README\.md|NOTES.txt).*

@@ -1,65 +0,0 @@
---
name: Release Charts
on:
push:
branches:
- main
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Fetch history
run: git fetch --prune --unshallow
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
# See https://github.com/helm/chart-releaser-action/issues/6
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.16.2
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Add Helm Repository
run: |
helm repo add jetstack https://charts.jetstack.io
- name: Update Helm Repositories
run: helm repo update
- name: Update Chart Dependencies for karpenter
run: helm dependency update charts/karpenter
- name: List Changed Charts
id: list-changed
run: |
changed_charts=$(ct list-changed --config ct.yaml)
echo "Changed charts: $changed_charts"
echo "changed_charts=$changed_charts" >> $GITHUB_ENV
- name: Package and Release Charts
run: |
for CHART in ${{ steps.list-changed.outputs.changed_charts }}; do
echo "Packaging $CHART..."
helm package charts/$CHART
done
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.5.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

@@ -1,37 +0,0 @@
name: Install and Test Helm Chart
on: pull_request
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1.8.0
with:
cluster_name: kind
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.16.2
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Install and test Helm charts
run: |
kubectl cluster-info --context kind-kind
changed=$(ct list-changed --config ct.yaml)
ct install --config ct.yaml || true

.gitignore

@@ -1,3 +0,0 @@
*.tgz
Chart.lock
.DS_Store

@@ -1,47 +1 @@
<p align="center">
<img src="./static/helm-chart-logo.svg" height="220" width="220">
</p>
A Helm repository that has a variety of Helm charts for helping people deploy stacks inside a Kubernetes cluster with best security practices. One of the main motives of creating these charts is that a person can easily deploy a stack or application inside the Kubernetes cluster without getting into the complexity.
[Helm](https://helm.sh/) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs/) to get started.
Once Helm is set up properly, add the repo as follows:
```shell
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
```
You can then run `helm search repo ot-helm` to see the charts.
### Pre-Requisites
- Kubernetes `>=1.15.X`
- Helm `>=3.0.X`
### Installing Helm
Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
To install Helm, refer to the [Helm install guide](https://github.com/helm/helm#install) and ensure that the helm binary is in the PATH of your shell.
### Adding Repo
```shell
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
```
Please refer to the [Quick Start guide](https://helm.sh/docs/intro/quickstart/) if you wish to get running in just a few commands, otherwise the [Using Helm Guide](https://helm.sh/docs/intro/using_helm/) provides detailed instructions on how to use the Helm client to manage packages on your Kubernetes cluster.
Useful Helm Client Commands:
- View available charts: `helm search repo`
- Install a chart: `helm install my-release ot-helm/<package-name>`
- Upgrade your application: `helm upgrade`
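Putting those commands together, a typical first session might look like this (the release name and `<package-name>` are placeholders):
```shell
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
helm repo update
helm search repo ot-helm                         # list the charts in this repository
helm install my-release ot-helm/<package-name>
helm upgrade my-release ot-helm/<package-name>   # move to a newer chart version
```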
## Contact Information
This project is managed by [OpsTree Solutions](http://opstree.com). For any queries or suggestions, you can reach out to us at [opensource@opstree.com](mailto:opensource@opstree.com).
Join our Slack Channel: [#redis-operator](https://opstree.slack.com/archives/C05MBRB50JG).
# helm-charts

@@ -1,21 +0,0 @@
---
apiVersion: v1
description: A base helm chart which will be used by different helm charts
engine: gotpl
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: base
sources:
- https://github.com/ot-container-kit/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

@@ -1,28 +0,0 @@
# base
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A base helm chart which will be used by different helm charts.
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|----------------------------|----------------------------------------|
| iamabhishek-dubey | abhishek.dubey@opstree.com | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Values
| Key | Type | Default | Description |
|----------------------------|--------|---------|--------------------------------------------------------------------------------|
| config | object | `{}` | ConfigMap key value pair to create configs |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the fullname template |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
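As an illustration, when `base` is consumed as a dependency, overrides for these values are nested under the `base` key (the templates read `.Values.base.*`); everything below is a made-up example:
```yaml
# hypothetical values override in a parent chart that depends on base
base:
  serviceAccount:
    name: ""                  # empty: base.serviceAccountName falls back to the fullname template
    annotations:
      example.com/owner: platform-team   # made-up annotation
  config:
    APP_MODE: production      # rendered as a key/value pair in the ConfigMap
```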

@@ -1,13 +0,0 @@
{{- define "configmap" -}}
{{- if .Values.base.config -}}
{{- $top := . -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
data:
{{- toYaml .Values.base.config | nindent 2 -}}
{{- end -}}
{{- end -}}

@@ -1,42 +0,0 @@
{{/*
Create a default fully qualified app name.
We truncate service name aka .Release.Name at 59 chars because some Kubernetes name fields are limited to 63 (by the DNS naming spec).
We append 4 characters for chart type at the end which is -web or -crn or -wrk or -job or -sts.
*/}}
{{- define "base.fullname" -}}
{{- $name := .Release.Name | trunc 59 | trimSuffix "-" }}
{{- printf "%s-%s" $name .Chart.Name }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "base.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "base.labels" -}}
helm.sh/chart: {{ include "base.chart" . }}
{{ include "base.selectorLabels" . }}
{{- if .Release.Revision }}
app.kubernetes.io/version: {{ .Release.Revision | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "base.selectorLabels" -}}
app.kubernetes.io/name: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "base.serviceAccountName" -}}
{{- default (include "base.fullname" .) .Values.base.serviceAccount.name }}
{{- end }}
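To make the helpers concrete: for a release named `myapp` installed for the first time, `base.fullname` renders as `myapp-base`, and `base.labels` renders roughly as follows (a sketch assuming chart version 0.1.0 and a standard `helm install`):
```yaml
helm.sh/chart: base-0.1.0
app.kubernetes.io/name: myapp
app.kubernetes.io/version: "1"        # from .Release.Revision on first install
app.kubernetes.io/managed-by: Helm    # from .Release.Service
```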

@@ -1,12 +0,0 @@
{{- define "serviceAccount" -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "base.serviceAccountName" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
{{- with .Values.base.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

@@ -1,13 +0,0 @@
# Default values for base template.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
serviceAccount:
# -- Annotations to add to the service account
annotations: {}
# -- The name of the service account to use.
# -- If not set and create is true, a name is generated using the fullname template
name: ""
# -- ConfigMap key value pair to create configs
config: {}

@@ -1,22 +0,0 @@
---
apiVersion: v1
description: Provides easy Elasticsearch cluster setup
engine: gotpl
maintainers:
- name: Opstree Solutions
name: elasticsearch
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator
- elasticsearch
- opstree
- kubernetes
- openshift
- logging-operator
- kibana
- fluentd
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/static/logging-operator-logo.svg

@@ -1,79 +0,0 @@
## Elasticsearch Cluster
Elasticsearch is a popular NoSQL database which gets used for multiple purposes like databases, logging, searching, etc. This helm chart needs [Logging Operator](../logging-operator) inside the Kubernetes cluster. The elasticsearch definition can be modified or changed by [values.yaml](./values.yaml).
Documentation -> https://ot-logging-operator.netlify.app/
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/elasticsearch --namespace <namespace>
```
Elasticsearch setup can be upgraded by using `helm upgrade` command:-
```shell
$ helm upgrade <my-release> ot-helm/elasticsearch --install --namespace <namespace>
```
For uninstalling the chart:-
```shell
$ helm delete <my-release> --namespace <namespace>
```
### Pre-Requisites
- Kubernetes >= 1.15
- Helm >= 3.X
- Logging Operator >= 0.3.0
### Parameters
| **Name** | **Value** | **Description** |
|----------------------------------|-----------------|--------------------------------------------------------------------|
| clusterName | elastic-prod | Name of the elasticsearch cluster |
| esVersion                        | 7.17.0          | Major and minor version of elasticsearch                            |
| esPlugins | [] | Plugins list to install inside elasticsearch |
| esKeystoreSecret | - | Keystore secret to include in elasticsearch cluster |
| customConfiguration | {} | Additional configuration parameters for elasticsearch |
| esSecurity.enabled               | true            | To enable the xpack security of elasticsearch                       |
| esMaster.replicas | 3 | Number of replicas for elasticsearch master node |
| esMaster.storage.storageSize | 20Gi | Size of the elasticsearch persistent volume for master |
| esMaster.storage.accessModes | [ReadWriteOnce] | Access modes of the elasticsearch persistent volume for master |
| esMaster.storage.storageClass | default | Storage class of the elasticsearch persistent volume for master |
| esMaster.jvmMaxMemory | 1Gi | Java max memory for elasticsearch master node |
| esMaster.jvmMinMemory | 1Gi | Java min memory for elasticsearch master node |
| esMaster.resources | {} | Resources for elasticsearch master pods |
| esMaster.nodeSelectors | {} | Nodeselectors map key-values for elasticsearch master pods |
| esMaster.affinity                | {}              | Affinity and anti-affinity for elasticsearch master pods            |
| esMaster.tolerations | {} | Tolerations and taints for elasticsearch master pods |
| esData.replicas | 3 | Number of replicas for elasticsearch data node |
| esData.storage.storageSize | 50Gi | Size of the elasticsearch persistent volume for data |
| esData.storage.accessModes | [ReadWriteOnce] | Access modes of the elasticsearch persistent volume for data |
| esData.storage.storageClass | default | Storage class of the elasticsearch persistent volume for data |
| esData.jvmMaxMemory | 1Gi | Java max memory for elasticsearch data node |
| esData.jvmMinMemory | 1Gi | Java min memory for elasticsearch data node |
| esData.resources | {} | Resources for elasticsearch data pods |
| esData.nodeSelectors | {} | Nodeselectors map key-values for elasticsearch data pods |
| esData.affinity                  | {}              | Affinity and anti-affinity for elasticsearch data pods              |
| esData.tolerations | {} | Tolerations and taints for elasticsearch data pods |
| esIngestion.replicas | - | Number of replicas for elasticsearch ingestion node |
| esIngestion.storage.storageSize | - | Size of the elasticsearch persistent volume for ingestion |
| esIngestion.storage.accessModes | - | Access modes of the elasticsearch persistent volume for ingestion |
| esIngestion.storage.storageClass | - | Storage class of the elasticsearch persistent volume for ingestion |
| esIngestion.jvmMaxMemory | - | Java max memory for elasticsearch ingestion node |
| esIngestion.jvmMinMemory | - | Java min memory for elasticsearch ingestion node |
| esIngestion.resources | - | Resources for elasticsearch ingestion pods |
| esIngestion.nodeSelectors | - | Nodeselectors map key-values for elasticsearch ingestion pods |
| esIngestion.affinity             | -               | Affinity and anti-affinity for elasticsearch ingestion pods         |
| esIngestion.tolerations | - | Tolerations and taints for elasticsearch ingestion pods |
| esClient.replicas                | -               | Number of replicas for elasticsearch client node                    |
| esClient.storage.storageSize | - | Size of the elasticsearch persistent volume for client |
| esClient.storage.accessModes | - | Access modes of the elasticsearch persistent volume for client |
| esClient.storage.storageClass | - | Storage class of the elasticsearch persistent volume for client |
| esClient.jvmMaxMemory | - | Java max memory for elasticsearch client node |
| esClient.jvmMinMemory | - | Java min memory for elasticsearch client node |
| esClient.resources | - | Resources for elasticsearch client pods |
| esClient.nodeSelectors | - | Nodeselectors map key-values for elasticsearch client pods |
| esClient.affinity                | -               | Affinity and anti-affinity for elasticsearch client pods            |
| esClient.tolerations | - | Tolerations and taints for elasticsearch client pods |
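For example, the master topology and security settings from the table above can be overridden at install time (a sketch; the release name, namespace, and sizes are placeholders):
```shell
helm install elastic ot-helm/elasticsearch --namespace logging \
  --set esMaster.replicas=5 \
  --set esMaster.storage.storageSize=30Gi \
  --set esSecurity.enabled=true
```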

@@ -1,11 +0,0 @@
CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
The helm chart for Elasticsearch setup has been deployed.
Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l 'role in (master,data,ingestion,client)'
For getting the credential for admin user:
kubectl get secrets -n {{ .Release.Namespace }} {{ .Release.Name }}-password -o jsonpath="{.data.password}" | base64 -d

@@ -1,16 +0,0 @@
{{- if .Values.customConfiguration }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-additional-config
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
data:
{{ toYaml .Values.customConfiguration | indent 2 }}
{{- end }}

@@ -1,132 +0,0 @@
---
apiVersion: logging.logging.opstreelabs.in/v1beta1
kind: Elasticsearch
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
spec:
esClusterName: {{ .Values.clusterName }}
esVersion: "{{ .Values.esVersion }}"
{{- if .Values.esPlugins }}
esPlugins: {{ .Values.esPlugins }}
{{- end }}
{{- if .Values.esKeystoreSecret }}
esKeystoreSecret: {{ .Values.esKeystoreSecret }}
{{- end }}
esMaster:
replicas: {{ .Values.esMaster.replicas }}
storage:
storageSize: {{ .Values.esMaster.storage.storageSize }}
accessModes: {{ .Values.esMaster.storage.accessModes }}
storageClass: {{ .Values.esMaster.storage.storageClass }}
jvmMaxMemory: "{{ .Values.esMaster.jvmMaxMemory }}"
jvmMinMemory: "{{ .Values.esMaster.jvmMinMemory }}"
kubernetesConfig:
resources:
{{ toYaml .Values.esMaster.resources | indent 8 }}
{{- if .Values.esMaster.priorityClassName }}
priorityClassName: "{{ .Values.esMaster.priorityClassName }}"
{{- end }}
{{- if .Values.esMaster.affinity }}
affinity:
{{ toYaml .Values.esMaster.affinity | indent 8 }}
{{- end }}
{{- if .Values.esMaster.tolerations }}
tolerations:
{{ toYaml .Values.esMaster.tolerations | indent 8 }}
{{- end }}
{{- if .Values.customConfiguration }}
customConfig: {{ .Release.Name }}-additional-config
{{- end }}
{{- if .Values.esData }}
esData:
replicas: {{ .Values.esData.replicas }}
storage:
storageSize: {{ .Values.esData.storage.storageSize }}
accessModes: {{ .Values.esData.storage.accessModes }}
storageClass: {{ .Values.esData.storage.storageClass }}
jvmMaxMemory: "{{ .Values.esData.jvmMaxMemory }}"
jvmMinMemory: "{{ .Values.esData.jvmMinMemory }}"
kubernetesConfig:
resources:
{{ toYaml .Values.esData.resources | indent 8 }}
{{- if .Values.esData.priorityClassName }}
priorityClassName: "{{ .Values.esData.priorityClassName }}"
{{- end }}
{{- if .Values.esData.affinity }}
affinity:
{{ toYaml .Values.esData.affinity | indent 8 }}
{{- end }}
{{- if .Values.esData.tolerations }}
tolerations:
{{ toYaml .Values.esData.tolerations | indent 8 }}
{{- end }}
{{- if .Values.customConfiguration }}
customConfig: {{ .Release.Name }}-additional-config
{{- end }}
{{- end }}
{{- if .Values.esIngestion }}
esIngestion:
replicas: {{ .Values.esIngestion.replicas }}
storage:
storageSize: {{ .Values.esIngestion.storage.storageSize }}
accessModes: {{ .Values.esIngestion.storage.accessModes }}
storageClass: {{ .Values.esIngestion.storage.storageClass }}
jvmMaxMemory: "{{ .Values.esIngestion.jvmMaxMemory }}"
jvmMinMemory: "{{ .Values.esIngestion.jvmMinMemory }}"
kubernetesConfig:
resources:
{{ toYaml .Values.esIngestion.resources | indent 8 }}
{{- if .Values.esIngestion.priorityClassName }}
priorityClassName: "{{ .Values.esIngestion.priorityClassName }}"
{{- end }}
{{- if .Values.esIngestion.affinity }}
affinity:
{{ toYaml .Values.esIngestion.affinity | indent 8 }}
{{- end }}
{{- if .Values.esIngestion.tolerations }}
tolerations:
{{ toYaml .Values.esIngestion.tolerations | indent 8 }}
{{- end }}
{{- if .Values.customConfiguration }}
customConfig: {{ .Release.Name }}-additional-config
{{- end }}
{{- end }}
{{- if .Values.esClient }}
esClient:
replicas: {{ .Values.esClient.replicas }}
storage:
storageSize: {{ .Values.esClient.storage.storageSize }}
accessModes: {{ .Values.esClient.storage.accessModes }}
storageClass: {{ .Values.esClient.storage.storageClass }}
jvmMaxMemory: "{{ .Values.esClient.jvmMaxMemory }}"
jvmMinMemory: "{{ .Values.esClient.jvmMinMemory }}"
kubernetesConfig:
resources:
{{ toYaml .Values.esClient.resources | indent 8 }}
{{- if .Values.esClient.priorityClassName }}
priorityClassName: "{{ .Values.esClient.priorityClassName }}"
{{- end }}
{{- if .Values.esClient.affinity }}
affinity:
{{ toYaml .Values.esClient.affinity | indent 8 }}
{{- end }}
{{- if .Values.esClient.tolerations }}
tolerations:
{{ toYaml .Values.esClient.tolerations | indent 8 }}
{{- end }}
{{- if .Values.customConfiguration }}
customConfig: {{ .Release.Name }}-additional-config
{{- end }}
{{- end }}
{{- if .Values.esSecurity.enabled }}
esSecurity:
autoGeneratePassword: true
tlsEnabled: true
{{- end }}

@@ -1,76 +0,0 @@
---
clusterName: "elastic-prod"
esVersion: "7.17.0"
#customConfiguration:
# cluster.routing.allocation.disk.watermark.low: "87%"
#esPlugins: ["repository-s3"]
#esKeystoreSecret: keystore-secret
esMaster:
replicas: 3
storage:
storageSize: 20Gi
accessModes: [ReadWriteOnce]
storageClass: "default"
jvmMaxMemory: 1g
jvmMinMemory: 1g
resources: {}
# requests:
# cpu: 1000m
# memory: 2048Mi
# limits:
# cpu: 1000m
# memory: 2048Mi
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
nodeSelector: {}
# memory: high
tolerations: []
# - key: "example-key"
# operator: "Exists"
# effect: "NoSchedule"
# priorityClassName: "-"
esData:
replicas: 3
storage:
storageSize: 50Gi
accessModes: [ReadWriteOnce]
storageClass: "default"
jvmMaxMemory: 1g
jvmMinMemory: 1g
resources: {}
# requests:
# cpu: 1000m
# memory: 2048Mi
# limits:
# cpu: 1000m
# memory: 2048Mi
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
nodeSelector: {}
# memory: high
tolerations: []
# - key: "example-key"
# operator: "Exists"
# effect: "NoSchedule"
# priorityClassName: "-"
esSecurity:
enabled: true

@@ -1,22 +0,0 @@
---
apiVersion: v1
description: Provides easy Fluentd log-shipper setup
engine: gotpl
maintainers:
- name: Opstree Solutions
name: fluentd
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator
- elasticsearch
- opstree
- kubernetes
- openshift
- logging-operator
- kibana
- fluentd
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/static/logging-operator-logo.svg

@@ -1,41 +0,0 @@
## Fluentd
Fluentd is a CNCF graduated project that provides the capability of log shipping as well as parsing. It can ship the logs to multiple destinations like Elasticsearch, Kafka, S3, etc. This helm chart needs [Logging Operator](../logging-operator) inside the Kubernetes cluster. The fluentd definition can be modified or changed by [values.yaml](./values.yaml).
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/fluentd --namespace <namespace>
```
Fluentd setup can be upgraded by using `helm upgrade` command:-
```shell
$ helm upgrade <my-release> ot-helm/fluentd --install --namespace <namespace>
```
For uninstalling the chart:-
```shell
$ helm delete <my-release> --namespace <namespace>
```
### Pre-Requisites
- Kubernetes >= 1.15
- Helm >= 3.X
- Logging Operator >= 0.3.0
### Parameters
| **Name** | **Values** | **Description** |
|----------------------------------|------------------------|-----------------------------------------------------------------|
| elasticSearchHost | elasticsearch-master | Hostname or URL of the elasticsearch server |
| indexNameStrategy                | namespace_name         | Strategy for creating indexes, like namespace_name or pod_name  |
| resources | {} | Resources for fluentd daemonset pods |
| nodeSelectors | {} | Nodeselectors map key-values for fluentd daemonset pods |
| affinity                         | {}                     | Affinity and anti-affinity for fluentd daemonset pods            |
| tolerations | {} | Tolerations and taints for fluentd daemonset pods |
| customConfiguration | {} | Custom configuration parameters for fluentd |
| additionalConfiguration | {} | Additional configuration parameters for fluentd |
| esSecurity.enabled               | true                   | To enable the xpack security for fluentd                         |
| esSecurity.elasticSearchPassword | elasticsearch-password | Credentials for elasticsearch authentication |
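As a sketch, a values override that points fluentd at a different Elasticsearch endpoint could look like this (the hostname is a placeholder, and the referenced secret must already exist):
```yaml
elasticSearchHost: es-logging.internal   # placeholder hostname
indexNameStrategy: pod_name              # alternative to namespace_name
esSecurity:
  enabled: true
  elasticSearchPassword: elasticsearch-password   # name of an existing secret
```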

@@ -1,11 +0,0 @@
CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
The helm chart for Fluentd setup has been deployed.
Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l 'app={{ .Release.Name }}'
For getting the status of the fluentd resource:
kubectl get fluentd {{ .Release.Name }} -n ot-operators

@@ -1,16 +0,0 @@
{{- if .Values.additionalConfiguration }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-additional-config
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
data:
{{ toYaml .Values.additionalConfiguration | indent 2 }}
{{- end }}

@@ -1,16 +0,0 @@
{{- if .Values.customConfiguration }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-custom-config
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
data:
{{ toYaml .Values.customConfiguration | indent 2 }}
{{- end }}

@@ -1,41 +0,0 @@
---
apiVersion: logging.logging.opstreelabs.in/v1beta1
kind: Fluentd
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: log-shipper
spec:
esCluster:
host: {{ .Values.elasticSearchHost }}
{{- if .Values.esSecurity }}
esSecurity:
tlsEnabled: {{ .Values.esSecurity.enabled }}
existingSecret: {{ .Values.esSecurity.elasticSearchPassword }}
{{- end }}
indexNameStrategy: {{ .Values.indexNameStrategy }}
kubernetesConfig:
resources:
{{ toYaml .Values.resources | indent 6 }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 6 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
{{- if .Values.customConfiguration }}
customConfig: {{ .Release.Name }}-custom-config
{{- end }}
{{- if .Values.additionalConfiguration }}
additionalConfig: {{ .Release.Name }}-additional-config
{{- end }}

@@ -1,44 +0,0 @@
---
elasticSearchHost: elasticsearch-master
elasticSearchPassword: elasticsearch-password
indexNameStrategy: namespace_name
resources: {}
# requests:
# cpu: 1000m
# memory: 2048Mi
# limits:
# cpu: 1000m
# memory: 2048Mi
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
nodeSelector: {}
# memory: high
tolerations: []
# - key: "example-key"
# operator: "Exists"
# effect: "NoSchedule"
#priorityClassName: ""
#customConfiguration:
# fluent.conf: |
# #####
#additionalConfig:
# systemd.conf: |
# #####
esSecurity:
enabled: true
elasticSearchPassword: elasticsearch-password

@@ -1,16 +0,0 @@
apiVersion: v2
name: ingress-management
description: A Helm chart to manage Ingress traffic
version: 0.1.0
appVersion: "1.0"
home: https://github.com/ot-container-kit/helm-charts
maintainers:
- name: sharvarikhamkar1304
keywords:
- ingress
- kong
- httpRoute
- kubernetes
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg
sources:
- https://github.com/ot-container-kit/helm-charts

@@ -1,49 +0,0 @@
# Ingress Management Helm Chart
A simple and reusable Helm chart to manage Kubernetes Gateway API HTTPRoutes for routing traffic to backend services.
This chart helps manage HTTPRoute resources to expose services using the Kubernetes Gateway API. You can customize host, path, service, and namespace via values.
## Homepage
[https://github.com/ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Maintainers
| Name | URL |
| ---------------- | --------------------------------------------- |
| sharvari-khamkar | [GitHub](https://github.com/sharvari-khamkar) |
## Source Code
[GitHub - ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Requirements
| Repository | Name | Version |
| ------------------------------------------------------------------------------------------------ | ---- | ------- |
| [https://ot-container-kit.github.io/helm-charts](https://ot-container-kit.github.io/helm-charts) | base | 0.1.0 |
## Values
| **Attribute**  | **Scope**      | **Example**         | **Description**                                                       | **Default** |
|----------------|----------------|---------------------|-----------------------------------------------------------------------|-------------|
| `name`         | Global         | `"my-app"`          | Name of the HTTPRoute and backend service (the app name)              | `""`        |
| `namespace`    | Global         | `"default"`         | Kubernetes namespace where resources like HTTPRoute will be deployed  | `""`        |
| `host`         | Routing        | `"app.example.com"` | Hostname to expose the app                                            | `""`        |
| `path`         | Routing        | `"/api"`            | Path under the host                                                   | `""`        |
| `service.name` | Service Config | `"my-backend-svc"`  | Name of the backend service to which traffic will be routed           | `""`        |
| `service.kind` | Service Config | `"Service"`         | Kind of backend resource (Service by default)                         | `"Service"` |
| `service.port` | Service Config | `80`                | Port on which the backend service listens                             | `80`        |
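Pulling the table together with the HTTPRoute template, a minimal working values file might look like the following (the gateway, host, and service names are placeholders):
```yaml
name: my-app                  # HTTPRoute name
parentRefs:
  - name: kong                # Gateway to attach the route to
    namespace: default
hostnames:
  - app.example.com           # placeholder hostname
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /
    backendRefs:
      - name: my-backend-svc  # placeholder backend Service
        kind: Service
        port: 80
```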

@@ -1,46 +0,0 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: {{ required "A valid 'name' is required!" .Values.name }}
{{- if .Values.labels }}
labels:
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
{{- if .Values.annotations }}
annotations:
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.parentRefs }}
parentRefs:
{{- range .Values.parentRefs }}
- name: {{ .name }}
{{- if .namespace }}
namespace: {{ .namespace }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.hostnames }}
hostnames:
{{- range .Values.hostnames }}
- "{{ . }}"
{{- end }}
{{- end }}
rules:
{{- range .Values.rules }}
- matches:
{{- range .matches }}
- path:
type: {{ .path.type }}
value: {{ .path.value | quote }}
{{- end }}
backendRefs:
{{- range .backendRefs }}
- name: {{ .name }}
kind: {{ .kind | default "Service" }}
port: {{ .port }}
{{- end }}
{{- end }}

@@ -1,60 +0,0 @@
---
# charts/ingress-management/values.yaml
# -- Name of the HTTPRoute and backend service (typically the app name)
name: ""
# -- Labels to apply to the HTTPRoute metadata
labels:
app: ""
# -- Optional annotations to apply to the HTTPRoute resource
annotations: {}
# -- Reference to the Gateway (parentRefs)
parentRefs:
- name: ""
namespace: ""
# -- Hostnames to be matched in the HTTPRoute
hostnames:
- ""
# -- Routing rules for HTTPRoute
rules:
- matches:
- path:
type: PathPrefix
value: ""
backendRefs:
- name: ""
kind: Service
port: 80
# -----------------------------------------------------
# Example values.yaml File
# -----------------------------------------------------
# name: open-webui
# labels:
# app: open-webui
# annotations:
# konghq.com/protocols: https
# konghq.com/https-redirect-status-code: "301"
# parentRefs:
# - name: kong
# namespace: default
# hostnames:
# - bp-ai.opstree.dev
# rules:
# - matches:
# - path:
# type: PathPrefix
# value: /
# backendRefs:
# - name: open-webui
# kind: Service
# port: 80

@@ -7,7 +7,7 @@ maintainers:
name: k8s-vault-webhook
sources:
- https://github.com/opstree/k8s-vault-webhook
version: "0.2"
version: "0.1"
home: https://github.com/opstree/k8s-vault-webhook
keywords:
- vault

@@ -44,9 +44,7 @@ metadata:
cert-manager.io/inject-ca-from: "{{ .Release.Namespace }}/{{ include "k8s-vault-webhook.servingCertificate" . }}"
{{- end }}
webhooks:
- name: pods.{{ template "k8s-vault-webhook.name" . }}.admission.opstree.com
sideEffects: {{ .Values.apiSideEffectValue }}
admissionReviewVersions: ["v1beta1"]
- name: pods.{{ template "k8s-vault-webhook.name" . }}.admission.banzaicloud.com
clientConfig:
service:
namespace: {{ .Release.Namespace }}
@@ -88,8 +86,6 @@ webhooks:
- skip
{{- end }}
- name: secrets.{{ template "k8s-vault-webhook.name" . }}.admission
sideEffects: {{ .Values.apiSideEffectValue }}
admissionReviewVersions: ["v1beta1"]
clientConfig:
service:
namespace: {{ .Release.Namespace }}

@@ -1,5 +1,5 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "k8s-vault-webhook.fullname" . }}

@@ -15,7 +15,7 @@ certificate:
image:
repository: quay.io/opstree/k8s-vault-webhook
tag: "2.0"
tag: 1.0
pullPolicy: IfNotPresent
imagePullSecrets: []
@@ -27,8 +27,8 @@
env:
VAULT_IMAGE: vault:1.6.1
K8S_SECRET_INJECTOR_IMAGE: quay.io/opstree/k8s-secret-injector:2.0
# K8S_SECRET_INJECTOR_IMAGE_PULL_POLICY: Always
SECRETS_CONSUMER_ENV_IMAGE: quay.io/opstree/k8s-secret-injector:1.0
# SECRETS_CONSUMER_ENV_IMAGE_PULL_POLICY: Always
# VAULT_CAPATH: /vault/tls
# used when the pod that should get secret injected does not specify
# an imagePullSecret

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

@@ -1,9 +0,0 @@
apiVersion: v2
name: ot-karpenter
version: 0.3.0
maintainers:
- name: opstree
dependencies:
- name: karpenter
version: 1.1.1
repository: oci://public.ecr.aws/karpenter

@@ -1,78 +0,0 @@
# Karpenter
Karpenter is an open-source Kubernetes cluster autoscaler built for efficiency and speed. This Helm chart installs Karpenter in your Kubernetes cluster and can be used to manage your node pools for dynamically scaling your infrastructure. This chart supports automated deployment of Karpenter, including the creation of NodePools, EC2NodeClasses, IAM roles, and other necessary resources.
To install Karpenter, use the following commands:
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install karpenter ot-helm/karpenter --namespace <namespace> --dependency-update --create-namespace
```
Adds the ot-helm repository to Helm, which contains the Karpenter Helm chart.
Installs the Karpenter chart from the ot-helm repository.
To upgrade the setup:
```shell
$ helm upgrade karpenter ot-helm/karpenter --install --namespace <namespace> --create-namespace
```
Upgrades an existing Karpenter release or installs it if it doesn't exist.
To uninstall the chart:
```shell
$ helm delete karpenter --namespace <namespace>
```
Deletes the Karpenter release from the specified namespace.
Replace <namespace> with the namespace where Karpenter is installed.
### Pre-Requisites
- Kubernetes >= 1.18
- Helm >= 3.X
- Karpenter Operator >= 0.1.0
- OpenID Connect (EKS): https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
- IAM Roles for Karpenter
- Add tags to subnets and security groups (see the tagging sketch below)
- Update aws-auth ConfigMap
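For example, the discovery tags mentioned above are commonly applied as follows (resource IDs and the cluster name are placeholders); they correspond to the `karpenter.sh/discovery` selectors used by this chart's EC2NodeClass template:
```shell
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=karpenter.sh/discovery,Value=my-cluster
```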
### Parameters
| **Name** | **Value** | **Description** |
|--------------------------------------------------------------------|:-------------------------------|------------------------------------------------|
| `karpenter.settings.clusterName` | `my-cluster` | The name of your Kubernetes cluster |
| `karpenter.serviceAccount.annotations.eks.amazonaws.com/role-arn` | Required | IAM role ARN for Karpenter controller |
| `karpenter.controller.resources.requests.cpu` | `1` | CPU request for Karpenter controller |
| `karpenter.controller.resources.requests.memory` | `1Gi` | Memory request for Karpenter controller |
| `karpenter.controller.resources.limits.cpu` | `1` | CPU limit for Karpenter controller |
| `karpenter.controller.resources.limits.memory` | `1Gi` | Memory limit for Karpenter controller |
| `nodePools` | [] | List of NodePools to be created |
| `nodePools.name` | default-nodepool | Name of the NodePool |
| `nodePools.labels` - If not required can be omitted | {} | Labels for the NodePool |
| `nodePools.annotations` - If not required can be omitted | {} | Annotations for the NodePool |
| `nodePools.requirements` - Can be empty [] | [] | Node requirements like CPU, memory, etc. |
| `nodePools.taints` - If not required can be omitted | [] | Taints for the NodePool |
| `nodePools.expireAfter` | 720h | Expiration duration for idle NodePools |
| `nodePools.limits.cpu` - Required Field | "1000m" | CPU limit for the NodePool |
| `nodePools.limits.memory`- If not required can be omitted | "2Gi" | Memory limit for the NodePool |
| `nodePools.disruption.consolidationPolicy` - Required Field | WhenEmptyOrUnderutilized | Consolidation policy for underutilized nodes |
| `nodePools.disruption.consolidateAfter` - Required Field | 1m | Time before consolidating underutilized nodes |
### Notes:
- Refer to the example folder for an example values.yaml file
- Karpenter automatically creates and manages NodePools as part of the installation process.
- Make sure to configure the IAM roles required by Karpenter for it to interact with EC2 instances and manage resources along with all prerequisites.
- The chart will ensure the Karpenter controller and NodePools are deployed correctly with all required configurations.

@@ -1,82 +0,0 @@
# This example below defines a NodePool for reference
# Custom values for your chart
clusterName: "" # Name of the EKS cluster (for identification in the chart and Karpenter)
awsPartition: "" # AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsAccountId: 3333 # AWS account ID where the resources will be provisioned
# Karpenter chart overrides
karpenter:
settings:
clusterName: "" # Cluster name for the Karpenter controller to identify and manage nodes in this cluster
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks # IAM role for Karpenter controller's access to AWS services
controller:
resources:
requests:
cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
memory: "1Gi" # Memory resource request for the Karpenter controller
limits:
cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
memory: "1Gi" # Memory resource limit for the Karpenter controller
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool, used for identification
limits: # Required Field
cpu: "1000"
memory: "1000Gi"
disruption: # Required Field
consolidationPolicy: WhenEmptyOrUnderutilized
consolidateAfter: 1m
requirements: # Node pool requirements for instance types and other properties
- key: kubernetes.io/arch
operator: In # Specifies the architecture for nodes
values:
- "amd64"
- key: kubernetes.io/os
operator: In # Specifies the OS type for nodes
values:
- "linux" # The node pool requires Linux OS
- key: karpenter.sh/capacity-type
operator: In # Specifies the capacity type for nodes
values:
- "on-demand"
- key: karpenter.k8s.aws/instance-category
operator: In # Specifies allowed EC2 instance categories
values:
- "t" # Instance category t (e.g., T2, T3)
- "m"
- "r"
minValues: 2 # Minimum number of instances of each category
- key: karpenter.k8s.aws/instance-family
operator: Exists # Specifies that instances in the family must exist (e.g., m5, r5)
minValues: 5 # Minimum number of instances in the specified family
- key: karpenter.k8s.aws/instance-family
operator: In # Specifies that the instance family must match one of the listed values
values:
- "m5"
- "m5d"
- "c5"
- "c5d"
- "c4"
- "r4"
minValues: 3 # Minimum number of instances from these families
- key: node.kubernetes.io/instance-type
operator: Exists # Ensures that the node pool has specific instance types
minValues: 10 # Minimum number of instances of the specified types
- key: karpenter.k8s.aws/instance-generation
operator: Gt # Specifies that the instance generation must be greater than a particular value
values:
- "2" # Instance generation must be greater than 2 (i.e., newer generation)
nodeClass:
group: karpenter.k8s.aws # Node class group for Karpenter
kind: EC2NodeClass # Kind of node class, EC2NodeClass indicates AWS EC2 instances
name: default # The name of the node class (default for this pool)

@@ -1,33 +0,0 @@
{{- range .Values.ec2NodeClasses }}
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
name: {{ .name }}
spec:
amiFamily: {{ .amiFamily | default "AL2" }}
role: {{ .role }}
{{- if .detailedMonitoring }}
detailedMonitoring: {{ .detailedMonitoring }}
{{- end }}
subnetSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
securityGroupSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
amiSelectorTerms:
- id: "{{ .amiSelector.arm }}"
- id: "{{ .amiSelector.amd }}"
{{- if .amiSelector.gpu }}
- id: "{{ .amiSelector.gpu }}"
{{- end }}
{{- if .amiSelector.name }}
- name: "{{ .amiSelector.name }}"
{{- end }}
{{- if .tags }}
tags:
{{- range $key, $value := .tags }}
{{ $key }}: "{{ $value }}"
{{- end }}
{{- end }}
{{- end }}

@@ -1,73 +0,0 @@
{{- range .Values.nodePools }}
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
name: {{ .name }}
spec:
template:
metadata:
labels:
{{- if .labels }}
{{- range $key, $value := .labels }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty labels object if no labels are defined
{{- end }}
annotations:
{{- if .annotations }}
{{- range $key, $value := .annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty annotations object if no annotations are defined
{{- end }}
spec:
requirements:
{{- if .requirements }}
{{- if gt (len .requirements) 0 }}
{{- range .requirements }}
- key: {{ .key }}
operator: {{ .operator }}
values:
{{ toYaml .values | indent 12 }}
{{- if .minValues }}
minValues: {{ .minValues }}
{{- end }}
{{- end }}
{{- else }}
[] # Render an empty array explicitly when no requirements are defined
{{- end }}
{{- else }}
[] # Ensure that an empty array is rendered even if the user does not specify requirements
{{- end }}
taints:
{{- if .taints }}
{{- range .taints }}
- key: {{ .key }}
{{- if .value }}
value: {{ .value }}
{{- end }}
effect: {{ .effect }}
{{- end }}
{{- else }}
[] # Empty taints array if no taints are defined
{{- end }}
nodeClassRef:
group: {{ .nodeClass.group | default "karpenter.k8s.aws" }}
kind: {{ .nodeClass.kind | default "EC2NodeClass" }}
name: {{ .nodeClass.name }}
expireAfter: {{ .expireAfter | default "720h" }}
limits:
{{- if .limits.cpu }}
cpu: {{ .limits.cpu }}
{{- end }}
{{- if .limits.memory }}
memory: {{ .limits.memory }}
{{- end }}
disruption:
consolidationPolicy: {{ .disruption.consolidationPolicy | default "WhenEmptyOrUnderutilized" }}
consolidateAfter: {{ .disruption.consolidateAfter | default "1m" }}
{{- end }}

@@ -1,110 +0,0 @@
# Custom values for your chart
# Name of the EKS cluster (for identification in the chart and Karpenter)
clusterName: ""
# AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsPartition: ""
# AWS account ID where the resources will be provisioned
awsAccountId: 3333
# Karpenter chart overrides
karpenter:
settings:
# Cluster name for the Karpenter controller to identify and manage nodes in this cluster
clusterName: ""
# Name of SQS queue for handling EC2 instance interruptions
# interruptionQueue: ""
serviceAccount:
annotations:
# IAM role ARN for Karpenter controller's access to AWS services
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks
# Karpenter controller resources can be customized in this section below
# controller:
# resources:
# requests:
# cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
# memory: "1Gi" # Memory resource request for the Karpenter controller
# limits:
# cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
# memory: "1Gi" # Memory resource limit for the Karpenter controller
# EC2NodeClasses define the EC2 instance classes that Karpenter can use
ec2NodeClasses:
- name: default
# Amazon Linux 2 AMI family
amiFamily: AL2
# "KarpenterNodeRole-my-eks-cluster" # Name of karpenter Node Role ( NOT THE ARN )
role:
amiSelector:
# To get the AMI ID, run the commands below in the AWS CLI and replace the AMI ID in the values.yaml file
# ARM_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text)"
arm:
# AMD_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id --query Parameter.Value --output text)"
amd:
# GPU_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text)"
# gpu: ami-gpu-id
# amazon-eks-node-1.27-* # Optional: EKS Node AMI Name
# name:
# Optional, propagates tags to underlying EC2 resources
# tags:
# environment: production
# team: "engineering"
# owner: "admin@company.com"
# Enable detailed monitoring for the EC2 instance
# detailedMonitoring: true
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool; here it is set to the default nodepool
requirements: # List of node requirements for scheduling
- key: kubernetes.io/arch # Architecture requirement (e.g., amd64, arm64)
operator: In # Only nodes with the specified architecture will be selected
values:
- "amd64" # Specifies that the node should have an amd64 architecture
- key: kubernetes.io/os # OS requirement (e.g., linux, windows)
operator: In # Only nodes with the specified OS will be selected
values:
- "linux" # Specifies that the node should run Linux
- key: karpenter.sh/capacity-type # Defines the instance's capacity type
operator: In # Only nodes with the specified capacity type will be selected
values:
- "on-demand" # Specifies that the node should be an on-demand instance, can be "spot" as well
- key: karpenter.k8s.aws/instance-category # Defines the instance category (e.g., t, m, r)
operator: In # Only nodes with the specified instance category will be selected
values:
- "t" # These can be customized as per need
- "m"
- "r"
# - key: karpenter.k8s.aws/instance-family # Uncomment to define the instance family (e.g., t3, m5, r5)
# operator: In
# values:
# - "t3a"
- key: karpenter.k8s.aws/instance-generation # Instance generation requirement
operator: Gt # Greater than the specified value
values:
- "2" # Specifies that only instance generations greater than 2 are allowed
nodeClass: # Defines the node class, which is linked to EC2NodeClass
group: karpenter.k8s.aws # Group of the EC2NodeClass
kind: EC2NodeClass # Type of node class, which is EC2NodeClass in this case
name: default # Name of the EC2NodeClass to use for the node pool (name of the EC2 instance class)
expireAfter: 720h # Maximum lifetime of the node pool before it expires (720 hours = 30 days)
limits: # Resource limits for the node pool
cpu: "1000" # Maximum CPU limit for the node pool
memory: "1Gi"
disruption: # Policy for handling disruption in the node pool
consolidationPolicy: WhenEmptyOrUnderutilized # Consolidate nodes when they are empty or underutilized
consolidateAfter: 1m # Time after which consolidation will occur, in this case, 1 minute
# Uncomment Below annotations key ( next 3 Lines ) if you want to use annotations
# annotations: # Annotations are key-value pairs that provide additional metadata for the node pool
# example.com/owner: "my-team" # An example annotation that associates the node pool with a team
# example.com/maintainer: "admin@company.com" # Example annotation for the maintainer's contact information
# Uncomment below taint key ( next 4 Lines ) if you want to use taints
# taints: # Taints are used to control which pods can be scheduled on the node pool
# - key: "example.com/special-taint" # Taint key that identifies the taint
# value: "special-value" # Value associated with the taint
# effect: "NoExecute" # Effect of the taint. In this case, NoExecute means pods won't be scheduled on tainted nodes
# Comment Labels Key below if you don't want to use Labels
labels: # Labels are key-value pairs used for categorizing the node pool
environment: production # Label indicating that this node pool is for production use
team: "engineering" # Label associating the node pool with the engineering team

@@ -1,22 +0,0 @@
---
apiVersion: v1
description: Provides easy Kibana visualization setup
engine: gotpl
maintainers:
- name: Opstree Solutions
name: kibana
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator
- elasticsearch
- opstree
- kubernetes
- openshift
- logging-operator
- kibana
- fluentd
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/static/logging-operator-logo.svg

@@ -1,46 +0,0 @@
## Kibana
Kibana is a visualization tool that can be integrated with elasticsearch. It can be used to visualize logs and create operational dashboards over them. This helm chart needs [Logging Operator](../logging-operator) inside the Kubernetes cluster. The kibana definition can be modified or changed by [values.yaml](./values.yaml).
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/kibana --namespace <namespace>
```
Kibana setup can be upgraded by using `helm upgrade` command:-
```shell
$ helm upgrade <my-release> ot-helm/kibana --install --namespace <namespace>
```
For uninstalling the chart:-
```shell
$ helm delete <my-release> --namespace <namespace>
```
### Pre-Requisites
- Kubernetes >= 1.15
- Helm >= 3.X
- Logging Operator >= 0.3.0
### Parameters
| **Name** | **Value** | **Description** |
|-----------------------------------|-----------------------------------|------------------------------------------------------------|
| replicas | 1 | Number of deployment replicas for kibana |
| esCluster.esURL | https://elasticsearch-master:9200 | Hostname or URL of the elasticsearch server |
| esCluster.esVersion               | 7.17.0                            | Version of kibana, paired with the elasticsearch version    |
| esCluster.clusterName | elasticsearch | Name of the elasticsearch created by elasticsearch crd |
| resources | {} | Resources for kibana visualization pods |
| nodeSelectors | {} | Nodeselectors map key-values for kibana visualization pods |
| affinity                          | {}                                | Affinity and anti-affinity for kibana visualization pods    |
| tolerations | {} | Tolerations and taints for kibana visualization pods |
| esSecurity.enabled                | true                              | To enable the xpack security of kibana                      |
| esSecurity.elasticSearchPassword | elasticsearch-password | Credentials for elasticsearch authentication |
| externalService.enabled | false | To create a LoadBalancer service of kibana |
| ingress.enabled | false | To enable the ingress resource for kibana |
| ingress.host | kibana.opstree.com | Hostname or URL on which kibana will be exposed |
| ingress.tls.enabled | false | To enable SSL on kibana ingress resource |
| ingress.tls.secret | tls-secret | SSL certificate for kibana ingress resource |
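For instance, exposing Kibana through an ingress with TLS can be done with an override like this (the hostname and secret name are placeholders):
```yaml
ingress:
  enabled: true
  host: kibana.example.com   # placeholder hostname
  tls:
    enabled: true
    secret: tls-secret       # name of an existing TLS secret
```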

@@ -1,11 +0,0 @@
CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
The helm chart for Kibana setup has been deployed.
Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l 'app={{ .Release.Name }}'
For getting the status of the kibana resource:
kubectl get kibana {{ .Release.Name }} -n ot-operators

@@ -1,24 +0,0 @@
{{- if (eq .Values.externalService.enabled true) }}
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: visualization
name: {{ .Release.Name }}
spec:
ports:
- name: http
port: 5601
protocol: TCP
targetPort: 5601
selector:
app: {{ .Release.Name }}
service: kibana
type: LoadBalancer
{{- end }}

@@ -1,32 +0,0 @@
{{- if (eq .Values.ingress.enabled true) }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: visualization
name: {{ .Release.Name }}
spec:
{{- if (eq .Values.ingress.tls.enabled true) }}
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: {{ .Values.ingress.tls.secret }}
{{- end }}
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}
port:
number: 5601
{{- end }}

View File

@ -1,37 +0,0 @@
---
apiVersion: logging.logging.opstreelabs.in/v1beta1
kind: Kibana
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: visualization
spec:
replicas: {{ .Values.replicas }}
esCluster:
host: {{ .Values.esCluster.esURL }}
esVersion: {{ .Values.esCluster.esVersion }}
clusterName: {{ .Values.esCluster.clusterName }}
{{- if .Values.esSecurity }}
esSecurity:
tlsEnabled: {{ .Values.esSecurity.enabled }}
existingSecret: {{ .Values.esSecurity.elasticSearchPassword }}
{{- end }}
kubernetesConfig:
resources:
{{ toYaml .Values.resources | indent 6 }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 6 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}

View File

@ -1,58 +0,0 @@
---
replicas: 1
esCluster:
esURL: https://elasticsearch-master:9200
esVersion: 7.17.0
clusterName: elasticsearch
resources: {}
# requests:
# cpu: 1000m
# memory: 2048Mi
# limits:
# cpu: 1000m
# memory: 2048Mi
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
nodeSelector: {}
# memory: high
tolerations: []
# - key: "example-key"
# operator: "Exists"
# effect: "NoSchedule"
#priorityClassName: ""
#customConfiguration:
# fluent.conf: |
# #####
#additionalConfig:
# systemd.conf: |
# #####
esSecurity:
enabled: true
elasticSearchPassword: elasticsearch-sa-token
externalService:
enabled: false
ingress:
enabled: false
host: kibana.opstree.com
tls:
enabled: true
secret: tls-secret
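With the defaults above, the Kibana custom resource rendered by this chart should come out roughly as follows (a sketch assuming a release named `kibana`; the usual chart labels are omitted for brevity):
```yaml
apiVersion: logging.logging.opstreelabs.in/v1beta1
kind: Kibana
metadata:
  name: kibana
spec:
  replicas: 1
  esCluster:
    host: https://elasticsearch-master:9200
    esVersion: 7.17.0
    clusterName: elasticsearch
  esSecurity:
    tlsEnabled: true
    existingSecret: elasticsearch-sa-token
  kubernetesConfig:
    resources: {}
```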

View File

@ -1 +0,0 @@
*.tgz

View File

@ -1,26 +0,0 @@
---
apiVersion: v2
appVersion: "0.4.0"
description: Helm chart to deploy and manage EFK stack in Kubernetes
engine: gotpl
maintainers:
- name: Abhishek Dubey
- name: Sandeep Rawat
name: logging-operator
sources:
- https://github.com/OT-CONTAINER-KIT/logging-operator
version: 0.4.0
home: https://github.com/OT-CONTAINER-KIT/logging-operator
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/static/logging-operator-logo.svg
keywords:
- operator
- elasticsearch
- opstree
- kubernetes
- openshift
- backup
- restore
- fluentd
- kibana
- monitoring
- logging

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,56 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: indexlifecycles.logging.logging.opstreelabs.in
spec:
group: logging.logging.opstreelabs.in
names:
kind: IndexLifeCycle
listKind: IndexLifeCycleList
plural: indexlifecycles
singular: indexlifecycle
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: IndexLifeCycle is the Schema for the indexlifecycles API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: IndexLifeCycleSpec defines the desired state of IndexLifeCycle
properties:
foo:
description: Foo is an example field of IndexLifeCycle. Edit indexlifecycle_types.go
to remove/update
type: string
type: object
status:
description: IndexLifeCycleStatus defines the observed state of IndexLifeCycle
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -1,56 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: indextemplates.logging.logging.opstreelabs.in
spec:
group: logging.logging.opstreelabs.in
names:
kind: IndexTemplate
listKind: IndexTemplateList
plural: indextemplates
singular: indextemplate
scope: Namespaced
versions:
- name: v1beta1
schema:
openAPIV3Schema:
description: IndexTemplate is the Schema for the indextemplates API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: IndexTemplateSpec defines the desired state of IndexTemplate
properties:
foo:
description: Foo is an example field of IndexTemplate. Edit indextemplate_types.go
to remove/update
type: string
type: object
status:
description: IndexTemplateStatus defines the observed state of IndexTemplate
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large

View File

@ -1,64 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
name: {{ .Release.Name }}
template:
metadata:
labels:
name: {{ .Release.Name }}
spec:
securityContext:
runAsNonRoot: true
containers:
- name: "{{ .Release.Name }}"
image: "{{ .Values.loggingOperator.imageName }}:{{ .Values.loggingOperator.imageTag }}"
imagePullPolicy: {{ .Values.loggingOperator.imagePullPolicy }}
command:
- /manager
args:
- --leader-elect
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- end }}
{{- if .Values.readinessProbe }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
{{- end }}
{{- if .Values.livenessProbe }}
livenessProbe:
{{ toYaml .Values.livenessProbe | indent 10 }}
{{- end }}
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
{{- with .Values.nodeSelector }}
{{ toYaml . | indent 8 }}
{{- end }}
priorityClassName: {{ .Values.priorityClassName }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 }}
{{- end }}
tolerations:
{{- if .Values.tolerateAllTaints }}
- operator: Exists
{{- end }}
{{- with .Values.tolerations }}
{{ toYaml . | indent 8 }}
{{- end }}
serviceAccountName: "{{ .Release.Name }}"
serviceAccount: "{{ .Release.Name }}"

View File

@ -1,20 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Name }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccountName }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ .Release.Name }}
apiGroup: rbac.authorization.k8s.io

View File

@ -1,233 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Release.Name }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
rules:
- apiGroups:
- ""
resources:
- configmaps
- events
- secrets
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- namespaces
- pods
- serviceaccounts
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- elasticsearches
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- elasticsearches/finalizers
verbs:
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- elasticsearches/status
verbs:
- get
- patch
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- fluentds
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- fluentds/finalizers
verbs:
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- fluentds/status
verbs:
- get
- patch
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indexlifecycles
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indexlifecycles/finalizers
verbs:
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indexlifecycles/status
verbs:
- get
- patch
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indextemplates
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indextemplates/finalizers
verbs:
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- indextemplates/status
verbs:
- get
- patch
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- kibanas
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- kibanas/finalizers
verbs:
- update
- apiGroups:
- logging.logging.opstreelabs.in
resources:
- kibanas/status
verbs:
- get
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- clusterroles
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

View File

@ -1,13 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}

View File

@ -1,23 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: operator
port: {{ .Values.loggingOperator.port }}
protocol: TCP
targetPort: {{ .Values.loggingOperator.port }}
selector:
name: {{ .Release.Name }}
sessionAffinity: None
type: ClusterIP

View File

@ -1,21 +0,0 @@
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-test-connection"
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['-qO-', '{{ .Release.Name }}.{{ .Release.Namespace }}.svc:{{ .Values.loggingOperator.port }}/healthz']
restartPolicy: Never

View File

@ -1,38 +0,0 @@
---
loggingOperator:
imageName: quay.io/opstree/logging-operator
imageTag: v0.4.0
imagePullPolicy: Always
port: 8081
resources:
limits:
cpu: 300m
memory: 800Mi
requests:
cpu: 300m
memory: 800Mi
replicas: 1
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
serviceAccountName: logging-operator
priorityClassName: ""
nodeSelector: {}
tolerateAllTaints: false
tolerations: []
affinity: {}

View File

@ -1,27 +0,0 @@
apiVersion: v2
name: loki
description: A Helm chart for loki
type: application
version: 1.0.1
appVersion: 1.0.0
dependencies:
- name: loki-distributed
version: 0.76.1
repository: https://grafana.github.io/helm-charts
alias: distributed
tags:
- logging
condition: distributed.enabled
- name: promtail
version: 6.16.4
repository: https://grafana.github.io/helm-charts
alias: promtail
tags:
- logging
- name: loki
version: 6.7.3
repository: https://grafana.github.io/helm-charts
alias: standalone
tags:
- logging
condition: standalone.enabled
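The `condition` fields tie each aliased sub-chart to a values flag, so only one Loki deployment mode renders at a time (promtail carries no condition here). A sketch of the toggle, matching the standalone defaults shown later in this chart:
```yaml
# values toggle between the two Loki modes
distributed:
  enabled: false   # loki-distributed (microservices) mode
standalone:
  enabled: true    # single-binary loki mode
```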

View File

@ -1,501 +0,0 @@
logging:
gateway:
# image:
# registry:
# repository:
# tag: 1.20.2-alpine
enabled: true
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
memory: 500Mi
cpu: 200m
limits:
memory: 500Mi
cpu: 200m
nginxConfig:
file: |
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
client_max_body_size 5M;
proxy_http_version 1.1;
default_type application/octet-stream;
log_format {{ .Values.gateway.nginxConfig.logFormat }}
{{- if .Values.gateway.verboseLogging }}
access_log /dev/stderr main;
{{- else }}
map $status $loggable {
~^[23] 0;
default 1;
}
access_log /dev/stderr main if=$loggable;
{{- end }}
sendfile on;
tcp_nopush on;
{{- if .Values.gateway.nginxConfig.resolver }}
resolver {{ .Values.gateway.nginxConfig.resolver }};
{{- else }}
resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }};
{{- end }}
{{- with .Values.gateway.nginxConfig.httpSnippet }}
{{ . | nindent 2 }}
{{- end }}
server {
listen 8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
auth_basic_user_file /etc/nginx/secrets/.htpasswd;
{{- end }}
location = / {
return 200 'OK';
auth_basic off;
access_log off;
}
location = /api/prom/push {
set $api_prom_push_backend http://{{ include "loki.distributorFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_push_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /api/prom/tail {
set $api_prom_tail_backend http://{{ include "loki.querierFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_tail_backend:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
# Ruler
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/rules.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/alerts.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/.* {
set $api_prom_backend http://{{ include "loki.queryFrontendFullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /loki/api/v1/push {
set $loki_api_v1_push_backend http://{{ include "loki.distributorFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_v1_push_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /loki/api/v1/tail {
set $loki_api_v1_tail_backend http://{{ include "loki.querierFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_v1_tail_backend:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
location ~ /loki/api/.* {
set $loki_api_backend http://{{ include "loki.queryFrontendFullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_backend:3100$request_uri;
proxy_http_version 1.1;
}
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
{{- end }}
}
}
loki:
# image:
# registry:
# repository: grafana/loki
# tag: 2.9.2
podAnnotations:
sidecar.istio.io/inject: "false"
storageConfig:
aws:
s3: http://minio:minio123@monitoring-minio.monitoring.svc:9000/loki
s3forcepathstyle: true
region: us-east-1
# aws:
# region: ap-south-1
# bucketnames: jm-prod-loki-app-logs
# s3forcepathstyle: false
# sse_encryption: true
boltdb_shipper:
shared_store: s3
cache_ttl: 24h
schemaConfig:
configs:
- from: "2020-09-07"
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: loki_index_
period: 24h
config: |
auth_enabled: false
server:
{{- toYaml .Values.loki.server | nindent 6 }}
common:
compactor_address: http://{{ include "loki.compactorFullname" . }}:3100
distributor:
ring:
kvstore:
store: memberlist
memberlist:
join_members:
- {{ include "loki.fullname" . }}-memberlist
ingester_client:
grpc_client_config:
grpc_compression: gzip
ingester:
lifecycler:
ring:
kvstore:
store: memberlist
replication_factor: 1
chunk_idle_period: 30m
chunk_block_size: 262144
chunk_encoding: snappy
chunk_retain_period: 1m
max_transfer_retries: 0
wal:
dir: /var/loki/wal
limits_config:
retention_period: 72h
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
max_cache_freshness_per_query: 10m
split_queries_by_interval: 15m
# for big logs tune
per_stream_rate_limit: 512M
per_stream_rate_limit_burst: 1024M
cardinality_limit: 200000
ingestion_burst_size_mb: 1000
ingestion_rate_mb: 10000
max_entries_limit_per_query: 1000000
max_label_value_length: 20480
max_label_name_length: 10240
max_label_names_per_series: 300
{{- if .Values.loki.schemaConfig}}
schema_config:
{{- toYaml .Values.loki.schemaConfig | nindent 2}}
{{- end}}
{{- if .Values.loki.storageConfig}}
storage_config:
{{- if .Values.indexGateway.enabled}}
{{- $indexGatewayClient := dict "server_address" (printf "dns:///%s:9095" (include "loki.indexGatewayFullname" .)) }}
{{- $_ := set .Values.loki.storageConfig.boltdb_shipper "index_gateway_client" $indexGatewayClient }}
{{- end}}
{{- toYaml .Values.loki.storageConfig | nindent 2}}
{{- if .Values.memcachedIndexQueries.enabled }}
index_queries_cache_config:
memcached_client:
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexQueriesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
consistent_hash: true
{{- end}}
{{- end}}
runtime_config:
file: /var/{{ include "loki.name" . }}-runtime/runtime.yaml
chunk_store_config:
max_look_back_period: 0s
{{- if .Values.memcachedChunks.enabled }}
chunk_cache_config:
embedded_cache:
enabled: false
memcached_client:
consistent_hash: true
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedChunksFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
{{- end }}
{{- if .Values.memcachedIndexWrites.enabled }}
write_dedupe_cache_config:
memcached_client:
consistent_hash: true
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexWritesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
{{- end }}
table_manager:
retention_deletes_enabled: false
retention_period: 0s
query_range:
align_queries_with_step: true
max_retries: 5
cache_results: true
results_cache:
cache:
{{- if .Values.memcachedFrontend.enabled }}
memcached_client:
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedFrontendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
consistent_hash: true
{{- else }}
embedded_cache:
enabled: true
ttl: 24h
{{- end }}
frontend_worker:
{{- if .Values.queryScheduler.enabled }}
scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
{{- else }}
frontend_address: {{ include "loki.queryFrontendFullname" . }}-headless:9095
{{- end }}
frontend:
log_queries_longer_than: 5s
compress_responses: true
{{- if .Values.queryScheduler.enabled }}
scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
{{- end }}
tail_proxy_url: http://{{ include "loki.querierFullname" . }}:3100
compactor:
working_directory: /tmp/loki/compactor
shared_store: s3
compaction_interval: 2m
retention_enabled: false
ruler:
storage:
type: local
local:
directory: /etc/loki/rules
ring:
kvstore:
store: memberlist
rule_path: /tmp/loki/scratch
alertmanager_url: https://alertmanager.xx
external_url: https://alertmanager.xx
serviceAccount:
create: true
name: loki-sa
imagePullSecrets: []
labels: {}
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::913108190184:role/jm-prod-fluent
automountServiceAccountToken: true
compactor:
enabled: true
retention_enabled: true
shared_store: s3
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
queryFrontend:
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
memory: 500Mi
cpu: 200m
limits:
memory: 500Mi
cpu: 200m
distributor:
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
ingester:
replicas: 2
maxUnavailable: 1
persistence:
enabled: true
claims:
- name: data
size: 1Gi
# storageClass: encrypted-gp3
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
# affinity: ""
querier:
kind: Deployment
replicas: 1
maxUnavailable: 1
# persistence:
# enabled: true
# size: 10Gi
# storageClass: encrypted-gp3
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedChunks:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedFrontend:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedIndexQueries:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
indexGateway:
enabled: true
replicas: 2
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
# serviceMonitor:
# enabled: true
# namespace: logging
# namespaceSelector:
# any: true
# labels:
# prometheus: kube
# prometheusRule:
# enabled: false
# namespace: logging
# annotations: {}
# labels:
# app: loki-kube-prometheus
# prometheus: kube
# groups: []
promtail:
config:
logLevel: info
clients:
- url: http://loki-logging-gateway.logging.svc.cluster.local/loki/api/v1/push

View File

@ -1,86 +0,0 @@
logging:
loki:
storage:
type: filesystem
auth_enabled: false
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4; increase it if you have enough memory and CPU, reduce it if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
lokiCanary:
enabled: false
test:
enabled: false
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 2
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
allocatedMemory: 1024
# Enable minio for storage
minio:
enabled: false
persistence:
size: 10Gi
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
promtail:
config:
logLevel: info
clients:
- url: http://logging-gateway/loki/api/v1/push

View File

@ -1,90 +0,0 @@
distributed:
enabled: false
standalone:
enabled: true
loki:
storage:
type: filesystem
auth_enabled: false
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4; increase it if you have enough memory and CPU, reduce it if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
lokiCanary:
enabled: false
test:
enabled: false
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 2
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
allocatedMemory: 1024
# Enable minio for storage
minio:
enabled: false
persistence:
size: 10Gi
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
promtail:
config:
logLevel: info
clients:
- url: http://logging-gateway/loki/api/v1/push

View File

@ -1,10 +0,0 @@
apiVersion: v2
name: microservice
description: Basic Helm chart for deploying microservices on Kubernetes with best practices
type: application
version: 0.1.8
appVersion: "0.1.2"
maintainers:
- name: ashwani-opstree
- name: tripathishikha1
- name: khushimalhoz

View File

@ -1,45 +0,0 @@
# microservice
Basic Helm chart for deploying microservices on Kubernetes with best practices
![Version: 0.1.8](https://img.shields.io/badge/Version-0.1.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.1.2](https://img.shields.io/badge/AppVersion-0.1.2-informational?style=flat-square)
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Ashwani Singh | <ashwani.singh@opstree.com> | |
| Shikha Tripathi | | |
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install my-release microservice/
```
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| deployment | object | `{"affinity":{},"annotations":{},"environment":{},"image":{"name":"","pullPolicy":"IfNotPresent","tag":""},"livenessProbe":{"failureThreshold":5,"initialDelaySeconds":250,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"nodeSelector":{},"readinessProbe":{"failureThreshold":5,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{},"tolerations":[],"volumeMounts":[],"volumes":{"configMaps":null,"enabled":true,"pvc":{"accessModes":["ReadWriteOnce"],"class":"default","enabled":false,"existing_claim":false,"mountPath":"/pv","name":"pvc","size":"1G"}}}` | Object that configures Deployment instance |
| deployment.image | object | `{"name":"","pullPolicy":"IfNotPresent","tag":""}` | Override default container image format |
| global | object | `{"environment":{},"fullnameOverride":"","imagePullSecrets":[],"nameOverride":"","namespace":"default","replicaCount":1}` | global variables |
| hpa.enabled | bool | `true` | |
| hpa.maxReplicas | int | `1` | |
| hpa.minReplicas | int | `1` | |
| hpa.targetCPU | int | `80` | |
| hpa.targetMemory | int | `80` | |
| kubeVersion | string | `""` | |
| service.annotations | object | `{}` | |
| service.specs[0].name | string | `"http"` | |
| service.specs[0].port | int | `80` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.automount | bool | `true` | |
| serviceAccount.create | bool | `false` | |
| serviceAccount.name | string | `""` | |
> **_NOTE:_** Please find the sample helm values yaml in example repository.
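As an extra illustration of the keys above, a minimal override file might look like this (the image name, tag, and namespace are placeholders, not chart defaults):
```yaml
# my-values.yaml -- minimal sketch for deploying one service
global:
  namespace: "demo"
  replicaCount: 2
deployment:
  image:
    name: myorg/myapp   # placeholder image
    tag: "1.0.0"
service:
  type: ClusterIP
  specs:
    - name: http
      port: 80
hpa:
  enabled: true
  minReplicas: 2
  maxReplicas: 4
  targetCPU: 80
```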

View File

@ -1,22 +0,0 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.maintainersSection" . }}
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release microservice/
```
{{/* {{ template "chart.requirementsSection" . }} */}}
{{ template "chart.valuesSection" . }}
> **_NOTE:_** Please find the sample helm values yaml in example repository.
{{/* {{ template "helm-docs.versionFooter" . }} */}}

View File

@ -1,2 +0,0 @@
name=opstree
address=opstreesolution

View File

@ -1,43 +0,0 @@
global:
namespace: "demo-dev"
fullnameOverride: "webapp"
deployment:
image:
name: nginx
tag: latest
pullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: "/"
port: http
readinessProbe:
httpGet:
path: "/"
port: http
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 500Mi
cpu: 500m
volumes:
enabled: true
configMaps:
- name: index
mountPath: /usr/share/nginx/html
data:
index.html: |
Hello! Opstree
topologySpreadConstraints:
whenUnsatisfiable: "DoNotSchedule"
# serviceAccount:
# create: true
# annotations: "aws arn link"
# serviceAccount:
# name: "myserviceaccount"

View File

@ -1,5 +0,0 @@
You have deployed the following release: {{ include "microservice.fullname" . }}.
To get further information, you can run the commands:
$ helm status {{ include "microservice.fullname" . }}
$ helm get all {{ include "microservice.fullname" . }}

View File

@ -1,36 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the target Kubernetes version
*/}}
{{- define "microservice.capabilities.kubeVersion" -}}
{{- default (default .Capabilities.KubeVersion.Version .Values.kubeVersion) ((.Values.global).kubeVersion) -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for Horizontal Pod Autoscaler.
*/}}
{{- define "microservice.capabilities.hpa.apiVersion" -}}
{{- $kubeVersion := include "microservice.capabilities.kubeVersion" .context -}}
{{- if and (not (empty $kubeVersion)) (semverCompare "<1.23-0" $kubeVersion) -}}
{{- if .beta2 -}}
{{- print "autoscaling/v2beta2" -}}
{{- else -}}
{{- print "autoscaling/v2beta1" -}}
{{- end -}}
{{- else -}}
{{- print "autoscaling/v2" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for deployment.
*/}}
{{- define "microservice.capabilities.deployment.apiVersion" -}}
{{- $kubeVersion := include "microservice.capabilities.kubeVersion" . -}}
{{- if and (not (empty $kubeVersion)) (semverCompare "<1.14-0" $kubeVersion) -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}

View File

@ -1,54 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Create a default fully qualified app name.
It uses the global nameOverride value if set, falling back to the chart name.
*/}}
{{- define "microservice.name" -}}
{{- default .Chart.Name .Values.global.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "microservice.fullname" -}}
{{- if .Values.global.fullnameOverride -}}
{{- .Values.global.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.global.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "microservice.labels" -}}
app: {{ include "microservice.fullname" . }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "microservice.selectorLabels" -}}
app: {{ include "microservice.fullname" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "microservice.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "microservice.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -1,30 +0,0 @@
# ConfigMaps mounted as volumes
{{- if .Values.deployment.volumes.configMaps }}
{{- if .Values.deployment.volumes.enabled }}
{{ $header := .Values.deployment.volumes.configFileCommonHeader | default "" }}
{{ $root := . }}
{{ range $cm := .Values.deployment.volumes.configMaps}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "microservice.fullname" $root }}-{{ $cm.name }}-cm
namespace: {{ $root.Values.global.namespace | quote }}
data:
{{- if $cm.data }}
{{- range $filename, $content := $cm.data }}
# property-like keys; each key maps to a simple value
{{ $filename }}: |-
{{ $content | toString | indent 4}}
{{- end }}
{{- end }}
{{- if $cm.files }}
{{- range $file := $cm.files }}
{{ $file.destination }}: |
{{ $header | toString | indent 4 }}
{{ $root.Files.Get $file.source }}
{{- end}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -1,139 +0,0 @@
{{ $root := . }}
---
apiVersion: {{ include "microservice.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ include "microservice.fullname" . }}-app
namespace: {{ .Values.global.namespace | quote }}
{{- if .Values.deployment.annotations }}
annotations:
{{- range $key, $value := .Values.deployment.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.global.replicaCount }}
{{- if .Values.deployment.strategy }}
strategy:
{{- toYaml .Values.deployment.strategy | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "microservice.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "microservice.selectorLabels" . | nindent 8 }}
{{- if .Values.deployment.podAnnotations }}
annotations:
{{- range $key, $value := .Values.deployment.podAnnotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.serviceAccount.create }}
serviceAccountName: {{ include "microservice.serviceAccountName" . }}-sa
{{- end }}
{{- if .Values.serviceAccount.name }}
serviceAccountName: {{ .Values.serviceAccount.name }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
containers:
- name: {{ include "microservice.fullname" . }}
image: "{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}"
imagePullPolicy: {{ .Values.deployment.image.pullPolicy }}
{{- if .Values.deployment.command }}
command: {{ .Values.deployment.command }}
{{- end }}
{{- if .Values.deployment.args }}
args: {{ .Values.deployment.args }}
{{- end }}
ports:
{{- range .Values.service.specs}}
- name: {{ .name }}
containerPort: {{ .targetPort | default .port}}
protocol: {{ .protocol | default "TCP" }}
{{- end }}
{{- if (merge .Values.global.environment .Values.deployment.environment) }}
env:
{{- range $name, $value := merge .Values.global.environment .Values.deployment.environment}}
- name: {{ $name | quote}}
value: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe.httpGet }}
livenessProbe:
{{- toYaml .Values.deployment.livenessProbe | nindent 12 }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe.httpGet }}
readinessProbe:
{{- toYaml .Values.deployment.readinessProbe | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.deployment.resources | nindent 12 }}
{{- if .Values.deployment.volumes.enabled }}
volumeMounts:
{{- range $conf := .Values.deployment.volumes.configMaps }}
- mountPath: {{ $conf.mountPath }}
name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
{{- end }}
{{- if .Values.deployment.volumes.pvc.enabled }}
- mountPath: {{ .Values.deployment.volumes.pvc.mountPath }}
name: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}-volume
{{- end }}
{{- end }}
{{- if .Values.deployment.volumes.enabled }}
volumes:
{{- range $conf := .Values.deployment.volumes.configMaps }}
- name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
configMap:
name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
{{- end }}
{{- if .Values.deployment.volumes.pvc.enabled}}
- name: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}-volume
persistentVolumeClaim:
claimName: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}
{{- end}}
{{- end }}
{{- with .Values.deployment.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if and .Values.deployment.affinity.enabled (or .Values.deployment.affinity.preferred.enabled .Values.deployment.affinity.required.enabled) }}
affinity:
podAntiAffinity:
{{- if .Values.deployment.affinity.preferred.enabled }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
topologyKey: {{ .Values.deployment.affinity.topologyKey }}
{{- end }}
{{- if and .Values.deployment.affinity.required.enabled (not .Values.deployment.affinity.preferred.enabled) }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
topologyKey: {{ .Values.deployment.affinity.topologyKey }}
{{- end }}
{{- end }}
{{- if .Values.deployment.topologySpreadConstraints.enabled }}
topologySpreadConstraints:
- maxSkew: 1
topologyKey: {{ .Values.deployment.topologySpreadConstraints.topologyKey }}
whenUnsatisfiable: "{{ .Values.deployment.topologySpreadConstraints.whenUnsatisfiable }}"
labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
{{- if ( eq .Values.deployment.topologySpreadConstraints.whenUnsatisfiable "DoNotSchedule")}}
minDomains: 2
{{- end }}
{{- end }}

View File

@ -1,41 +0,0 @@
{{- if .Values.hpa.enabled }}
apiVersion: {{ include "microservice.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "microservice.fullname" . }}-hpa
namespace: {{ .Values.global.namespace | quote }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: {{ include "microservice.capabilities.deployment.apiVersion" . }}
kind: Deployment
name: {{ include "microservice.fullname" . }}-app
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
{{- if .Values.hpa.targetMemory }}
- type: Resource
resource:
name: memory
{{- if semverCompare "<1.23-0" (include "microservice.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.hpa.targetMemory }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.hpa.targetMemory }}
{{- end }}
{{- end }}
{{- if .Values.hpa.targetCPU }}
- type: Resource
resource:
name: cpu
{{- if semverCompare "<1.23-0" (include "microservice.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.hpa.targetCPU }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.hpa.targetCPU }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -1,21 +0,0 @@
{{- if .Values.deployment.volumes.pvc.enabled }}
{{- if .Values.deployment.volumes.pvc.existing_claim -}}
{{- else -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.deployment.volumes.pvc.name }}
namespace: {{ .Values.global.namespace | quote }}
spec:
{{- if .Values.deployment.volumes.pvc.class }}
storageClassName: {{ .Values.deployment.volumes.pvc.class }}
{{- end }}
accessModes:
{{- range $accessMode := .Values.deployment.volumes.pvc.accessModes }}
- {{ $accessMode }}
{{- end }}
resources:
requests:
storage: {{ .Values.deployment.volumes.pvc.size }}
{{- end }}
{{- end }}

View File

@ -1,34 +0,0 @@
{{- $root:= . }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "microservice.fullname" . }}-svc
namespace: {{ .Values.global.namespace | quote }}
{{- if .Values.service.annotations }}
annotations:
{{- range $key, $value := .Values.service.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
{{- include "microservice.selectorLabels" . | nindent 4 }}
ports:
{{- range $spec := .Values.service.specs }}
- name: {{ $spec.name }}
port: {{ $spec.port }}
protocol: {{ $spec.protocol | default "TCP" }}
{{- if $spec.targetPort }}
targetPort: {{ $spec.targetPort }}
{{- else }}
targetPort: {{ $spec.name }}
{{- end}}
{{- if $spec.nodePort }}
nodePort: {{ $spec.nodePort }}
{{- end }}
{{- end -}}

View File

@ -1,14 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "microservice.serviceAccountName" . }}-sa
namespace: {{ .Values.global.namespace | quote }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}

View File

@ -1,155 +0,0 @@
# -- global variables
global:
namespace: "default"
replicaCount: 1
nameOverride: ""
fullnameOverride: ""
imagePullSecrets: []
environment: {}
# list of key: value
# GLOBAL1: value
## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
# -- Object that configures Deployment instance
deployment:
# -- Override default container image format
image:
name: ""
tag: ""
pullPolicy: IfNotPresent
strategy: {}
# Annotation for the Deployment
annotations: {}
podAnnotations: {}
terminationGracePeriodSeconds: 60
healthProbes:
enabled: true
# livenessProbe: {}
livenessProbe:
# httpGet:
# path: "/"
# port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
# readinessProbe: {}
readinessProbe:
# httpGet:
# path: "/"
# port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
# command: ["/bin/sh","-c"]
# args: ["echo 'consuming a message'; sleep 5"]
environment: {}
# VAR1: value1
resources: {}
# resources:
# requests:
# memory: 100Mi
# cpu: 100m
# limits:
# memory: 100Mi
# cpu: 100m
# Additional volumes on the output Deployment definition.
volumes:
enabled: true
pvc:
enabled: false
existing_claim: false
name: pvc
mountPath: /pv
size: 1G
class: "default"
accessModes:
- ReadWriteOnce
# configFileCommonHeader: |
# line1
# line2
configMaps:
# - name: test
# mountPath: /test
# data:
# test.conf: |
# hello
# hello2
# - name: test-from-file
# mountPath: /test2
# files:
# - source: config.conf
# destination: application.conf
# - name: test-mixed
# mountPath: /test3
# data:
# test2.conf: |
# another hello
# files:
# - source: config.conf
# destination: application2.conf
# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
# mountPath: "/etc/foo"
# readOnly: true
nodeSelector: {}
tolerations: []
affinity:
enabled: true
preferred:
enabled: true
required:
enabled: false
topologyKey: "topology.kubernetes.io/zone"
topologySpreadConstraints:
enabled: true
# whenUnsatisfiable: "DoNotSchedule" OR "ScheduleAnyway"
whenUnsatisfiable: "ScheduleAnyway"
topologyKey: "topology.kubernetes.io/zone"
hpa:
enabled: true
minReplicas: 1
maxReplicas: 1
targetCPU: 80
targetMemory: 80
service:
type: ClusterIP
annotations: {}
specs:
- port: 80
name: http
serviceAccount:
create: false
automount: true
annotations: {}
name: ""

View File

@ -1,21 +0,0 @@
---
apiVersion: v1
description: Provides easy MongoDB cluster setup definitions for Kubernetes, including services and deployments.
engine: gotpl
maintainers:
- name: Opstree Solutions
name: mongodb-cluster
sources:
- https://github.com/ot-container-kit/mongodb-operator
version: 0.3.0
appVersion: "0.3.0"
home: https://github.com/ot-container-kit/mongodb-operator
keywords:
- operator
- MongoDB
- opstree
- kubernetes
- openshift
- mongodb-exporter
- mongodb-cluster
icon: https://github.com/OT-CONTAINER-KIT/mongodb-operator/raw/main/static/mongodb-operator-logo.svg

View File

@ -1,53 +0,0 @@
## MongoDB Cluster
MongoDB is a NoSQL distributed database; this Helm chart is only for the standalone setup. It requires the [MongoDB Operator](../mongodb-operator) inside the Kubernetes cluster. The MongoDB definition can be modified via [values.yaml](./values.yaml).
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/mongodb-cluster --namespace <namespace>
```
The MongoDB setup can be upgraded by using the `helm upgrade` command:
```shell
$ helm upgrade <my-release> ot-helm/mongodb-cluster --install --namespace <namespace>
```
For uninstalling the chart:
```shell
$ helm delete <my-release> --namespace <namespace>
```
### Prerequisites
- Kubernetes >= 1.15
- Helm >= 3.X
- MongoDB Operator >= 0.1.0
### Parameters
| **Name** | **Value** | **Description** |
|-------------------------------------------|:------------------------:|------------------------------------------------------|
| `clusterSize` | 3 | Size of the MongoDB cluster |
| `image.name` | quay.io/opstree/mongo | Name of the MongoDB image |
| `image.tag` | v5.0 | Tag for the MongoDB image |
| `image.imagePullPolicy` | IfNotPresent | Image Pull Policy of the MongoDB |
| `image.pullSecret` | "" | Image Pull Secret for private registry |
| `resources` | {} | Request and limits for MongoDB statefulset |
| `storage.enabled` | true | Whether storage is enabled for MongoDB |
| `storage.accessModes` | ["ReadWriteOnce"] | AccessMode for the storage provider |
| `storage.storageSize` | 1Gi | Size of the storage for MongoDB |
| `storage.storageClass` | csi-cephfs-sc | Name of the storageClass used to create storage |
| `mongoDBMonitoring.enabled` | true | Whether the MongoDB exporter should be deployed |
| `mongoDBMonitoring.image.name` | bitnami/mongodb-exporter | Name of the MongoDB exporter image |
| `mongoDBMonitoring.image.tag` | 0.11.2-debian-10-r382 | Tag of the MongoDB exporter image |
| `mongoDBMonitoring.image.imagePullPolicy` | IfNotPresent | Image Pull Policy of the MongoDB exporter image |
| `serviceMonitor.enabled` | false | Servicemonitor to monitor MongoDB with Prometheus |
| `serviceMonitor.interval` | 30s | Interval at which metrics should be scraped. |
| `serviceMonitor.scrapeTimeout` | 10s | Timeout after which the scrape is ended |
| `serviceMonitor.namespace` | monitoring | Namespace in which Prometheus operator is running |
| `nodeSelector` | {} | Nodeselector for the MongoDB statefulset |
| `priorityClassName` | "" | Priority class name for the MongoDB statefulset |
| `affinity` | {} | Affinity for node and pods for MongoDB statefulset |
| `tolerations` | [] | Tolerations for MongoDB statefulset |
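For example, to run a larger cluster with ServiceMonitor-based scraping enabled, an override file along these lines can be supplied (the storage class and size are placeholders for your environment):
```yaml
# custom-values.yaml -- example override; storageClass is a placeholder
clusterSize: 5
storage:
  enabled: true
  storageSize: 10Gi
  storageClass: gp3
serviceMonitor:
  enabled: true
  interval: 30s
  scrapeTimeout: 10s
  namespace: monitoring
```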

View File

@ -1,11 +0,0 @@
CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
The Helm chart for the MongoDB standalone setup has been deployed.
Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l app={{ .Release.Name }}-cluster
To get the credentials of the admin user:
kubectl get secrets -n {{ .Release.Namespace }} {{ .Release.Name }}-secret -o jsonpath="{.data.password}" | base64 -d

View File

@ -1,17 +0,0 @@
{{- if .Values.mongoDBConfiguration }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-additional-config
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
data:
mongo.yaml: |
{{ .Values.mongoDBConfiguration | indent 4 }}
{{- end }}

View File

@ -1,59 +0,0 @@
---
apiVersion: opstreelabs.in/v1alpha1
kind: MongoDBCluster
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
spec:
clusterSize: {{ .Values.clusterSize }}
kubernetesConfig:
image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.imagePullPolicy }}
{{- if .Values.image.pullSecret }}
imagePullSecret: {{ .Values.image.pullSecret }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 6 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 6 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 6 }}
{{- if .Values.storage.enabled }}
storage:
accessModes: {{ .Values.storage.accessModes }}
storageSize: {{ .Values.storage.storageSize }}
storageClass: {{ .Values.storage.storageClass }}
{{- end }}
{{- if .Values.mongoDBConfiguration }}
mongoDBAdditionalConfig: {{ .Release.Name }}-additional-config
{{- end }}
mongoDBSecurity:
mongoDBAdminUser: admin
secretRef:
name: {{ .Release.Name }}-secret
key: password
{{- if .Values.mongoDBMonitoring.enabled }}
mongoDBMonitoring:
enableExporter: {{ .Values.mongoDBMonitoring.enabled }}
image: "{{ .Values.mongoDBMonitoring.image.name }}:{{ .Values.mongoDBMonitoring.image.tag }}"
imagePullPolicy: {{ .Values.mongoDBMonitoring.image.imagePullPolicy }}
resources:
{{ toYaml .Values.mongoDBMonitoring.resources | indent 6 }}
{{- end }}

View File

@ -1,16 +0,0 @@
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-secret
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: database
type: Opaque
data:
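# NOTE: randAlphaNum produces a fresh random password on every render, so the secret rotates on each helm install/upgrade.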
password: {{ randAlphaNum 20 | b64enc }}

View File

@ -1,27 +0,0 @@
{{- if eq .Values.serviceMonitor.enabled true }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ .Release.Name }}-prometheus-monitoring
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
spec:
selector:
matchLabels:
app: {{ .Release.Name }}
mongodb_setup: standalone
role: standalone
endpoints:
- port: metrics
interval: {{ .Values.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
namespaceSelector:
matchNames:
- {{ .Values.serviceMonitor.namespace }}
{{- end }}

View File

@ -1,67 +0,0 @@
---
clusterSize: 3
image:
name: quay.io/opstree/mongo
tag: v5.0
imagePullPolicy: IfNotPresent
# pullSecret: regcred
nodeSelector: {}
# memory: high
#priorityClassName: "-"
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: disktype
# operator: In
# values:
# - ssd
tolerations: []
# - key: "example-key"
# operator: "Exists"
# effect: "NoSchedule"
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
storage:
enabled: true
accessModes: ["ReadWriteOnce"]
storageSize: 1Gi
storageClass: csi-cephfs-sc
mongoDBMonitoring:
enabled: true
image:
name: bitnami/mongodb-exporter
tag: 0.11.2-debian-10-r382
imagePullPolicy: IfNotPresent
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
#mongoDBConfiguration: |-
# net:
# bindIp: 0.0.0.0
# port: 27017
serviceMonitor:
enabled: false
interval: 30s
scrapeTimeout: 10s
namespace: monitoring
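With these defaults, the MongoDBCluster resource produced by the template above should render roughly like this (a sketch assuming a release named `mongodb`; chart labels trimmed):
```yaml
apiVersion: opstreelabs.in/v1alpha1
kind: MongoDBCluster
metadata:
  name: mongodb
spec:
  clusterSize: 3
  kubernetesConfig:
    image: "quay.io/opstree/mongo:v5.0"
    imagePullPolicy: IfNotPresent
    resources: {}
  storage:
    accessModes: [ReadWriteOnce]
    storageSize: 1Gi
    storageClass: csi-cephfs-sc
  mongoDBSecurity:
    mongoDBAdminUser: admin
    secretRef:
      name: mongodb-secret
      key: password
  mongoDBMonitoring:
    enableExporter: true
    image: "bitnami/mongodb-exporter:0.11.2-debian-10-r382"
    imagePullPolicy: IfNotPresent
    resources: {}
```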

View File

@ -1 +0,0 @@
*.tgz

View File

@ -1,24 +0,0 @@
---
apiVersion: v2
appVersion: "0.3.0"
description: Helm chart to deploy and manage MongoDB operator in Kubernetes.
engine: gotpl
maintainers:
- name: Abhishek Dubey
- name: Sandeep Rawat
- name: Shubham Gupta
name: mongodb-operator
sources:
- https://github.com/OT-CONTAINER-KIT/mongodb-operator
version: 0.3.1
home: https://github.com/OT-CONTAINER-KIT/mongodb-operator
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/mongodb-operator/main/static/mongodb-operator-logo.svg
keywords:
- operator
- mongodb
- opstree
- kubernetes
- openshift
- backup
- restore
- mongodb-cluster

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,64 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
name: {{ .Release.Name }}
template:
metadata:
labels:
name: {{ .Release.Name }}
spec:
securityContext:
runAsNonRoot: true
containers:
- name: "{{ .Release.Name }}"
image: "{{ .Values.mongoDBOperator.imageName }}:{{ .Values.mongoDBOperator.imageTag }}"
imagePullPolicy: {{ .Values.mongoDBOperator.imagePullPolicy }}
command:
- /manager
args:
- --leader-elect
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- end }}
{{- if .Values.readinessProbe }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
{{- end }}
{{- if .Values.livenessProbe }}
livenessProbe:
{{ toYaml .Values.livenessProbe | indent 10 }}
{{- end }}
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
{{- with .Values.nodeSelector }}
{{ toYaml . | indent 8 }}
{{- end }}
priorityClassName: {{ .Values.priorityClassName }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 }}
{{- end }}
tolerations:
{{- if .Values.tolerateAllTaints }}
- operator: Exists
{{- end }}
{{- with .Values.tolerations }}
{{ toYaml . | indent 8 }}
{{- end }}
serviceAccountName: "{{ .Release.Name }}"
serviceAccount: "{{ .Release.Name }}"

View File

@ -1,20 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Name }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ .Release.Name }}
apiGroup: rbac.authorization.k8s.io

View File

@ -1,104 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Release.Name }}
labels:
control-plane: "{{ .Release.Name }}"
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
rules:
- apiGroups:
- ""
resources:
- configmaps
- events
- secrets
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opstreelabs.in
resources:
- mongodbclusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opstreelabs.in
resources:
- mongodbclusters/finalizers
verbs:
- update
- apiGroups:
- opstreelabs.in
resources:
- mongodbclusters/status
verbs:
- get
- patch
- update
- apiGroups:
- opstreelabs.in
resources:
- mongodbs
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opstreelabs.in
resources:
- mongodbs/finalizers
verbs:
- update
- apiGroups:
- opstreelabs.in
resources:
- mongodbs/status
verbs:
- get
- patch
- update

Some files were not shown because too many files have changed in this diff