Compare commits


18 Commits

Author SHA1 Message Date
tarunsinghot 20dd283856
Merge 0cf050b88f into fe8cec4075 2025-08-18 20:40:16 +00:00
sharvarikhamkar1304 fe8cec4075
feat : Added helm chart for ingress management (#285)
* Add ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Move ingress-management chart under charts directory

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Create README.md

* Update README.md

* Update README.md

* Update README.md

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix: Update Chart.yaml metadata with repo and maintainer info

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix maintainer name

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* update

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Delete charts/ingress-management/etc directory

* Delete charts/ingress-management/LICENSE

* Update README.md

* Update README.md

* fix: update helm chart for ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Fix: update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

---------

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>
2025-08-05 13:41:19 +05:30
Abhishek Dubey 22a3df98c8
[COE][Add] Added helm chart for worker based deployment (#281)
* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-07-04 00:52:37 +05:30
Prashantdev780 9a6b440441
Helm fix for liveliness probe and readiness probe (#279) 2025-06-30 20:07:40 +05:30
Sandeep Rawat 6306426bed
Merge pull request #278 from OT-CONTAINER-KIT/helm_fix
Fixed secrets block in deployment.yml of microservice
2025-06-30 18:34:38 +05:30
Prashantdev780 ff645ddecd
Update Chart.yaml 2025-06-30 18:32:37 +05:30
Prashant Sharma 3619d89003 Fixed secrets block in deployment.yml of microservice 2025-06-30 17:13:05 +05:30
Tarun Singh 0cf050b88f Removed extra commented lines
Signed-off-by: Tarun Singh <tarun.singh@opstree.com>
2024-12-24 18:43:52 +05:30
Tarun Singh 99ef8dba65 Updated readme and values
Signed-off-by: Tarun Singh <tarun.singh@opstree.com>
2024-12-24 18:39:06 +05:30
Tarun Singh 9135799d00 Merge branch 'main' of https://github.com/OT-CONTAINER-KIT/helm-charts into percona-postgresql 2024-12-24 11:08:16 +05:30
Tarun Singh ba3ad30738 tested backup & restore and created templates 2024-12-10 18:05:20 +05:30
Tarun Singh e8f6eeaf95 updated readme and added pmm values 2024-07-26 16:35:05 +05:30
Tarun Singh f885882ed6 added doc and updated values 2024-07-23 18:52:52 +05:30
Ashwani Singh df898b7ff3
Disable client side ssl validation 2024-07-20 09:40:44 +05:30
Tarun Singh c1526ef38a updated values 2024-07-18 18:45:19 +05:30
Tarun Singh d3cff964a7 Adding combined pg operator and DB Helm Chart 2024-07-18 15:08:05 +05:30
Tarun Singh e0c98b1776 updated pg db values 2024-07-17 13:39:21 +05:30
Tarun Singh df3e88cad4 helm charts for postgresql percona operator and db 2024-07-17 13:14:23 +05:30
20 changed files with 1407 additions and 4 deletions

@@ -0,0 +1,16 @@
apiVersion: v2
name: ingress-management
description: A Helm chart to manage Ingress traffic
version: 0.1.0
appVersion: "1.0"
home: https://github.com/ot-container-kit/helm-charts
maintainers:
  - name: sharvarikhamkar1304
keywords:
  - ingress
  - kong
  - httpRoute
  - kubernetes
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg
sources:
  - https://github.com/ot-container-kit/helm-charts

@@ -0,0 +1,49 @@
# Ingress Management Helm Chart

A simple and reusable Helm chart to manage Kubernetes Gateway API HTTPRoutes for routing traffic to backend services.

This chart helps manage HTTPRoute resources to expose services using the Kubernetes Gateway API. You can customize host, path, service, and namespace via values.

## Homepage

[https://github.com/ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)

## Maintainers

| Name | URL |
| ---------------- | --------------------------------------------- |
| sharvari-khamkar | [GitHub](https://github.com/sharvari-khamkar) |

## Source Code

[GitHub - ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)

## Requirements

| Repository | Name | Version |
| ------------------------------------------------------------------------------------------------ | ---- | ------- |
| [https://ot-container-kit.github.io/helm-charts](https://ot-container-kit.github.io/helm-charts) | base | 0.1.0 |

## Values

| **Attribute** | **Scope** | **Example** | **Description** | **Default** |
| -------------- | -------------- | ------------------- | -------------------------------------------------------------------- | ----------- |
| `name` | Global | `"my-app"` | Name of the HTTPRoute and backend service (the app name) | `""` |
| `namespace` | Global | `"default"` | Kubernetes namespace where resources like HTTPRoute will be deployed | `""` |
| `host` | Routing | `"app.example.com"` | Hostname to expose the app | `""` |
| `path` | Routing | `"/api"` | Path under the host | `""` |
| `service.name` | Service Config | `"my-backend-svc"` | Name of the backend service to which traffic will be routed | `""` |
| `service.kind` | Service Config | `"Service"` | Kind of backend resource (`Service` by default) | `"Service"` |
| `service.port` | Service Config | `80` | Port on which the backend service listens | `80` |
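As an illustration, a values file built from the examples in the table above might look like the following (the names and hostname are the table's placeholder examples, not defaults):

```yaml
# Hypothetical my-values.yaml assembled from the documented keys above.
name: my-app
namespace: default
host: app.example.com
path: /api
service:
  name: my-backend-svc
  kind: Service
  port: 80
```

Such a file could then be passed at install time with `helm install -f my-values.yaml`.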

@@ -0,0 +1,46 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: {{ required "A valid 'name' is required!" .Values.name }}
  {{- if .Values.labels }}
  labels:
{{ toYaml .Values.labels | indent 4 }}
  {{- end }}
  {{- if .Values.annotations }}
  annotations:
{{ toYaml .Values.annotations | indent 4 }}
  {{- end }}
spec:
  {{- if .Values.parentRefs }}
  parentRefs:
    {{- range .Values.parentRefs }}
    - name: {{ .name }}
      {{- if .namespace }}
      namespace: {{ .namespace }}
      {{- end }}
    {{- end }}
  {{- end }}
  {{- if .Values.hostnames }}
  hostnames:
    {{- range .Values.hostnames }}
    - "{{ . }}"
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.rules }}
    - matches:
        {{- range .matches }}
        - path:
            type: {{ .path.type }}
            value: {{ .path.value | quote }}
        {{- end }}
      backendRefs:
        {{- range .backendRefs }}
        - name: {{ .name }}
          kind: {{ .kind | default "Service" }}
          port: {{ .port }}
        {{- end }}
    {{- end }}

@@ -0,0 +1,60 @@
---
# charts/ingress-management/values.yaml

# -- Name of the HTTPRoute and backend service (typically the app name)
name: ""

# -- Labels to apply to the HTTPRoute metadata
labels:
  app: ""

# -- Optional annotations to apply to the HTTPRoute resource
annotations: {}

# -- Reference to the Gateway (parentRefs)
parentRefs:
  - name: ""
    namespace: ""

# -- Hostnames to be matched in the HTTPRoute
hostnames:
  - ""

# -- Routing rules for HTTPRoute
rules:
  - matches:
      - path:
          type: PathPrefix
          value: ""
    backendRefs:
      - name: ""
        kind: Service
        port: 80

# -----------------------------------------------------
# Example values.yaml File
# -----------------------------------------------------
# name: open-webui
# labels:
#   app: open-webui
# annotations:
#   konghq.com/protocols: https
#   konghq.com/https-redirect-status-code: "301"
# parentRefs:
#   - name: kong
#     namespace: default
# hostnames:
#   - bp-ai.opstree.dev
# rules:
#   - matches:
#       - path:
#           type: PathPrefix
#           value: /
#     backendRefs:
#       - name: open-webui
#         kind: Service
#         port: 80

@@ -2,7 +2,7 @@ apiVersion: v2
name: microservice
description: Basic helm chart for deploying microservices on kubernetes with best practices
type: application
-version: 0.1.6
+version: 0.1.8
appVersion: "0.1.2"
maintainers:
- name: ashwani-opstree

@@ -33,7 +33,7 @@ spec:
{{- end }}
{{- end }}
spec:
-{{- with .Values.imagePullSecrets }}
+{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
@@ -67,11 +67,11 @@ spec:
value: {{ $value | quote }}
{{- end }}
{{- end }}
-{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe }}
+{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe.httpGet }}
livenessProbe:
{{- toYaml .Values.deployment.livenessProbe | nindent 12 }}
{{- end }}
-{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe }}
+{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe.httpGet }}
readinessProbe:
{{- toYaml .Values.deployment.readinessProbe | nindent 12 }}
{{- end }}
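After this change, a probe is rendered only when `healthProbes.enabled` is true and the probe block actually defines an `httpGet` check. A minimal values sketch that would satisfy both guards, assuming the chart's `deployment.*` keys shown in the diff (paths, ports, and delays are illustrative):

```yaml
deployment:
  healthProbes:
    enabled: true            # master switch for both probes
  livenessProbe:
    httpGet:
      path: /healthz         # illustrative endpoint
      port: 8080
    initialDelaySeconds: 10
  readinessProbe:
    httpGet:
      path: /ready           # illustrative endpoint
      port: 8080
    initialDelaySeconds: 5
```

Omitting the `httpGet` key (or setting `healthProbes.enabled: false`) now skips the probe entirely instead of emitting an empty block.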

@@ -0,0 +1,21 @@
apiVersion: v2
name: pg-operator-db
description: A Helm chart for Percona Operator and Percona Distribution for PostgreSQL
type: application
version: 1.0.0
appVersion: 1.0.0
dependencies:
  - name: pg-operator
    version: 2.5.0
    repository: https://percona.github.io/percona-helm-charts/
    alias: pg-operator
    tags:
      - pg-operator
    condition: pg-operator.enabled
  - name: pg-db
    version: 2.5.1
    repository: https://percona.github.io/percona-helm-charts/
    alias: pg-db
    tags:
      - pg-db
    condition: pg-db.enabled
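Because both dependencies declare a `condition`, each subchart can be toggled from the parent chart's values. A minimal sketch, assuming the parent values file exposes these keys:

```yaml
# Hypothetical parent values toggling the two subcharts via their conditions.
pg-operator:
  enabled: true    # deploy the Percona Operator subchart
pg-db:
  enabled: true    # deploy the PostgreSQL cluster subchart
```

The same toggles can be set at install time, e.g. `--set pg-db.enabled=false` to deploy only the operator.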

@@ -0,0 +1,18 @@
- **Backup and restore**: Tested using backup.yaml and restore.yaml respectively, with Azure Blob Storage. To use cloud storage for backups, a Kubernetes Secret needs to be created: https://docs.percona.com/percona-operator-for-postgresql/2.0/backup-tutorial.html#configure-backup-storage
- **Extension installation**: To use an extension, install it by running the `CREATE EXTENSION` command on the PostgreSQL node where you want the extension to be available, e.g. `CREATE EXTENSION hstore SCHEMA demo;` https://docs.percona.com/postgresql/13/extensions.html
- **PgBouncer**: The cluster is exposed through PgBouncer, which is enabled by default and acts as a DB proxy. It can be disabled by setting `proxy.pgBouncer.replicas` to 0. https://docs.percona.com/percona-operator-for-postgresql/2.0/expose.html
- **Patroni template**: A template for configuring a highly available PostgreSQL cluster. https://docs.percona.com/postgresql/16/solutions/high-availability.html
- **LLVM (for JIT compilation)**: The Percona Operator is based on CrunchyData's PostgreSQL Operator, which includes LLVM for JIT compilation. JIT compilation turns some form of interpreted program evaluation into a native program at run time. For example, instead of using general-purpose code that can evaluate arbitrary SQL expressions to evaluate a predicate like `WHERE a.col = 3`, it is possible to generate a function specific to that expression which can be natively executed by the CPU, yielding a speedup. https://www.postgresql.org/docs/current/jit-reason.html
- **Disaster recovery**: A production-grade PostgreSQL disaster recovery solution needs something that can take full or incremental database backups from a running instance and restore from those backups at any point in time. Percona Distribution for PostgreSQL ships with pgBackRest, a reliable, open-source backup and recovery solution for PostgreSQL. pgBackRest supports remote repository hosting and can use cloud-based services such as AWS S3, Google Cloud Storage, and Azure Blob Storage for saving backup files. https://docs.percona.com/postgresql/14/solutions/backup-recovery.html
- **Switchover**: In the Percona Operator, primary instance election can be controlled through the `patroni.switchover` section of the Custom Resource manifest. It allows a switchover targeting a specific PostgreSQL instance as the new primary, or a failover if the PostgreSQL cluster has entered a bad state. https://docs.percona.com/percona-operator-for-postgresql/2.0/change-primary.html
- **User and DB creation**: Users and databases can be created by providing values in the `users` section of values.yaml. https://docs.percona.com/percona-operator-for-postgresql/2.0/users.html
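The backup-storage Secret mentioned above can be sketched roughly as follows; the Secret name, storage account, and key are placeholders, and the exact pgBackRest option names should be verified against the Percona tutorial linked above. The Secret is then referenced from the chart via `backups.pgbackrest.configuration`.

```yaml
# Hypothetical Secret holding pgBackRest credentials for Azure Blob Storage.
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-pgbackrest-secrets   # placeholder name
type: Opaque
stringData:
  azure.conf: |
    [global]
    repo1-azure-account=<storage-account-name>
    repo1-azure-key=<access-key>
```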

@@ -0,0 +1,303 @@
# Percona Operator and Distribution for PostgreSQL
This chart deploys Percona Operator and Percona Distribution for PostgreSQL on Kubernetes.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-postgresql-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-postgresql/index.html)
## Pre-requisites

* Kubernetes 1.28+
* Helm v3.2.3 or later

## Installation

This chart will deploy the Operator Pod and a PostgreSQL cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of the corresponding Kubernetes primitives: Deployments, Pods, Secrets, etc.

### Installing the Chart

To install the chart with the `my-db` release name in a dedicated namespace (recommended):

```sh
helm dependency build
helm install my-db <path-to-chart> --namespace my-namespace
```

The chart can be customized using the following configurable parameters:
These parameters are for `pg-operator`:
| Parameter | Description | Default |
| -------------------- | ---------------------------------------------------------------------------------- | ------------------------------------------- |
| `image` | PG Operator Container image full path | `percona/percona-postgresql-operator:2.5.0` |
| `imagePullPolicy` | PG Operator Container pull policy | `Always` |
| `resources` | Resource requests and limits | `{}` |
| `nodeSelector` | Labels for Pod assignment | `{}` |
| `logStructured` | Force PG operator to print JSON-wrapped log messages | `false` |
| `logLevel` | PG Operator logging level | `INFO` |
| `disableTelemetry` | Disable sending PG Operator telemetry data to Percona | `false` |
| `podAnnotations` | Add annotations to the Operator Pod | `{}` |
| `watchNamespace` | Set this variable if the target cluster namespace differs from the operator's namespace | `` |
| `watchAllNamespaces` | K8S Cluster-wide operation | `false` |
These parameters are for `pg-db`:
| Parameter | Description | Default |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------ |
| `finalizers` | Finalizers list | `{}` |
| `crVersion` | CR Cluster Manifest version | `2.5.0` |
| `repository` | PostgreSQL container image repository | `percona/percona-postgresql-operator` |
| `image` | Postgres image | `percona/percona-postgresql-operator:2.5.0-ppg16.4-postgres` |
| `imagePullPolicy` | Image pull policy | `Always` |
| `port` | PostgreSQL port | `5432` |
| `postgresVersion` | PostgreSQL container version tag | `16` |
| `pause` | Stop PostgreSQL Database safely | `false` |
| `unmanaged` | Start cluster and don't manage it (cross cluster replication) | `false` |
| `standby.enabled` | Switch/start PostgreSQL Database in standby mode | `false` |
| `standby.host` | Host address of the primary cluster this standby cluster connects to | `` |
| `standby.port` | Port number used by a standby copy to connect to the primary cluster | `` |
| `standby.repoName` | Name of the pgBackRest repository in the primary cluster this standby cluster connects to | `` |
| `customRootCATLSSecret.name` | Name of the secret with the custom root CA certificate and key for secure connections to the PostgreSQL server | `` |
| `customRootCATLSSecret.items` | Key-value pairs of the `key` (a key from the `secrets.customRootCATLSSecret.name` secret) and the `path` (name on the file system) for the custom root certificate and key | `` |
| `customTLSSecret.name` | A secret with TLS certificate generated for external communications | `""` |
| `customReplicationTLSSecret.name` | A secret with TLS certificate generated for internal communications | `""` |
| `openshift` | Set to true if the cluster is being deployed on OpenShift, set to false otherwise, or unset it for autodetection | `false` |
| `users.name` | The name of the PostgreSQL user | `""` |
| `users.databases` | Databases accessible by a specific PostgreSQL user with rights to create objects in them (the option is ignored for the postgres user; also, modifying it can't be used to revoke already-granted access) | `{}` |
| `users.options` | The ALTER ROLE options other than password (the option is ignored for postgres user) | `""` |
| `users.password.type` | The set of characters used for password generation: can be either ASCII (default) or AlphaNumeric | `ASCII` |
| `users.secretName` | User secret name | `"rhino-credentials"` |
| `databaseInitSQL.key` | Data key for the Custom configuration options ConfigMap with the init SQL file, which will be executed at cluster creation time | `init.sql` |
| `databaseInitSQL.name` | Name of the ConfigMap with the init SQL file, which will be executed at cluster creation time | `cluster1-init-sql` |
| | | |
| `dataSource.postgresCluster.clusterName` | Name of an existing cluster to use as the data source when restoring backup to a new cluster | `""` |
| `dataSource.postgresCluster.repoName` | Name of the pgBackRest repository in the source cluster that contains the backup to be restored to a new cluster | `""` |
| `dataSource.postgresCluster.options` | The pgBackRest command-line options for the pgBackRest restore command | `[]` |
| `dataSource.postgresCluster.tolerations.effect` | The Kubernetes Pod tolerations effect for data migration jobs | `NoSchedule` |
| `dataSource.postgresCluster.tolerations.key` | The Kubernetes Pod tolerations key for data migration jobs | `role` |
| `dataSource.postgresCluster.tolerations.operator` | The Kubernetes Pod tolerations operator for data migration jobs | `Equal` |
| `dataSource.postgresCluster.tolerations.value` | The Kubernetes Pod tolerations value for data migration jobs | `connection-poolers` |
| `dataSource.pgbackrest.stanza` | Name of the pgBackRest stanza to use as the data source when restoring backup to a new cluster | `""` |
| `dataSource.pgbackrest.configuration[].secret.name` | Name of the Kubernetes Secret object with custom pgBackRest configuration, which will be added to the pgBackRest configuration generated by the Operator | `""` |
| `dataSource.pgbackrest.global.repo1-path` | Repo path to be included in the global section of the pgBackRest configuration generated by the Operator | `""` |
| `dataSource.pgbackrest.tolerations.effect` | The Kubernetes Pod tolerations effect for pgBackRest | `NoSchedule` |
| `dataSource.pgbackrest.tolerations.key` | The Kubernetes Pod tolerations key for pgBackRest | `role` |
| `dataSource.pgbackrest.tolerations.operator` | The Kubernetes Pod tolerations operator for pgBackRest | `Equal` |
| `dataSource.pgbackrest.tolerations.value` | The Kubernetes Pod tolerations value for pgBackRest | `connection-poolers` |
| `dataSource.pgbackrest.repo.name` | Name of the pgBackRest repository | `""` |
| `dataSource.pgbackrest.repo.s3.bucket` | The Amazon S3 bucket name used for backups | `""` |
| `dataSource.pgbackrest.repo.s3.endpoint` | The endpoint URL of the S3-compatible storage to be used for backups (not needed for the original Amazon S3 cloud) | `""` |
| `dataSource.pgbackrest.repo.s3.region` | The AWS region to use for Amazon and all S3-compatible storages | `""` |
| `dataSource.volumes.pgDataVolume` | Defines the existing pgData volume and directory to use in the current PostgresCluster | `{}` |
| `dataSource.volumes.pgWALVolume` | Defines the existing pg_wal volume and directory to use in the current PostgresCluster | `{}` |
| `dataSource.volumes.pgBackRestVolume` | Defines the existing pgBackRest repo volume and directory to use in the current PostgresCluster | `{}` |
| | | |
| `expose.annotations` | The Kubernetes annotations metadata for PostgreSQL | `{}` |
| `expose.labels` | Set labels for the PostgreSQL Service | `{}` |
| `expose.type` | Specifies the type of Kubernetes Service for PostgreSQL | `LoadBalancer` |
| `expose.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `exposeReplicas.annotations` | The Kubernetes annotations metadata for PostgreSQL replicas | `{}` |
| `exposeReplicas.labels` | Set labels for the PostgreSQL Service replicas | `{}` |
| `exposeReplicas.type` | Specifies the type of Kubernetes Service for PostgreSQL replicas | `LoadBalancer` |
| `exposeReplicas.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) for PostgreSQL replicas | `[]` |
| | | |
| `instances.name` | The name of the PostgreSQL instance | `instance1` |
| `instances.replicas` | The number of Replicas to create for the PostgreSQL instance | `3` |
| `instances.affinity.podAntiAffinity` | Pod anti-affinity, allows setting the standard Kubernetes affinity constraints of any complexity | `{}` |
| `instances.resources.requests.memory` | Kubernetes memory requests for a PostgreSQL instance | `""` |
| `instances.resources.requests.cpu` | Kubernetes CPU requests for a PostgreSQL instance | `""` |
| `instances.resources.limits.memory` | Kubernetes memory limits for a PostgreSQL instance | `""` |
| `instances.resources.limits.cpu` | Kubernetes CPU limits for a PostgreSQL instance | `""` |
| `instances.containers.replicaCertCopy.resources.limits.cpu` | Kubernetes CPU limits for replicaCertCopy instance | `200m` |
| `instances.containers.replicaCertCopy.resources.limits.memory` | Kubernetes memory limits for replicaCertCopy instance | `128Mi` |
| `instances.sidecars.name` | Name of the custom sidecar container for PostgreSQL Pods | `testcontainer` |
| `instances.sidecars.image` | Image for the custom sidecar container for PostgreSQL Pods | `mycontainer1:latest` |
| `instances.topologySpreadConstraints.maxSkew` | The degree to which Pods may be unevenly distributed under the Kubernetes Pod Topology Spread Constraints | `1` |
| `instances.topologySpreadConstraints.topologyKey` | The key of node labels for the Kubernetes Pod Topology Spread Constraints | `my-node-label` |
| `instances.topologySpreadConstraints.whenUnsatisfiable` | What to do with a Pod if it doesn't satisfy the Kubernetes Pod Topology Spread Constraints | `DoNotSchedule` |
| `instances.topologySpreadConstraints.labelSelector.matchLabels` | The Label selector for the Kubernetes Pod Topology Spread Constraints | `postgres-operator.crunchydata.com/instance-set: instance1` |
| `instances.tolerations.effect` | The Kubernetes Pod tolerations effect for the PostgreSQL instance | `NoSchedule` |
| `instances.tolerations.key` | The Kubernetes Pod tolerations key for the PostgreSQL instance | `role` |
| `instances.tolerations.operator` | The Kubernetes Pod tolerations operator for the PostgreSQL instance | `Equal` |
| `instances.tolerations.value` | The Kubernetes Pod tolerations value for the PostgreSQL instance | `connection-poolers` |
| `instances.priorityClassName` | The Kubernetes Pod priority class for PostgreSQL instance Pods | `high-priority` |
| `instances.securityContext` | The Kubernetes Pod security context for the PostgreSQL instance | `{}` |
| `instances.walVolumeClaimSpec.accessModes` | The Kubernetes PersistentVolumeClaim access modes for the PostgreSQL Write-ahead Log storage | `ReadWriteOnce` |
| `instances.walVolumeClaimSpec.storageClassName` | The Kubernetes storageClassName for the Write-ahead Log storage | `""` |
| `instances.walVolumeClaimSpec.resources.requests.storage` | The Kubernetes storage requests for the PostgreSQL Write-ahead Log use | `1Gi` |
| `instances.dataVolumeClaimSpec.accessModes` | The Kubernetes PersistentVolumeClaim access modes for the PostgreSQL data storage | `ReadWriteOnce` |
| `instances.dataVolumeClaimSpec.storageClassName` | The Kubernetes storageClassName for the PostgreSQL data storage | `""` |
| `instances.dataVolumeClaimSpec.resources.requests.storage` | The Kubernetes storage requests for the storage the PostgreSQL instance will use | `1Gi` |
| `instances.tablespaceVolumes.name` | Name for the custom [tablespace volume](https://docs.percona.com/percona-operator-for-postgresql/2.0/tablespaces.html) | `""` |
| `instances.tablespaceVolumes.dataVolumeClaimSpec.accessModes` | The Kubernetes PersistentVolumeClaim access modes for the tablespace volume | `{}` |
| `instances.tablespaceVolumes.dataVolumeClaimSpec.resources.requests.storage` | The Kubernetes storage requests for the tablespace volume | `""` |
| | | |
| `backups.trackLatestRestorableTime` | Enable background worker to track commit timestamps and set latest restorable time to latest successful backup | `true` |
| `backups.pgbackrest.metadata.labels` | Set labels for pgbackrest | `test-label:test` |
| `backups.pgbackrest.configuration` | Name of the Kubernetes Secret object with custom pgBackRest configuration, which will be added to the pgBackRest configuration generated by the Operator | `[]` |
| `backups.pgbackrest.containers.pgbackrest.resources.limits.cpu` | Kubernetes CPU limits for pgbackrest instance | `200m` |
| `backups.pgbackrest.containers.pgbackrest.resources.limits.memory` | Kubernetes memory limits for pgbackrest instance | `128Mi` |
| `backups.pgbackrest.containers.pgbackrestConfig.resources.limits.cpu` | Kubernetes CPU limits for pgbackrestConfig instance | `200m` |
| `backups.pgbackrest.containers.pgbackrestConfig.resources.limits.memory` | Kubernetes memory limits for pgbackrestConfig instance | `128Mi` |
| `backups.pgbackrest.jobs.priorityClassName` | The Kubernetes Pod priority class for pgBackRest jobs | `high-priority` |
| `backups.pgbackrest.jobs.resources.limits.cpu` | Kubernetes CPU limits for a pgBackRest job | `200m` |
| `backups.pgbackrest.jobs.resources.limits.memory` | Kubernetes memory limits for a pgBackRest job | `128Mi` |
| `backups.pgbackrest.jobs.tolerations.effect` | The Kubernetes Pod tolerations effect for a backup job | `NoSchedule` |
| `backups.pgbackrest.jobs.tolerations.key` | The Kubernetes Pod tolerations key for a backup job | `role` |
| `backups.pgbackrest.jobs.tolerations.operator` | The Kubernetes Pod tolerations operator for a backup job | `Equal` |
| `backups.pgbackrest.jobs.tolerations.value` | The Kubernetes Pod tolerations value for a backup job | `connection-poolers` |
| `backups.pgbackrest.jobs.securityContext` | The Kubernetes Pod security context for pgBackRest jobs | `{}` |
| `backups.pgbackrest.global` | Settings, which are to be included in the global section of the pgBackRest configuration generated by the Operator | `/pgbackrest/postgres-operator/hippo/repo1` |
| `backups.pgbackrest.repoHost.topologySpreadConstraints.maxSkew` | The degree to which Pods may be unevenly distributed under the Kubernetes Pod Topology Spread Constraints | `1` |
| `backups.pgbackrest.repoHost.topologySpreadConstraints.topologyKey` | The key of node labels for the Kubernetes Pod Topology Spread Constraints | `my-node-label` |
| `backups.pgbackrest.repoHost.topologySpreadConstraints.whenUnsatisfiable` | What to do with a Pod if it doesn't satisfy the Kubernetes Pod Topology Spread Constraints | `DoNotSchedule` |
| `backups.pgbackrest.repoHost.topologySpreadConstraints.labelSelector.matchLabels` | The Label selector for the Kubernetes Pod Topology Spread Constraints | `postgres-operator.crunchydata.com/instance-set: instance1` |
| `backups.pgbackrest.repoHost.priorityClassName` | The Kubernetes Pod priority class for pgBackRest repo | `high-priority` |
| `backups.pgbackrest.repoHost.affinity.podAntiAffinity` | Pod anti-affinity, allows setting the standard Kubernetes affinity constraints of any complexity | `{}` |
| `backups.pgbackrest.repoHost.tolerations.effect` | The Kubernetes Pod tolerations effect for pgBackRest repo | `NoSchedule` |
| `backups.pgbackrest.repoHost.tolerations.key` | The Kubernetes Pod tolerations key for pgBackRest repo | `role` |
| `backups.pgbackrest.repoHost.tolerations.operator` | The Kubernetes Pod tolerations operator for pgBackRest repo | `Equal` |
| `backups.pgbackrest.repoHost.tolerations.value` | The Kubernetes Pod tolerations value for pgBackRest repo | `connection-poolers` |
| `backups.pgbackrest.repoHost.securityContext` | The Kubernetes Pod security context for pgBackRest repo | `{}` |
| `backups.pgbackrest.manual.repoName` | Name of the pgBackRest repository for on-demand backups | `repo1` |
| `backups.pgbackrest.manual.options` | The on-demand backup command-line options which will be passed to pgBackRest for on-demand backups | `--type=full` |
| `backups.pgbackrest.repos.repo1.name` | Name of the pgBackRest repository for backups | `repo1` |
| `backups.pgbackrest.repos.repo1.schedules.full` | Scheduled time to make a full backup specified in the crontab format | `0 0 * * 6` |
| `backups.pgbackrest.repos.repo1.schedules.differential` | Scheduled time to make a differential backup specified in the crontab format | `0 0 * * 6` |
| `backups.pgbackrest.repos.repo1.schedules.incremental` | Scheduled time to make an incremental backup specified in the crontab format | `0 0 * * 6` |
| `backups.pgbackrest.repos.repo1.volume.volumeClaimSpec.accessModes` | The Kubernetes PersistentVolumeClaim access modes for the pgBackRest Storage | `ReadWriteOnce` |
| `backups.pgbackrest.repos.repo1.volume.volumeClaimSpec.storageClassName` | The Kubernetes storageClassName for the pgBackRest Storage | `""` |
| `backups.pgbackrest.repos.repo1.volume.volumeClaimSpec.resources.requests.storage` | The Kubernetes storage requests for the pgBackRest storage | `1Gi` |
| `backups.pgbackrest.repos.repo3.gcs.bucket` | The Google Cloud Storage bucket | `my-bucket` |
| `backups.pgbackrest.repos.repo4.azure.container` | Name of the Azure Blob Storage container for backups | `my-container` |
| `backups.pgbackrest.restore.tolerations.effect` | The Kubernetes Pod tolerations effect for the backup restore job | `NoSchedule` |
| `backups.pgbackrest.restore.tolerations.key` | The Kubernetes Pod tolerations key for the backup restore job | `role` |
| `backups.pgbackrest.restore.tolerations.operator` | The Kubernetes Pod tolerations operator for the backup restore job | `Equal` |
| `backups.pgbackrest.restore.tolerations.value` | The Kubernetes Pod tolerations value for the backup restore job | `connection-poolers` |
| `backups.restore.enabled` | Enables or disables restoring a previously made backup | `false` |
| `backups.restore.repoName` | Name of the pgBackRest repository that contains the backup to be restored | `repo1` |
| `backups.restore.options` | The pgBackRest command-line options for the pgBackRest restore command | `--type=time` |
| `backups.pgbackrest.image` | Set this variable if you need to use a custom pgBackRest image | `percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbackrest2.53-1` |
| `backups.repos.repo2.s3.bucket` | Storage bucket | `` |
| `backups.repos.repo2.s3.region` | S3-compatible storage name | `` |
| `backups.repos.repo2.s3.endpoint` | S3-compatible storage endpoint | `` |
| | | |
| `proxy.pgBouncer.expose.annotations` | The Kubernetes annotations metadata for pgBouncer | `pg-cluster-annot: cluster1` |
| `proxy.pgBouncer.expose.labels` | Set labels for the pgBouncer Service | `pg-cluster-label: cluster1` |
| `proxy.pgBouncer.expose.type` | K8S service type for the pgbouncer deployment | `ClusterIP` |
| `proxy.pgBouncer.expose.loadBalancerSourceRanges` | The range of client IP addresses from which the load balancer should be reachable (if not set, there are no limitations) | `[]` |
| `proxy.pgBouncer.sidecars.image` | Image for the custom sidecar container for pgBouncer Pods | `mycontainer1:latest` |
| `proxy.pgBouncer.sidecars.name` | Name of the custom sidecar container for pgBouncer Pods | `testcontainer` |
| `proxy.pgBouncer.exposeSuperusers` | Allow superusers to connect via pgBouncer | `false` |
| `proxy.pgBouncer.config.global` | Custom configuration options for pgBouncer. | `pool_mode: transaction` |
| `proxy.pgBouncer.topologySpreadConstraints.maxSkew` | The degree to which Pods may be unevenly distributed under the Kubernetes Pod Topology Spread Constraints | `1` |
| `proxy.pgBouncer.topologySpreadConstraints.topologyKey` | The key of node labels for the Kubernetes Pod Topology Spread Constraints | `my-node-label` |
| `proxy.pgBouncer.topologySpreadConstraints.whenUnsatisfiable` | What to do with a Pod if it doesn't satisfy the Kubernetes Pod Topology Spread Constraints | `DoNotSchedule` |
| `proxy.pgBouncer.topologySpreadConstraints.labelSelector.matchLabels` | The Label selector for the Kubernetes Pod Topology Spread Constraints | `postgres-operator.crunchydata.com/instance-set: instance1` |
| `proxy.pgBouncer.tolerations.effect` | The Kubernetes Pod tolerations effect for the PostgreSQL instance | `NoSchedule` |
| `proxy.pgBouncer.tolerations.key` | The Kubernetes Pod tolerations key for the PostgreSQL instance | `role` |
| `proxy.pgBouncer.tolerations.operator` | The Kubernetes Pod tolerations operator for the PostgreSQL instance | `Equal` |
| `proxy.pgBouncer.tolerations.value` | The Kubernetes Pod tolerations value for the PostgreSQL instance | `connection-poolers` |
| `proxy.pgBouncer.customTLSSecret.name` | Custom external TLS secret name | `keycloakdb-pgbouncer.tls` |
| `proxy.pgBouncer.securityContext` | The Kubernetes Pod security context for the pgBouncer instance | `{}` |
| `proxy.pgBouncer.affinity.podAntiAffinity` | Pod anti-affinity, allows setting the standard Kubernetes affinity constraints of any complexity | `{}` |
| `proxy.pgBouncer.image` | Set this variable if you need to use a custom pgbouncer image | `percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbouncer1.23.1` |
| `proxy.pgBouncer.replicas` | The number of pgbouncer instances | `3` |
| `proxy.pgBouncer.resources.requests.cpu` | Container resource request for CPU | `1` |
| `proxy.pgBouncer.resources.requests.memory` | Container resource request for RAM | `128Mi` |
| `proxy.pgBouncer.resources.limits.cpu` | Container resource limits for CPU | `2` |
| `proxy.pgBouncer.resources.limits.memory` | Container resource limits for RAM | `512Mi` |
| `proxy.pgBouncer.containers.pgbouncerConfig.resources.limits.cpu` | Kubernetes CPU limits for pgbouncerConfig instance | `200m` |
| `proxy.pgBouncer.containers.pgbouncerConfig.resources.limits.memory` | Kubernetes memory limits for pgbouncerConfig instance | `128Mi` |
| |
| `pmm.enabled` | Enable integration with [Percona Monitoring and Management software](https://www.percona.com/blog/2020/07/23/using-percona-kubernetes-operators-with-percona-monitoring-and-management/) | `false` |
| `pmm.image.repository` | PMM Container image repository | `percona/pmm-client` |
| `pmm.image.tag` | PMM Container image tag | `2.43.1` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
| `pmm.querySource` | PMM querySource, `pgstatmonitor` or `pgstatstatements` | `pgstatmonitor` |
| `pmm.resources.requests.memory` | Container resource request for RAM | `200M` |
| `pmm.resources.requests.cpu` | Container resource request for CPU | `500m` |
| |
| `patroni.syncPeriodSeconds` | The interval for refreshing the leader lock and applying dynamicConfiguration | `10` |
| `patroni.leaderLeaseDurationSeconds` | TTL of the cluster leader lock | `30` |
| `patroni.dynamicConfiguration` | Custom PostgreSQL configuration options. Please note that configuration changes are automatically applied to the running instances without validation, so having an invalid config can make the cluster unavailable | `{}` |
| `patroni.dynamicConfiguration.postgresql.parameters` | Custom PostgreSQL configuration options | `{}` |
| `patroni.dynamicConfiguration.postgresql.pg_hba` | PostgreSQL Host-Based Authentication section | `{}` |
| `patroni.switchover.enabled` | Enables or disables manual change of the cluster primary instance | `""` |
| `patroni.switchover.targetInstance` | The name of the Pod that should be set as the new primary. When not specified, the new primary will be selected randomly | `""` |
| |
| `extensions.image` | Image for the custom PostgreSQL extension loader sidecar container | `""` |
| `extensions.imagePullPolicy` | Policy for the custom extension sidecar container | `Always` |
| `extensions.storage.type` | The cloud storage type used for backups. Only s3 type is currently supported. | `""` |
| `extensions.storage.bucket` | The Amazon S3 bucket name for prepackaged PostgreSQL custom extensions | `""` |
| `extensions.storage.region` | The AWS region to use | `""` |
| `extensions.storage.endpoint` | The S3 endpoint to use. | `""` |
| `extensions.storage.secret.name` | The Kubernetes secret for the custom extensions storage. It should contain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys | `""` |
| `extensions.builtin` | The key-value pairs which enable or disable Percona Distribution for PostgreSQL builtin extensions | `{}` |
| `extensions.custom` | Array of name and versions for each PostgreSQL custom extension | `[]` |
| |
| `secrets.name` | Database secrets object name. Object will be autogenerated if the name is not explicitly specified | `<cluster_name>-users` |
| `secrets.primaryuser` | Primary user password (used for replication only) | `autogenerated by operator` |
| `secrets.postgres` | `postgres` user password (superuser, not accessible via pgBouncer) | `autogenerated by operator` |
| `secrets.pgbouncer` | pgbouncer user password | `autogenerated by operator` |
| `secrets.<default_user>` | Default user password | `autogenerated by operator` |
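The dotted parameter names in the table above map directly to nested keys in a custom values file. As an illustrative sketch (the values shown are examples, not recommendations), the following fragment scales pgBouncer down and triggers a manual switchover:

```yaml
# values-override.yaml -- illustrative sketch; pass it with `helm upgrade -f`
proxy:
  pgBouncer:
    replicas: 1
patroni:
  switchover:
    enabled: "true"
    targetInstance: ""   # empty: the new primary is selected randomly
```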
# Parameters for Backup
| Parameter | Description |
| --------------- | -------------------------------------------------- |
| `enabled` | Specifies whether the backup is enabled |
| `annotations` | Annotations for the resource |
| `name` | Name of the backup resource |
| `labels` | Labels for the resource |
| `pgCluster` | Name of the PostgreSQL cluster to backup |
| `repoName` | Name of the storage configuration for the backup |
| `options` | Additional options for the backup operation |
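As a sketch, an on-demand backup configured through these parameters might look like the following values fragment (the resource and cluster names are illustrative):

```yaml
backup:
  enabled: true
  name: backup1
  pgCluster: postgres-pg-db
  repoName: repo1
  options:
    - --type=full
```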
# Parameters for Restore
| Parameter | Description |
| --------------- | -------------------------------------------------- |
| `enabled` | Specifies whether the restore is enabled |
| `annotations` | Annotations for the resource |
| `name` | Name of the restore resource |
| `labels` | Labels for the resource |
| `pgCluster` | Name of the PostgreSQL cluster to restore |
| `repoName` | Name of the backup repository to restore from |
| `options` | Additional options for the restore operation |
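Similarly, a point-in-time restore could be sketched as follows (names and target timestamp are illustrative):

```yaml
restore:
  enabled: true
  name: restore1
  pgCluster: postgres-pg-db
  repoName: repo1
  options:
    - --type=time
    - --target="2024-12-10 10:35:34+00"
```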
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
## Examples
### Deploy for tests - single PostgreSQL node and automated PVCs deletion
Such a setup is good for testing, as it does not require a lot of compute power
and performs an automated cleanup of the Persistent Volume Claims (PVCs).
It also deploys just one pgBouncer node instead of three.
```bash
$ helm install my-test percona/pg-db \
--set instances[0].name=test \
--set instances[0].replicas=1 \
--set instances[0].dataVolumeClaimSpec.resources.requests.storage=1Gi \
--set proxy.pgBouncer.replicas=1 \
--set finalizers={'percona\.com\/delete-pvc,percona\.com\/delete-ssl'}
```
### Expose pgBouncer with a Load Balancer
Expose the cluster's pgBouncer with a LoadBalancer:
```bash
$ helm install my-test percona/pg-db \
--set proxy.pgBouncer.expose.type=LoadBalancer
```
### Add a custom user and a database
The following command is going to deploy the cluster with the user `test`
and give it access to the database `mytest`:
```bash
$ helm install my-test percona/pg-db \
--set users[0].name=test \
--set users[0].databases={mytest}
```
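### Store backups on S3-compatible storage
Backups can additionally be sent to an S3-compatible repository. The bucket,
endpoint, and region below are placeholders to replace with your own; the
credentials come from the secret referenced in `backups.pgbackrest.configuration`:
```yaml
backups:
  pgbackrest:
    repos:
      - name: repo2
        s3:
          bucket: "<YOUR_AWS_S3_BUCKET_NAME>"
          endpoint: "<YOUR_AWS_S3_ENDPOINT>"
          region: "<YOUR_AWS_S3_REGION>"
```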


@ -0,0 +1,21 @@
{{- if .Values.backup.enabled }}
apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: {{ .Values.backup.name }}
  {{- if .Values.backup.annotations }}
  annotations:
{{ .Values.backup.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.backup.labels }}
  labels:
{{ .Values.backup.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  pgCluster: {{ .Values.backup.pgCluster }}
  repoName: {{ .Values.backup.repoName }}
  options:
  {{- range .Values.backup.options }}
  - {{ . }}
  {{- end }}
{{- end }}
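With the `backup` values shipped in this chart (name `backup1`, cluster `postgres-pg-db`, repo `repo4`), the template above renders approximately this manifest:

```yaml
apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: backup1
  annotations:
    description: "test"
  labels:
    app: postgres-backup
    environment: testing
spec:
  pgCluster: postgres-pg-db
  repoName: repo4
  options:
  - --type=full
```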


@ -0,0 +1,21 @@
{{- if .Values.restore.enabled }}
apiVersion: pgv2.percona.com/v2
kind: PerconaPGRestore
metadata:
  name: {{ .Values.restore.name }}
  {{- if .Values.restore.annotations }}
  annotations:
{{ .Values.restore.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.restore.labels }}
  labels:
{{ .Values.restore.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  pgCluster: {{ .Values.restore.pgCluster }}
  repoName: {{ .Values.restore.repoName }}
  options:
  {{- range .Values.restore.options }}
  - {{ . }}
  {{- end }}
{{- end }}


@ -0,0 +1,608 @@
pg-operator:
enabled: true
# Default values for pg-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
operatorImageRepository: percona/percona-postgresql-operator
imagePullPolicy: IfNotPresent
image: ""
# set if you want to specify a namespace to watch
# defaults to `.Release.namespace` if left blank
# watchNamespace:
# set if operator should be deployed in cluster wide mode. defaults to false
watchAllNamespaces: false
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you don't want to specify resources, comment the following
# lines and add the curly braces after 'resources:'.
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 100m
memory: 20Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
# disableTelemetry: according to
# https://docs.percona.com/percona-operator-for-postgresql/2.0/telemetry.html
# this is how you can disable telemetry collection
# default is false which means telemetry will be collected
disableTelemetry: false
logStructured: false
logLevel: "INFO"
pg-db:
enabled: true
# Default values for pg-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
finalizers:
# Set this if you want that operator deletes the PVCs on cluster deletion
# - percona.com/delete-pvc
# Set this if you want that operator deletes the ssl objects on cluster deletion
# - percona.com/delete-ssl
crVersion: 2.5.0
repository: percona/percona-postgresql-operator
image: percona/percona-postgresql-operator:2.5.0-ppg16.4-postgres
imagePullPolicy: Always
postgresVersion: 16
# port: 5432
pause: false
unmanaged: false
standby:
enabled: false
# host: "<primary-ip>"
# port: "<primary-port>"
# repoName: repo1
# customRootCATLSSecret:
# name: cluster1-ca-cert
# items:
# - key: "tls.crt"
# path: "root.crt"
# - key: "tls.key"
# path: "root.key"
customTLSSecret:
name: ""
customReplicationTLSSecret:
name: ""
# openshift: true
# users:
# - name: rhino
# databases:
# - zoo
# options: "SUPERUSER"
# password:
# type: ASCII
# secretName: "rhino-credentials"
# databaseInitSQL:
# key: init.sql
# name: cluster1-init-sql
# dataSource:
# postgresCluster:
# clusterName: cluster1
# repoName: repo1
# options:
# - --type=time
# - --target="2021-06-09 14:15:11-04"
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# pgbackrest:
# stanza: db
# configuration:
# - secret:
# name: pgo-s3-creds
# global:
# repo1-path: /pgbackrest/postgres-operator/hippo/repo1
# options:
# - --type=time
# - --target="2021-06-09 14:15:11-04"
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# repo:
# name: repo1
# s3:
# bucket: "my-bucket"
# endpoint: "s3.ca-central-1.amazonaws.com"
# region: "ca-central-1"
# gcs:
# bucket: "my-bucket"
# azure:
# container: "my-container"
# volumes:
# pgDataVolume:
# pvcName: cluster1
# directory: cluster1
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# annotations:
# test-annotation: value
# labels:
# test-label: value
# pgWALVolume:
# pvcName: cluster1-pvc-name
# directory: some-dir
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# annotations:
# test-annotation: value
# labels:
# test-label: value
# pgBackRestVolume:
# pvcName: cluster1-pgbr-repo
# directory: cluster1-backrest-shared-repo
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# annotations:
# test-annotation: value
# labels:
# test-label: value
# expose:
# annotations:
# my-annotation: value1
# labels:
# my-label: value2
# type: LoadBalancer
# loadBalancerSourceRanges:
# - 10.0.0.0/8
# exposeReplicas:
# annotations:
# my-annotation: value1
# labels:
# my-label: value2
# type: LoadBalancer
# loadBalancerSourceRanges:
# - 10.0.0.0/8
instances:
- name: instance1
replicas: 3
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/data: postgres
topologyKey: kubernetes.io/hostname
# resources:
# requests:
# cpu: 2.0
# memory: 4Gi
# limits:
# cpu: 2.0
# memory: 4Gi
# containers:
# replicaCertCopy:
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
#
# sidecars:
# - name: testcontainer
# image: mycontainer1:latest
# - name: testcontainer2
# image: mycontainer1:latest
#
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: my-node-label
# whenUnsatisfiable: DoNotSchedule
# labelSelector:
# matchLabels:
# postgres-operator.crunchydata.com/instance-set: instance1
#
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
#
# priorityClassName: high-priority
#
# securityContext:
# fsGroup: 1001
# runAsUser: 1001
# runAsNonRoot: true
# fsGroupChangePolicy: "OnRootMismatch"
# runAsGroup: 1001
# seLinuxOptions:
# type: spc_t
# level: s0:c123,c456
# seccompProfile:
# type: Localhost
# localhostProfile: localhost/profile.json
# supplementalGroups:
# - 1001
# sysctls:
# - name: net.ipv4.tcp_keepalive_time
# value: "600"
# - name: net.ipv4.tcp_keepalive_intvl
# value: "60"
#
# walVolumeClaimSpec:
# storageClassName: standard
# accessModes:
# - ReadWriteOnce
# resources:
# requests:
# storage: 1Gi
#
dataVolumeClaimSpec:
# storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
#
# tablespaceVolumes:
# - name: user
# dataVolumeClaimSpec:
# accessModes:
# - 'ReadWriteOnce'
# resources:
# requests:
# storage: 1Gi
proxy:
pgBouncer:
replicas: 3
image: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbouncer1.23.1
# exposeSuperusers: true
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
# containers:
# pgbouncerConfig:
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
# expose:
# annotations:
# my-annotation: value1
# labels:
# my-label: value2
# type: LoadBalancer
# loadBalancerSourceRanges:
# - 10.0.0.0/8
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/role: pgbouncer
topologyKey: kubernetes.io/hostname
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
#
# securityContext:
# fsGroup: 1001
# runAsUser: 1001
# runAsNonRoot: true
# fsGroupChangePolicy: "OnRootMismatch"
# runAsGroup: 1001
# seLinuxOptions:
# type: spc_t
# level: s0:c123,c456
# seccompProfile:
# type: Localhost
# localhostProfile: localhost/profile.json
# supplementalGroups:
# - 1001
# sysctls:
# - name: net.ipv4.tcp_keepalive_time
# value: "600"
# - name: net.ipv4.tcp_keepalive_intvl
# value: "60"
#
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: my-node-label
# whenUnsatisfiable: ScheduleAnyway
# labelSelector:
# matchLabels:
# postgres-operator.crunchydata.com/role: pgbouncer
#
# sidecars:
# - name: bouncertestcontainer1
# image: mycontainer1:latest
#
# customTLSSecret:
# name: keycloakdb-pgbouncer.tls
#
# config:
# global:
# pool_mode: transaction
backups:
trackLatestRestorableTime: true
pgbackrest:
# metadata:
# labels:
image: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbackrest2.53-1
# containers:
# pgbackrest:
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
# pgbackrestConfig:
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
#
configuration:
- secret:
name: cluster1-pgbackrest-secrets
# jobs:
# priorityClassName: high-priority
# resources:
# limits:
# cpu: 200m
# memory: 128Mi
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
#
# securityContext:
# fsGroup: 1001
# runAsUser: 1001
# runAsNonRoot: true
# fsGroupChangePolicy: "OnRootMismatch"
# runAsGroup: 1001
# seLinuxOptions:
# type: spc_t
# level: s0:c123,c456
# seccompProfile:
# type: Localhost
# localhostProfile: localhost/profile.json
# supplementalGroups:
# - 1001
# sysctls:
# - name: net.ipv4.tcp_keepalive_time
# value: "600"
# - name: net.ipv4.tcp_keepalive_intvl
# value: "60"
#
global:
# repo1-retention-full: "14"
# repo1-retention-full-type: time
# repo1-path: /pgbackrest/postgres-operator/cluster1/repo1
# repo1-cipher-type: aes-256-cbc
# repo1-s3-uri-style: path
# repo2-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo2
# repo3-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo3
repo4-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo4
repoHost:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/data: pgbackrest
topologyKey: kubernetes.io/hostname
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
# priorityClassName: high-priority
#
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: my-node-label
# whenUnsatisfiable: ScheduleAnyway
# labelSelector:
# matchLabels:
# postgres-operator.crunchydata.com/pgbackrest: ""
#
# securityContext:
# fsGroup: 1001
# runAsUser: 1001
# runAsNonRoot: true
# fsGroupChangePolicy: "OnRootMismatch"
# runAsGroup: 1001
# seLinuxOptions:
# type: spc_t
# level: s0:c123,c456
# seccompProfile:
# type: Localhost
# localhostProfile: localhost/profile.json
# supplementalGroups:
# - 1001
# sysctls:
# - name: net.ipv4.tcp_keepalive_time
# value: "600"
# - name: net.ipv4.tcp_keepalive_intvl
# value: "60"
manual:
repoName: repo1
options:
- --type=full
repos:
- name: repo1
schedules:
full: "0 0 * * 6"
# differential: "0 1 * * 1-6"
# incremental: "0 1 * * 1-6"
volume:
volumeClaimSpec:
# storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
# - name: repo2
# s3:
# bucket: "<YOUR_AWS_S3_BUCKET_NAME>"
# endpoint: "<YOUR_AWS_S3_ENDPOINT>"
# region: "<YOUR_AWS_S3_REGION>"
# - name: repo3
# gcs:
# bucket: "<YOUR_GCS_BUCKET_NAME>"
- name: repo4
azure:
container: "percona-container"
#
# restore:
# repoName: repo1
# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers
pmm:
enabled: false
image:
repository: percona/pmm-client
tag: 2.43.1
# imagePullPolicy: IfNotPresent
secret: cluster1-pmm-secret
serverHost: monitoring-service
querySource: pgstatmonitor
# resources:
# requests:
# memory: 200M
# cpu: 500m
# patroni:
# # Some values of the Liveness/Readiness probes of the patroni container are calculated using syncPeriodSeconds by the following formulas:
# # - timeoutSeconds: syncPeriodSeconds / 2;
# # - periodSeconds: syncPeriodSeconds;
# # - failureThreshold: leaderLeaseDurationSeconds / syncPeriodSeconds.
# syncPeriodSeconds: 10
# leaderLeaseDurationSeconds: 30
# dynamicConfiguration:
# postgresql:
# parameters:
# max_parallel_workers: 2
# max_worker_processes: 2
# shared_buffers: 1GB
# work_mem: 2MB
# pg_hba:
# - host all mytest 123.123.123.123/32 reject
# switchover:
# enabled: "true"
# targetInstance: ""
# extensions:
# image: percona/percona-postgresql-operator:2.5.0
# imagePullPolicy: Always
# storage:
# type: s3
# bucket: pg-extensions
# region: eu-central-1
# endpoint: s3.eu-central-1.amazonaws.com
# secret:
# name: cluster1-extensions-secret
# builtin:
# pg_stat_monitor: true
# pg_audit: true
# custom:
# - name: pg_cron
# version: 1.6.1
secrets:
name:
# replication user password
primaryuser:
# superuser password
postgres:
# pgbouncer user password
pgbouncer:
# pguser user password
pguser:
backup:
enabled: true
annotations:
description: "test"
name: backup1
labels:
app: postgres-backup
environment: testing
pgCluster: postgres-pg-db
repoName: repo4
options:
- --type=full
restore:
enabled: true
annotations:
description: "test"
name: restore1
labels:
app: postgres-restore
environment: testing
pgCluster: postgres-pg-db
repoName: repo4
options:
- --type=time
- --target="2024-12-10 10:35:34+00"

charts/worker/Chart.yaml Normal file

@ -0,0 +1,26 @@
---
apiVersion: v2
description: A deployment helm chart which will be used to deploy any type of stateless application
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: worker
sources:
- https://github.com/ot-container-kit/helm-charts
dependencies:
- name: base
version: 0.1.0
repository: https://ot-container-kit.github.io/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
- microservice
- stateless
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

charts/worker/README.md Normal file

@ -0,0 +1,39 @@
# Worker Deployment Helm Chart
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A deployment helm chart which will be used to deploy any type of stateless application
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|------------------------------|----------------------------------------|
| iamabhishek-dubey | <abhishek.dubey@opstree.com> | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Requirements
| Repository | Name | Version |
|------------------------------------------------|------|---------|
| https://ot-container-kit.github.io/helm-charts | base | 0.1.0 |
## Values
| Key | Type | Default | Description |
|------------------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| autoscaling | object | `{"enabled":false,"maxReplicas":50,"minReplicas":10,"targetCPUUtilizationPercentage":65,"targetMemoryUtilizationPercentage":65}` | Autoscaling properties with target CPU and Memory details |
| base | object | `{"image":{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}}` | Base block to define the inputs for image, secret and configmap env |
| base.image | object | `{"command":[],"pullPolicy":"IfNotPresent","pullSecrets":"","repository":"nginx","tag":"latest"}` | Image block with all image details |
| base.image.command | list | `[]` | Additional command arguments which needs to be passed |
| base.image.pullPolicy | string | `"IfNotPresent"` | Default image pull policy |
| base.image.pullSecrets | string | `""` | Image pull secrets for private repository authentication |
| base.image.repository | string | `"nginx"` | Default image repository |
| base.image.tag | string | `"latest"` | Default image tag |
| replicaCount | int | `2` | Number of replicas for deployment, it will be overridden in case autoscaling is enabled |
| resources | object | `{}` | Kubernetes resource in terms of requests and limits |
| volumes | string | `nil` | Kubernetes volumes definition which needs to be mounted |
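A minimal override for this chart might look like the following sketch (the image tag is illustrative; all keys come from the table above):

```yaml
base:
  image:
    repository: nginx
    tag: "1.27"        # illustrative tag; defaults to `latest`
    pullPolicy: IfNotPresent
    pullSecrets: ""    # set to a pull-secret name for private registries
replicaCount: 2
resources: {}
```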


@ -0,0 +1,4 @@
{{ include "configmap" . }}
---
{{ include "serviceAccount" . }}
---


@ -0,0 +1,18 @@
{{- if .Values.volumes }}
{{- if .Values.volumes.configMaps }}
{{- range $cm := .Values.volumes.configMaps }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $cm.name }}
  labels:
    {{- include "base.labels" $ | nindent 4 }}
data:
  {{- range $filename, $content := $cm.data }}
  {{ $filename }}: |-
{{ $content | toString | indent 4 }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}
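The template above iterates over `volumes.configMaps`; a matching values fragment (mirroring the commented example in `values.yaml`) is:

```yaml
volumes:
  configMaps:
    - name: web
      mountPath: /test
      data:
        test.txt: |-
          Dummy text
```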


@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "base.fullname" . }}
  labels:
    {{- include "base.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  {{- with .Values.replicaCount }}
  replicas: {{ . }}
  {{- end }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "base.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "base.selectorLabels" . | nindent 8 }}
    spec:
      {{- if .Values.base.image.pullSecrets }}
      imagePullSecrets:
        - name: {{ .Values.base.image.pullSecrets }}
      {{- end }}
      serviceAccountName: {{ include "base.serviceAccountName" . }}
      terminationGracePeriodSeconds: 120
      containers:
        - name: {{ include "base.fullname" . }}
          image: "{{ .Values.base.image.repository }}:{{ .Values.base.image.tag }}"
          imagePullPolicy: {{ .Values.base.image.pullPolicy }}
          {{- if .Values.base.image.command }}
          command:
            {{- toYaml .Values.base.image.command | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- /* open envFrom only when at least one source exists, so a
                 secret-only configuration still renders valid YAML */}}
          {{- if or .Values.base.config .Values.base.secret }}
          envFrom:
            {{- if .Values.base.config }}
            - configMapRef:
                name: {{ include "base.fullname" . }}
            {{- end }}
            {{- if .Values.base.secret }}
            - secretRef:
                name: {{ include "base.fullname" . }}
            {{- end }}
          {{- end }}
          {{- if .Values.volumes }}
          volumeMounts:
            {{- if .Values.volumes.configMaps }}
            {{- range $conf := .Values.volumes.configMaps }}
            - mountPath: {{ $conf.mountPath }}
              name: {{ $conf.name }}-volume
            {{- end }}
            {{- end }}
          {{- end }}
      {{- if .Values.volumes }}
      volumes:
        {{- if .Values.volumes.configMaps }}
        {{- range $conf := .Values.volumes.configMaps }}
        - name: {{ $conf.name }}-volume
          configMap:
            name: {{ $conf.name }}
        {{- end }}
        {{- end }}
      {{- end }}


@ -0,0 +1,33 @@
{{- if .Values.autoscaling.enabled }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "base.fullname" . }}
  labels:
    {{- include "base.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "base.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
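The HPA is created only when `autoscaling.enabled` is true, and each metric block renders independently, so either threshold can be dropped. Enabling it with the chart's default thresholds:

```yaml
autoscaling:
  enabled: true
  minReplicas: 10
  maxReplicas: 50
  targetCPUUtilizationPercentage: 65
  targetMemoryUtilizationPercentage: 65
```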


@ -0,0 +1,12 @@
{{- if .Values.base.secret -}}
{{- $top := . -}}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "base.fullname" . }}
  labels:
    {{- include "base.labels" . | nindent 4 }}
stringData:
  {{- toYaml .Values.base.secret | nindent 2 -}}
{{- end -}}
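The Secret is rendered only when `base.secret` is set; its keys go into `stringData` verbatim, mirroring the commented example in `values.yaml`:

```yaml
base:
  secret:
    FOO_SECRET: BAR   # exposed to the pods as an environment variable via envFrom
```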

charts/worker/values.yaml Normal file

@ -0,0 +1,43 @@
# -- Base block to define the inputs for image, secret and configmap env
base:
  # -- Image block with all image details
  image:
    # -- Default image pull policy
    pullPolicy: "IfNotPresent"
    # -- Additional command arguments which needs to be passed
    command: []
    # -- Default image repository
    repository: nginx
    # -- Default image tag
    tag: latest
    # -- Image pull secrets for private repository authentication
    pullSecrets: ""
  # secret:
  #   FOO_SECRET: BAR
  # config:
  #   FOO_CONFIG: BAR
# -- Autoscaling properties with target CPU and Memory details
autoscaling:
  enabled: false
  targetCPUUtilizationPercentage: 65
  targetMemoryUtilizationPercentage: 65
  minReplicas: 10
  maxReplicas: 50
# -- Kubernetes resource in terms of requests and limits
resources: {}
# -- Number of replicas for deployment, it will be overridden in case autoscaling is enabled
replicaCount: 2
# -- Kubernetes volumes definition which needs to be mounted
volumes:
# -- List of configmaps with mount path and data
# configMaps:
#   - name: web
#     mountPath: /test
#     data:
#       test.txt: |-
#         Dummy text