Compare commits

...

269 Commits

Author SHA1 Message Date
sharvarikhamkar1304 fe8cec4075
feat : Added helm chart for ingress management (#285)
* Add ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Move ingress-management chart under charts directory

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Create README.md

* Update README.md

* Update README.md

* Update README.md

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix: Update Chart.yaml metadata with repo and maintainer info

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Fix maintainer name

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* update

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Delete charts/ingress-management/etc directory

* Delete charts/ingress-management/LICENSE

* Update README.md

* Update README.md

* fix: update helm chart for ingress-management

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Fix: update values.yaml

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update httproute.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

* Update values.yaml

---------

Signed-off-by: sharvarikhamkar1304 <sharvari.khamkar@mygurukulam.co>
2025-08-05 13:41:19 +05:30
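The repeated httproute.yaml updates above suggest the ingress-management chart routes traffic through the Kubernetes Gateway API rather than a classic Ingress. A minimal sketch of a values-driven HTTPRoute template under that assumption (all value keys hypothetical):

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: {{ .Release.Name }}-route
    spec:
      parentRefs:
        - name: {{ .Values.gateway.name }}          # hypothetical key
          namespace: {{ .Values.gateway.namespace }}
      hostnames:
        - {{ .Values.host | quote }}
      rules:
        - backendRefs:
            - name: {{ .Values.service.name }}
              port: {{ .Values.service.port }}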
Abhishek Dubey 22a3df98c8
[COE][Add] Added helm chart for worker based deployment (#281)
* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Added helm chart for worker based deployment

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-07-04 00:52:37 +05:30
Prashantdev780 9a6b440441
Helm fix for liveliness probe and readiness probe (#279) 2025-06-30 20:07:40 +05:30
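"Liveliness probe" above means a Kubernetes livenessProbe. A common way such a fix exposes both probes through chart values, sketched here with assumed keys and endpoints rather than the chart's actual ones:

    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical endpoint
        port: http
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready            # hypothetical endpoint
        port: http
      initialDelaySeconds: 5
      periodSeconds: 5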
Sandeep Rawat 6306426bed
Merge pull request #278 from OT-CONTAINER-KIT/helm_fix
Fixed secrets block in deployment.yml of microservice
2025-06-30 18:34:38 +05:30
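A typical shape for the fixed secrets block in a microservice deployment template is an envFrom reference that injects every key of a named Secret; a hedged sketch, not necessarily this chart's exact structure:

    {{- if .Values.secrets.enabled }}          # hypothetical flag
    envFrom:
      - secretRef:
          name: {{ .Values.secrets.name }}     # hypothetical key
    {{- end }}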
Prashantdev780 ff645ddecd
Update Chart.yaml 2025-06-30 18:32:37 +05:30
Prashant Sharma 3619d89003 Fixed secrets block in deployment.yml of microservice 2025-06-30 17:13:05 +05:30
Abhishek Dubey 7706b2da4f
Added helm chart for web service deployment (#271)
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-02-10 16:51:35 +05:30
hiteshmakol1 eaa6fc3aa6
Karpenter 0.3.0 Modifications (#270)
* Added new code for Karpenter 0.3.0 including new chart version, interruptionQueue parameter, modified README, modified template yaml

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>

* Fixed Linting for Chart.yaml

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Hitesh Makol <hitesh.makol@opstree.com>
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-14 12:08:56 +05:30
Abhishek Dubey d03ba9f362
Added base helm chart for k8s (#269)
Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-13 13:10:37 +05:30
Abhishek Dubey 57fbc499a4
Fixed CI steps for testing chart (#268)
* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

* Fixed CI steps for testing chart

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2025-01-13 12:31:44 +05:30
hiteshmakol1 d50ec64e5c
Karpenter 0.2.0 Modifications (#266)
* Added Comment in Chart, Changed example.yaml file , Added comments in values file

* Incorporated Review Comments
2025-01-08 11:55:00 +05:30
Abhishek Dubey 92971b2ab9
Update release.yaml 2025-01-07 15:44:49 +05:30
Abhishek Dubey d491b314b2
Update release.yaml 2025-01-07 15:42:45 +05:30
Abhishek Dubey c8fac9acd1
Update ct.yaml 2025-01-07 15:40:48 +05:30
hiteshmakol1 021f437068
Enhanced README for the Karpenter Helm chart (#264)
* Modifie README for karpenter helm chart

* Incorporated Review Comments
2025-01-07 15:12:15 +05:30
hiteshmakol1 44318da8ef
Addition of Readme File for Helm Chart (#263)
* ADDED README FILE

* Removed Extra nodepool yaml file, fixed spacing

* Added example field and comments in values.yaml

* Add comments in ReadMe File

* Modifed README file

* Added example in README file

* Added Example folder and modified README file
2025-01-02 15:02:52 +05:30
hiteshmakol1 905122a30a
Added fix for karpenter (#261)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments

* Added nodePool YAML template , modified values yaml

* Modified nodePool and values yaml files

* Added comments in values.yaml

---------

Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2024-12-31 13:24:34 +05:30
hiteshmakol1 088280e7c8
Added support for karpenter helm (#260)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments

* Added nodePool YAML template , modified values yaml

* Added functionality to loop over different node pools

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>

---------

Signed-off-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
Co-authored-by: Abhishek Dubey <abhishekbhardwaj510@gmail.com>
2024-12-30 23:42:29 +05:30
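The "loop over different node pools" bullet above maps naturally onto a Helm range block. A sketch assuming a map under .Values.nodePools and the karpenter.sh/v1beta1 NodePool API (both assumptions; the wrapper chart's real layout and API version may differ):

    {{- range $name, $pool := .Values.nodePools }}
    ---
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: {{ $name }}
    spec:
      limits:
        cpu: {{ $pool.cpuLimit }}        # hypothetical field
      template:
        spec:
          requirements:
            {{- toYaml $pool.requirements | nindent 8 }}
    {{- end }}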
hiteshmakol1 b84aaf41ee
Karpenter Helm Chart (#256)
* Karpenter Helm Chart - includes prerequisites - IAM_Role, Tagging, AWS_AUth as well

* Update values.yaml

Updated Values.yaml

* Added Dependency Chart, updated values.yaml

* Added template folder

* Removed extra Files

* Incorporated Review Comments
2024-12-27 13:29:25 +05:30
tarunsinghot 03d5a2dfcd
Added Percona MongoDB helm chart with backup support (#230)
* added helm chart for psmdb operator and db

* tested backup and restore

* updated doc

* added end of line in all files

* updated values file

* added templates for backup and restore files

* worked on feedbacks
2024-12-13 17:37:23 +05:30
Shubham Gupta 5dbad2900f
Remove (#246)
redis-operator, redis-cluster, redis-replication, redis-sentinel helm chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2024-12-13 16:58:36 +05:30
Ashwani Singh f105d47a46
Fix the parameter variable type 2024-10-15 12:50:29 +05:30
Ashwani Singh a1c4cd456a
Fix the variable name for msteams 2024-10-15 12:42:46 +05:30
Ashwani Singh bdc8a635d4
MS Team notification 2024-10-12 11:24:44 +05:30
Ashwani Singh 9ae246a8ad
Merge pull request #248 from OT-CONTAINER-KIT/fix-hardcoded-values
Remove hardcoded values
2024-09-29 22:16:40 +05:30
Ashwani Singh 150a3a3703 Bump helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh 2b71e438dc Remove extra line
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh 8ebc884cdc Remove hardcoded values
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-29 22:14:47 +05:30
Ashwani Singh a524e4d67e
Merge pull request #247 from OT-CONTAINER-KIT/victoriametrics
Victoriametrics
2024-09-23 10:10:41 +05:30
Ashwani Singh 6970ab0047 Fix the helm lint
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh d12dafe780 New release for victoriametrics
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh 595310ad8f Tune the victoriametrics storage
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh b1addb8156 Create victoriametrics chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh a443cc1027 VictoriaMetrics
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-09-23 10:08:11 +05:30
Ashwani Singh 927ed46445
Merge pull request #245 from OT-CONTAINER-KIT/feaute/enable-event
bump up pga helm chart version
2024-09-04 23:25:47 +05:30
Ashwani Singh d9dee00107
bump up pga helm chart version 2024-09-04 23:25:23 +05:30
Ashwani Singh 42f4386d51
Fix aliases 2024-09-03 09:51:51 +05:30
Ashwani Singh 13781c3145
Merge pull request #244 from OT-CONTAINER-KIT/loki-scalable
Bump up helm chart version
2024-09-03 09:36:09 +05:30
Ashwani Singh ee36252929
Bump up helm chart version 2024-09-03 09:35:20 +05:30
Ashwani Singh 1390a84052
Merge pull request #243 from OT-CONTAINER-KIT/loki-scalable
Restructure loki helm chart dir
2024-09-03 09:34:16 +05:30
Ashwani Singh 500dae93c1
Restructure loki helm chart dir 2024-09-03 09:33:35 +05:30
Ashwani Singh 5143d7f313
Merge pull request #241 from OT-CONTAINER-KIT/feature/tempo-stan
Feature/tempo standalone
2024-09-02 22:28:18 +05:30
Ashwani Singh 3903846786
Fix the typo 2024-09-02 22:27:35 +05:30
Ashwani Singh 9df77202ee
Example yaml for tempo 2024-09-02 22:27:02 +05:30
Ashwani Singh ebdeecba52
Fix the helm chart name 2024-09-02 22:23:07 +05:30
Ashwani Singh a588fed2b2
Create wrapper helmchart for tempo standalone 2024-09-02 22:22:07 +05:30
Ashwani Singh 1f23c76538
Merge pull request #240 from OT-CONTAINER-KIT/otel-operator
Fix the chart name
2024-09-02 16:12:34 +05:30
Ashwani Singh 708a057d55 Fix the chart name 2024-09-02 16:10:37 +05:30
Ashwani Singh 9d2174073e
Merge pull request #239 from OT-CONTAINER-KIT/otel-operator
Helm chart for otel operator
2024-09-02 16:04:43 +05:30
Ashwani Singh b9eee37b07 Helm chart for otel operator 2024-09-02 16:03:48 +05:30
Ashwani Singh 9d60b6fd1e
Merge pull request #235 from OT-CONTAINER-KIT/tapan_k8s_events
Collect Kubernetes events
2024-08-28 23:58:07 +05:30
Ashwani Singh 5d419a5776 Add Event collector in the helm
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-28 11:41:10 +05:30
Tapan Kumar Sahu 60f91ce7dc adding k8s-events to values.ymal
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-28 11:41:10 +05:30
Ashwani Singh f10d5f54ac
Merge pull request #234 from OT-CONTAINER-KIT/issues/217
Issues/217
2024-08-17 23:44:09 +05:30
Ashwani Singh 324a5cc042 Fix helm lint
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-17 23:40:48 +05:30
Ashwani Singh b6be300dcb Fix terminationGracePeriodSeconds
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-17 23:40:48 +05:30
Ashwani Singh fa42ff56d4 Create examples for sa and affinity
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-17 23:40:48 +05:30
Ashwani Singh e47fd97d41 Add pod affinity and topology constraints
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-17 23:40:48 +05:30
Ashwani Singh 63f44f09d4
Merge pull request #233 from OT-CONTAINER-KIT/pga-grafana-statefulset
Create grafana statefulset
2024-08-13 07:21:08 +05:30
Ashwani Singh 1166e82930
Create grafana statefulset 2024-08-13 07:20:44 +05:30
Ashwani Singh d0adc83e9e
Merge pull request #232 from OT-CONTAINER-KIT/fix-values-file
Fix values file
2024-08-13 00:33:15 +05:30
Ashwani Singh 4f3b7ba621
bump up chart version 2024-08-13 00:32:43 +05:30
Ashwani Singh 22cf2005b9
fix values 2024-08-13 00:32:08 +05:30
Ashwani Singh 60663844c7
Merge pull request #231 from OT-CONTAINER-KIT/prom-alerts
Datasource
2024-08-13 00:02:54 +05:30
Ashwani Singh 8f836c8633 Datasource 2024-08-13 00:00:29 +05:30
Ashwani Singh c9e77e71ec
Merge pull request #229 from OT-CONTAINER-KIT/prom-alerts
Add Alerts and Dashboards
2024-08-08 15:29:07 +05:30
Ashwani Singh 3c6428cf16
Add Alerts and Dashboards 2024-08-08 15:27:17 +05:30
Ashwani Singh 40b0388d26
Merge pull request #226 from OT-CONTAINER-KIT/loki-helm-chart
Loki helm chart
2024-08-02 16:07:57 +05:30
Ashwani Singh 00e4533cfd
Merge branch 'main' into loki-helm-chart 2024-08-02 16:07:50 +05:30
Ashwani Singh 403730ffae Rename logging 2024-08-02 16:05:56 +05:30
Ashwani Singh b3bd87ba29 Ignore DS_Store 2024-08-02 16:04:09 +05:30
Ashwani Singh 8bfefd838d datasource 2024-08-02 16:03:07 +05:30
Ashwani Singh da1b6ba087 Update grafana datasource 2024-08-02 16:03:07 +05:30
Ashwani Singh d700877362 ignore from lint
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 30c6966eee exclude dependency chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 51bc70820e Ignore community chart to lint validation
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 8dc905c75a Fix 141:43 [brackets] too many spaces inside brackets
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh fa59e776e2 Fix indentation of yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 3f437d57ff Fix no new line character at the end of file
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 3df9b650eb Change chart path
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 424b1d50c1 Add maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 81473ad6ce Ignore Chart.lock file
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 562ff2a6d8 Remove readme file from wrong place
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh d82c0b5c9f Remove percona files
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 0c5c2da504 Fix the typo
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh a362d56613 Create datasources
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh ba70b8ab93 Remove spaces
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 2ab1ba98ec Create readme
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
ayushh09 aa0ca72635 Create Monitoring_Setup_with_Helm.md
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 074a4430db Add tempo datasource
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 6792931839 Hard cord the helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 649a180119 Hard cord the helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 2994870f9b fix the namespace
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:06 +05:30
Ashwani Singh 0dfb3e00a1 Resolve dep
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:03:04 +05:30
Ashwani Singh 2c009495fa Resolve dep
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:46 +05:30
Ashwani Singh 00eb83cd8e disable thanos in default values yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:46 +05:30
Ashwani Singh e13b4b75f1 Remove dependency chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:46 +05:30
Ashwani Singh a0e17bd7ff Add external labels to make the Prometheus cluster distinguishable
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:45 +05:30
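External labels are what let a central Thanos layer tell otherwise identical Prometheus replicas apart. Assuming kube-prometheus-stack-style values for the PGA (Prometheus/Grafana/Alertmanager) chart, a sketch of the two commits above:

    prometheus:
      prometheusSpec:
        externalLabels:
          cluster: prod-eks-01     # hypothetical label value
        thanos: {}                 # a non-null block enables the Thanos sidecar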
Ashwani Singh daffd824f7 thanos sidecar
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:45 +05:30
Ashwani Singh 5018233aed thanos setup
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:45 +05:30
Ashwani Singh df0180c07a ds for grafana
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:45 +05:30
Ashwani Singh f1c0862b47 PGA helm chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:45 +05:30
Ashwani Singh 02f3d40218 PGA helm chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 16:01:42 +05:30
Khushi Malhotra e46df16ff4 Update Chart.yaml 2024-08-02 15:59:10 +05:30
Ashwani Singh 862af169e8 Fix parameter name
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 15:59:10 +05:30
Ashwani Singh 94eb015857 Bump up chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 15:59:10 +05:30
Ashwani Singh c99aa713cd Fix Maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 15:59:10 +05:30
Ashwani Singh 5733d03b01 Add deployment strategy feature
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-08-02 15:59:10 +05:30
Ashwani Singh aa3c55ee27
Merge pull request #225 from OT-CONTAINER-KIT/grafana-datasource
Grafana datasource
2024-08-02 15:57:46 +05:30
Ashwani Singh 797b4ff43f datasource 2024-08-02 15:56:58 +05:30
Ashwani Singh 5a14fbb599 Setup standalone loki 2024-08-02 15:05:03 +05:30
Ashwani Singh f20d0d3fbf Config for distributed loki 2024-08-02 14:54:16 +05:30
Ashwani Singh 7be42ef493 loki scalable 2024-08-02 14:50:24 +05:30
Ashwani Singh 8dbb152546 Update grafana datasource 2024-08-01 09:52:23 +05:30
Ashwani Singh fb86adf536
Merge pull request #215 from OT-CONTAINER-KIT/helm-pga-setup
PGA helm chart
2024-07-31 23:27:28 +05:30
Ashwani Singh a17d450042 ignore from lint
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh f761e135c5 exclude dependency chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 89c9c576bb Ignore community chart to lint validation
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 99a3f12300 Fix 141:43 [brackets] too many spaces inside brackets
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 269a50e601 Fix indentation of yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 5d4ba97d81 Fix no new line character at the end of file
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 007264c828 Change chart path
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 356e19aca0 Add maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 350fbd2410 Ignore Chart.lock file
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh dbda50060d Remove readme file from wrong place
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 1068c594e2 Remove percona files
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 523f99ae5d Fix the typo
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 429b3c5005 Create datasources
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 07cb782176 Remove spaces
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 0f91686a78 Create readme
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
ayushh09 bc3bc3653e Create Monitoring_Setup_with_Helm.md
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 11a2ceb084 Add tempo datasource
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 1da31e0a57 Hard cord the helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 7f9e1a15f8 Hard cord the helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 2bd9699230 fix the namespace
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh ab6b793689 Resolve dep
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh a98b4311fe Resolve dep
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 4b3c52f7be disable thanos in default values yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 6053ba62c9 Remove dependency chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 2690c7145b Add external labels to make the Prometheus cluster distinguishable
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 6256b982cf thanos sidecar
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh fb62eff635 thanos setup
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 26f368385e ds for grafana
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 23:19:15 +05:30
Ashwani Singh 68d9bac31a PGA helm chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-31 22:55:39 +05:30
Khushi Malhotra d5edc1d30d
Update Chart.yaml 2024-07-30 15:16:07 +05:30
Ashwani Singh eb041277fe
Merge pull request #223 from OT-CONTAINER-KIT/ms-deployment-strategy
Ms deployment strategy
2024-07-30 00:03:54 +05:30
Ashwani Singh 7f4e354d75 Fix parameter name
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-29 23:58:40 +05:30
Ashwani Singh ccbc743d42 Bump up chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-29 23:58:40 +05:30
Ashwani Singh 2ef6f7f643 Fix Maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-29 23:58:40 +05:30
Ashwani Singh 658b1b58d3 Add deployment strategy feature
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-29 23:58:40 +05:30
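A deployment-strategy feature of this kind usually surfaces the native Kubernetes strategy block through values. A hedged sketch with assumed key names (the "Fix parameter name" commit above suggests the real key changed at least once):

    # values.yaml
    deploymentStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 0

    # templates/deployment.yaml
      strategy:
        {{- toYaml .Values.deploymentStrategy | nindent 4 }}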
Ashwani Singh d2a05e84e3 . 2024-07-24 00:21:44 +05:30
Ashwani Singh a7ec240b7a promtail 2024-07-18 21:19:05 +05:30
Ashwani Singh 34950430e4 loki 2024-07-18 16:02:56 +05:30
Ashwani Singh 555b0ee4c6
Merge pull request #209 from OT-CONTAINER-KIT/ms-health-check
Implement application health check and hpa
2024-07-09 12:43:25 +05:30
Ashwani Singh 334cd43dd6 Fix maintainers list
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 5593407ee6 Update maintainers list
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 7b2e601284 Fix readme
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh da0962b89b remove trailing spaces
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 58be3fe189 demo-dev namespace
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh d7b80fa6d2 Change namespace
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh d6045e8716 Generate helm docs
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh d4a1e8f53f validate k8s api version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 7e2f84d8da example
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 0b4593f389 create notes
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
tripathishikha1 4434b007fd Update deploy-nginx.yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
tripathishikha1 688f9a55d4 Update deploy-nginx.yaml
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 6b9a43f6eb create pvc and configmap
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 898d793b96 Update according to recommendation
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:42:34 +05:30
Ashwani Singh 6576ecbbe5 Implement application health check and hpa
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-09 12:40:07 +05:30
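For the HPA half of this change, a chart typically renders an autoscaling/v2 HorizontalPodAutoscaler from a small values block. A sketch with assumed keys:

    hpa:
      enabled: true
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 70

    # rendered, roughly:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: {{ .Release.Name }}
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: {{ .Release.Name }}
      minReplicas: {{ .Values.hpa.minReplicas }}
      maxReplicas: {{ .Values.hpa.maxReplicas }}
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: {{ .Values.hpa.targetCPUUtilizationPercentage }}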
Ashwani Singh 1a5113d3b2
Merge pull request #207 from OT-CONTAINER-KIT/ms-todo-list
Ms todo list
2024-07-04 12:01:45 +05:30
Ashwani Singh 640b1b1059 fix the heading
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-03 16:10:31 +05:30
Ashwani Singh 05bacde3eb fix typo
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-03 16:09:44 +05:30
Ashwani Singh 7f14587498 todo list for ot microservices helm chart
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-03 16:02:59 +05:30
Ashwani Singh d8d2508681
Merge pull request #202 from OT-CONTAINER-KIT/ms-helper-tpl
Ms helper tpl
2024-07-02 17:35:11 +05:30
Ashwani Singh 8195e90ce6 add helm chart maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 16:54:55 +05:30
Ashwani Singh 4660f0e7b2 add helm chart maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 16:35:57 +05:30
Ashwani Singh eee0afb4fb add helm chart maintainers
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 16:32:42 +05:30
Ashwani Singh a30775cc73 remove trailing spaces
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 16:27:09 +05:30
Ashwani Singh 8dd4b7d325 remove trailing spaces
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 13:59:49 +05:30
Ashwani Singh 6f23fab566 fix helm chart version
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 12:16:19 +05:30
Ashwani Singh e928649e6e fix parameters
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 12:07:37 +05:30
Ashwani Singh 1822d4ad20 fix helper tempate
Signed-off-by: Ashwani Singh <ashwani.singh@opstree.com>
2024-07-02 12:07:37 +05:30
Shubham Gupta b28e579e7e
[Release] : Helm chart for the v0.16.0 (#190)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2024-05-20 18:34:57 +05:30
Shubham Gupta 3295652d13
fix: Drop Cert manager as a helm chart dependency (#191)
* fix: Drop Cert manager as a helm chart dependency

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix: version bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2024-05-20 18:32:20 +05:30
Sandeep Rawat a9c8caf0e5
Merge pull request #181 from rishops/basic-mircoservice-chart
feat: add basic helm chart for ms deployment
2024-01-30 11:18:32 +05:30
rishops 4b55fa8f34 feat: add basic helm chart for ms deployment 2024-01-30 11:07:43 +05:30
Andrey Kartashov c54e65c3fa
remove unnecessary quotes (#171)
* fix for #170

Signed-off-by: Andrei Kartashov <a@ioot.xyz>

* Update redis-cluster.yaml

Signed-off-by: Andrei Kartashov <a@ioot.xyz>

* Update redis-replication.yaml

Signed-off-by: Andrei Kartashov <a@ioot.xyz>

* Update redis-sentinel.yaml

Signed-off-by: Andrei Kartashov <a@ioot.xyz>

* fix CI

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* version  bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix CI

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Andrei Kartashov <a@ioot.xyz>
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Co-authored-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-11-08 13:24:58 +05:30
EStork09 cfe2734d3f
Added name overwrite function to redis charts (#168)
* fix redis-sentinel selector role value (#164)

* fix redis-sentinel selector role value

Signed-off-by: whzghb <631064936@qq.com>

* fix lints

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix linl-2

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: whzghb <631064936@qq.com>
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Co-authored-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Signed-off-by: EStork09 <estork@live.com>

* Added name overwrite function to redis charts

Signed-off-by: EStork09 <estork@live.com>

* Bumpped versions

Signed-off-by: EStork09 <estork@live.com>

* Moved fields for name under specific configuration

Signed-off-by: EStork09 <estork@live.com>

* Adjusted values.yaml to address linting report

Signed-off-by: EStork09 <estork@live.com>

* Addressed linting errors

Signed-off-by: EStork09 <estork@live.com>

---------

Signed-off-by: whzghb <631064936@qq.com>
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Signed-off-by: EStork09 <estork@live.com>
Co-authored-by: whzghb <41436057+whzghb@users.noreply.github.com>
Co-authored-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-18 17:50:55 +05:30
whzghb b5a8effe90
fix redis-sentinel selector role value (#164)
* fix redis-sentinel selector role value

Signed-off-by: whzghb <631064936@qq.com>

* fix lints

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix linl-2

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: whzghb <631064936@qq.com>
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Co-authored-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-16 15:12:13 +05:30
Shubham Gupta b800ff9c69
Make chart lint and fix the deprecated (#163)
* Make chart lint and fix the deprecated

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* lint always

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 13:09:27 +05:30
Shubham Gupta a4fc5c00e0
update chart versions (#162)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 05:13:51 +05:30
Shubham Gupta edf8b6186c
add config (#161)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 05:10:59 +05:30
Shubham Gupta 0a703bcc57
add python (#160)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 05:05:33 +05:30
Shubham Gupta 771c90fc2c
[Fix] : Releaser Add Tool kit (#159)
* fix releaser

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add tool kit

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 04:50:28 +05:30
Shubham Gupta b8ef66d9d7
fix releaser (#158)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 04:47:51 +05:30
Shubham Gupta 7266aa2b95
[Release] : Release Charts (#157)
* add redis chart for latest version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* redis replication charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* release sentinel charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove name

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove spacing

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix arguments

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix name

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : sidecar image name

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update crds

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix intall workflow

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fixes

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* try

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove storage from sentinel

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 04:37:56 +05:30
Shubham Gupta 969264f867
[Fix] : Failing Workflow test-chart.yaml (#154)
* fix the release of chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove the ./

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix the install

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* only install redis

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-13 02:07:38 +05:30
Shubham Gupta 571b5ee06a
release chart fixes (#153)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-11 01:33:51 +05:30
Shubham Gupta a459132dce
[ Fix ] : Redis Cluster Chart (#152)
* space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix name

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix lint

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix helpers.tpl

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-11 01:00:06 +05:30
Shubham Gupta 55f12b4fdb
fix workflow (#147)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-02 14:31:59 +05:30
Shubham Gupta 8ee0b6dd81
update dependency (#146)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-02 14:15:43 +05:30
Shubham Gupta 8c6f1f24dd
workflow (#145)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-02 14:11:09 +05:30
Shubham Gupta 2cfbfa94c4
releaser + bump (#144)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-10-02 13:53:09 +05:30
Shubham Gupta 1625cb5e4d
Bump Chart Version (#143)
* fix operator chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-29 19:13:08 +05:30
Shubham Gupta 2be4a0fada
[ Release] : Helm chart v0.15.1 Redis Cluster and Redis Operator (#141)
* fix operator chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update crds

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix operator

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* quick release for app verison v0.15.1

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-29 19:03:18 +05:30
Shubham Gupta 50a7aa7a90
[Add] : Update the status crd (#135)
* Add : status field

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fixes small

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-11 02:27:27 +05:30
Shubham Gupta 967773e71d
Fix : Helm Chart Cert Issues (#134)
* Fix : Helm Chart

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : markdown lint

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-09 19:10:43 +05:30
Shubham Gupta 95a9cd910e
Update README.md 2023-09-05 13:39:47 +05:30
syedsadath-17 4c3018d4c0
[Feature]: Cert-Manager Dependency (#133)
* add: [Feature] Cert-Manager Dependency

Signed-off-by: sadath-12 <sadathsadu2002@gmail.com>

* add: [Feature]Optional crd install

Signed-off-by: sadath-12 <sadathsadu2002@gmail.com>

* push changes

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* wnwanted

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: sadath-12 <sadathsadu2002@gmail.com>
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
Co-authored-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-05 12:58:33 +05:30
Shubham Gupta c61d42b2b6
[Feature] : Add Issue Template to Helm chart Repo (#130)
* fix small bugs

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* uncomment

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Add : templates

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-04 13:01:12 +05:30
udit-purplle ecfa834ccd
apiVersion upgraded (#125)
Previous version depreciated
2023-09-04 12:55:43 +05:30
Shubham Gupta 6f4f5eeffe
fix small bugs (#128)
* fix small bugs

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* uncomment

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-04 02:08:25 +05:30
Shubham Gupta d770e30ced
[Add] : Helm chart for the support of multiple versions (#126)
* crd for multiple versions

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update crd conversion

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* certificate + deployment + service

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : cert-manager

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : deployment

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update : app & chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add me to maintainers

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : secret name in deployment

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* manage labels

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : lables

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Add: annotation cert-manager.io/inject-ca-from

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* move file

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* rerun

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : container

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Add lables on cert-manager

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* comment fields in crd

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : namesapce in crd field

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-02 20:01:34 +05:30
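The cert-manager.io/inject-ca-from annotation added above is what lets cert-manager's CA injector patch a CRD conversion webhook with the serving certificate's CA bundle. A sketch of the usual wiring (resource names assumed, not taken from this chart):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      annotations:
        cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/serving-cert   # <namespace>/<Certificate name>
    spec:
      conversion:
        strategy: Webhook
        webhook:
          conversionReviewVersions: ["v1"]
          clientConfig:
            service:
              name: webhook-service        # hypothetical Service
              namespace: {{ .Release.Namespace }}
              path: /convert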
Shubham Gupta de104bb442
[Feature] : Add cert-manager to actions (#127)
* modify action

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add flag

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-09-01 04:08:56 +05:30
Shubham Gupta 26f0f09704
Refactor : GitHub actions (#120)
* bump operator chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove trailing space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* operator version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix pod security context left in redis cluster

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* New Release For Helm

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* modify the Github action of the charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* replace engineerd kind setup with helm/kind setup

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* only test Redis Charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add "/"

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* get logs

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Install all the helm charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-12 17:42:03 +05:30
Shubham Gupta d116afdc9a
Upgrade Helm chart from v1 to v2 (#121)
* bump operator chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove trailing space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* operator version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix pod security context left in redis cluster

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* New Release For Helm

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* move the redis chart from version 2 to version 3

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-12 00:56:49 +05:30
Shubham Gupta 721bff5f8e
Release v0.15.3 Of the Helm chart (#119)
* bump operator chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove trailing space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* operator version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix pod security context left in redis cluster

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* New Release For Helm

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-11 22:57:43 +05:30
Shubham Gupta 269ced8966
Fix left podSecurity Context (#113)
* bump operator chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove trailing space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* operator version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix pod security context left in redis cluster

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-04 00:33:18 +05:30
Shubham Gupta 7bdfc51f34
Bump chart version (#112)
* bump operator chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bump chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* remove trailing space

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* operator version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-03 21:05:41 +05:30
Shubham Gupta 9a0adbfd3b
bump operator chart version (#111)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-03 19:49:09 +05:30
Shubham Gupta 595c193836
Update chart version (#108) 2023-07-03 19:42:54 +05:30
Shubham Gupta 915cab8469
Update sentinel CRD (#107)
* Upgrade the charts for the v0.15.0

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Fix Bug : Security Context Field Change

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Update sentinel CRD

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* sync charts

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix error

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* new release

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Revert "sync charts"

This reverts commit 4f4f4f8492.

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-03 19:32:14 +05:30
Shubham Gupta c4411f504d
Name error in Security context (#106)
* Upgrade the charts for the v0.15.0

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Fix Bug : Security Context Field Change

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* bug : name error

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-03 18:49:49 +05:30
Shubham Gupta 25bcbd661e
Bug : Field Name change from securityContext to podSecurityContext (#105)
* Upgrade the charts for the v0.15.0

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* Fix Bug : Security Context Field Change

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-07-03 15:46:37 +05:30
xdvpser d15aae58c0
redis-operator: Add `extraArgs` option argument to allow providing additional arguments. (#82) (#83) 2023-07-03 15:46:22 +05:30
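An extraArgs value like the one added here usually appends verbatim flags to the operator container's args. A generic sketch (the flag shown is purely illustrative):

    # values.yaml
    extraArgs:
      - --enable-leader-election=false   # hypothetical flag

    # templates/deployment.yaml
      args:
        {{- range .Values.extraArgs }}
        - {{ . }}
        {{- end }}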
Andrei Dan Bucur 43e1a46f5b
feature: Add pod annotations to redis cluster template. (#89)
Signed-off-by: Andrei Dan Bucur <andreidan.bucur@logmein.com>
2023-06-30 03:05:52 +05:30
Shubham Gupta 50023a90c6
Upgrade the charts for the v0.15.0 (#104)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-06-30 02:55:00 +05:30
Shubham Gupta 87c1d6189b
Preparing Helm for th Release v0.15.0
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-06-30 02:28:39 +05:30
Shubham Gupta a1a1c55225
fix mongo storage CLass (#92)
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-04-21 00:23:41 +05:30
Shubham Gupta 3003dacf21
Merge pull request #87 from shubham-cmyk/mongodb
Mongodb : Rbac Issue with Role Binding Incorrect Name
2023-04-03 17:57:51 +05:30
Shubham Gupta 80e419af87
Add Maintainer Name
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-04-03 17:55:35 +05:30
Shubham Gupta efea04a4eb
chart bump
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-04-03 17:53:18 +05:30
Shubham Gupta 9c1a03b7be
Fix : Rbac issue role binding error
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-04-03 17:52:48 +05:30
Shubham Gupta d6a8257597
Merge pull request #86 from shubham-cmyk/sentinelACL
Fix  : Additional config in sentinel
2023-03-30 11:44:06 +05:30
Shubham Gupta b560a1700f
fix : additional config in sentinel
Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-03-30 11:37:18 +05:30
Shubham Gupta 085046c183
[Bug] : Fix Redis-Cluster external service. (#79)
* fix : redis external service

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add : version bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add : version bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-02-24 16:00:42 +05:30
genofire 77cd9e84db
fix: appVersion v0.14.0 in helmchart (and use as default value) (#71)
* fix: appVersion in helmchart (and as default value)

Signed-off-by: genofire <geno+dev@fireorbit.de>

* fix: increase chart version

---------

Signed-off-by: genofire <geno+dev@fireorbit.de>
2023-02-23 15:11:35 +05:30
Shubham Gupta dd0de2f8f9
Feature[Add] : Helm Chart for Sentinel and Replication (#72)
* fix:  additionalRedisConfig

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add TLS  on redis

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add: TLS redis Cluster

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* add: sentinel + replication

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* service monitor sync

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update: Chart Version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : version Bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* fix : lint


Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-02-22 14:36:17 +05:30
iamabhishek-dubey 86183b42e0 Released version of redis cluster v0.14.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-02-20 23:27:16 +05:30
iamabhishek-dubey d95339cf60 Updated helm chart for pdb issue
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-02-20 23:15:42 +05:30
iamabhishek-dubey 4b6e4bf7fa Updated helm chart for pdb issue
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-02-20 23:14:26 +05:30
iamabhishek-dubey acea94947f Fixed conflict for redis operator
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-02-20 23:12:57 +05:30
Shubham Gupta b420c2e031
[Feature] : Add CRD of redis replication and redis replication (#66)
* add crd  Redis-Sentinel

* add Crd Replication

* add : redis-replication CRD

* add : redis-sentinel CRD

* fix Bump

* Revert "fix Bump"

This reverts commit 8bc0706dac.

* fix : Bump

* fix : lint bump

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

* update: chart version

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>

---------

Signed-off-by: Shubham Gupta <iamshubhamgupta2001@gmail.com>
2023-02-20 13:56:22 +05:30
iamabhishek-dubey 9a086bc1b5 Fixed crds
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-01-17 14:21:06 +05:30
Abhishek Dubey 2dd8175abf
Released v0.13.0 of redis operator (#60)
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>

Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2023-01-16 17:48:41 +05:30
Abdul Rauf dded5a484a
ci: update github actions (#52)
* ci: update actions/checkout to v3

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

* ci: update actions/setup-python to v4

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

* ci: update azure/setup-helm to v3

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

* ci: update helm/chart-testing-action to v2.3.1

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>
2023-01-07 23:06:11 +05:30
Abdul Rauf b3d08f8205
fix(charts/redis-cluster): pdb misconfiguration issue (#50)
* charts/redis-cluster: conditionally set minAvailable and maxUnavailable

Both shouldn't be set at once

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

* fix(charts/redis-cluster): pdb issue because of setting both maxUnavailable and minAvailable

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

* bump redis-cluster version to 0.12.2

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>

Signed-off-by: Abdul Rauf <abdulraufmujahid@gmail.com>
2022-11-10 16:16:31 +05:30
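The constraint behind #50 is Kubernetes itself: a PodDisruptionBudget may set minAvailable or maxUnavailable, never both, so the template has to pick one. A sketch of the conditional with assumed value keys (policy/v1 shown; the "Fix version of PDB" commits below suggest the chart also juggled policy/v1beta1):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: {{ .Release.Name }}-pdb      # hypothetical name
    spec:
      {{- if .Values.pdb.minAvailable }}
      minAvailable: {{ .Values.pdb.minAvailable }}
      {{- else if .Values.pdb.maxUnavailable }}
      maxUnavailable: {{ .Values.pdb.maxUnavailable }}
      {{- end }}
      selector:
        matchLabels:
          app: {{ .Release.Name }}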
Abhishek Dubey b9fb08c0eb
[BugFix]Fixed PDB definition issue (#47)
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-17 19:25:37 +05:30
Sandeep Rawat 2c488fb858
Merge pull request #46 from iamabhishek-dubey/redis-0.12.0
[Development]Updated redis operator version 0.12.0
2022-10-17 09:06:04 +05:30
iamabhishek-dubey 0453331887 Fixed CI for README checks
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 16:08:27 +05:30
iamabhishek-dubey 663e60aba1 Fixed CI for README checks
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 16:04:49 +05:30
iamabhishek-dubey 540274650a Fixed CI warnings
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 16:01:12 +05:30
iamabhishek-dubey 69e6220ff0 Fixed CI warnings
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 15:56:39 +05:30
iamabhishek-dubey 0e7aa3b034 Fixed CI warnings
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 15:54:24 +05:30
iamabhishek-dubey 50a111ae79 Updated redis cluster README file
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 15:48:34 +05:30
iamabhishek-dubey fc6d6fbddc Updated redis operator version 0.12.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 15:45:31 +05:30
iamabhishek-dubey 9410d5503f Updated redis operator version 0.12.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-10-13 15:43:19 +05:30
iamabhishek-dubey b52182f145 Updated logging operator stack v0.4.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-09-03 12:30:40 +05:30
iamabhishek-dubey c669325610 Updated helm chart with logging operator v0.3.2
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-08-07 02:14:42 +05:30
iamabhishek-dubey e416210b52 Fixed resource issue in elastic and kibana
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-16 00:16:40 +05:30
iamabhishek-dubey 81ef7b21c4 Added logic for adding customlabels
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-12 19:24:47 +05:30
iamabhishek-dubey 91a8804901 Added namespace for watch
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-12 16:02:39 +05:30
iamabhishek-dubey 785a15826f Fix version of PDB
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-05 14:32:34 +05:30
iamabhishek-dubey 8f510089a6 Fix version of PDB
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-05 14:24:41 +05:30
iamabhishek-dubey 02da88341a Updated version for redis cluster 0.11.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-05 12:58:40 +05:30
Raphael Zöllner 6d51ea2a57
Fix affinity indentation of redis-cluster helm chart (#35)
Signed-off-by: Raphael Zöllner <raphael.zoellner@regiocom.com>
2022-07-05 12:54:26 +05:30
iamabhishek-dubey e62d4b328e Updated redis standalone to v0.11.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-05 12:52:59 +05:30
iamabhishek-dubey c73e3e13a2 Added redis operator version 0.11.0
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-07-05 12:48:26 +05:30
iamabhishek-dubey 4cb9db4323 Added ingress for kibana and external service
Signed-off-by: iamabhishek-dubey <abhishekbhardwaj510@gmail.com>
2022-05-31 17:36:41 +05:30
145 changed files with 6799 additions and 6568 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,29 @@
---
name: Bug report
about: Create a bug report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
**Does this issue reproduce with the latest release?**
**What operating system and processor architecture are you using (`kubectl version`)?**
<details><summary><code>kubectl version</code> Output</summary><br><pre>
$ kubectl version
</pre></details>
**What did you do?**
<!--
If possible, provide a recipe for reproducing the error.
A detailed sequence of steps describing what to do to observe the issue is good.
A complete runnable bash shell script is best.
-->
**What did you expect to see?**
**What did you see instead?**


@@ -0,0 +1,16 @@
---
name: Documentation Update
about: Propose changes to the project's documentation
labels: documentation
---
<!-- Please answer these questions before submitting your documentation update. Thanks! -->
**Which document needs to be updated?**
<!-- Specify the document or section that needs an update. -->
**Expected changes**
<!-- Describe what you'd like to see updated. -->
**Additional context**
<!-- Add any other context or screenshots about the documentation update request here. -->


@@ -0,0 +1,22 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: 'enhancement'
assignees: ''
---
<!-- Please answer these questions before submitting your feature request. Thanks! -->
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->


@@ -0,0 +1,13 @@
---
name: General Question
about: Ask a question or need support
labels: question
---
<!-- Please answer these questions before submitting your question. Thanks! -->
**Describe your question**
<!-- Provide details about your question or the support needed. -->
**Additional context**
<!-- Add any other context or screenshots about the question here. -->


@ -8,29 +8,31 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v1
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.5.0
- uses: actions/setup-python@v2
version: v3.16.2
- uses: actions/setup-python@v4
with:
python-version: 3.7
python-version: '3.9'
check-latest: true
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.0.1
uses: helm/chart-testing-action@v2.6.0
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config ct.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
echo "changed=true" >> $GITHUB_OUTPUT
fi
- name: Run chart-testing (lint)
run: ct lint --config ct.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.1.0
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install)
run: ct install --config ct.yaml
run: |
ct lint --config ct.yaml


@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Lint Code Base
uses: docker://github/super-linter:v3.12.0
env:
@ -25,3 +25,4 @@ jobs:
VALIDATE_YAML: false
DEFAULT_BRANCH: main
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FILTER_REGEX_EXCLUDE: .*(README\.md|NOTES.txt).*


@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Fetch history
run: git fetch --prune --unshallow
@ -21,12 +21,45 @@ jobs:
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
# See https://github.com/helm/chart-releaser-action/issues/6
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.16.2
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Add Helm Repository
run: |
curl -fsSLo get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add jetstack https://charts.jetstack.io
- name: Update Helm Repositories
run: helm repo update
- name: Update Chart Dependencies for karpenter
run: helm dependency update charts/karpenter
- name: List Changed Charts
id: list-changed
run: |
changed_charts=$(ct list-changed --config ct.yaml)
echo "Changed charts: $changed_charts"
echo "changed_charts=$changed_charts" >> $GITHUB_ENV
- name: Package and Release Charts
run: |
for CHART in ${{ steps.list-changed.outputs.changed_charts }}; do
echo "Packaging $CHART..."
helm package charts/$CHART
done
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.0.0
uses: helm/chart-releaser-action@v1.5.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

.github/workflows/test-charts.yaml

@ -0,0 +1,37 @@
name: Install and Test Helm Chart
on: pull_request
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create k8s Kind Cluster
uses: helm/kind-action@v1.8.0
with:
cluster_name: kind
- name: Install Helm
uses: azure/setup-helm@v3
with:
version: v3.16.2
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Install and test Helm charts
run: |
kubectl cluster-info --context kind-kind
changed=$(ct list-changed --config ct.yaml)
ct install --config ct.yaml || true

.gitignore

@ -1 +1,3 @@
*.tgz
Chart.lock
.DS_Store


@ -14,14 +14,6 @@ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
You can then run `helm search repo ot-helm` to see the charts.
### Helm Charts List
Currently supported helm charts are:-
- [Redis Operator](./charts/redis-operator)
- [Redis Standalone](./charts/redis)
- [Redis Cluster](./charts/redis-cluster)
- [K8s Vault Webhook](./charts/k8s-vault-webhook)
### Pre-Requisites
@ -47,3 +39,9 @@ Useful Helm Client Commands:
- View available charts: `helm search repo`
- Install a chart: `helm install my-release ot-helm/<package-name>`
- Upgrade your application: `helm upgrade`
## Contact Information
This project is managed by [OpsTree Solutions](http://opstree.com). For any queries or suggestions, you can reach out to us at [opensource@opstree.com](mailto:opensource@opstree.com).
Join our Slack Channel: [#redis-operator](https://opstree.slack.com/archives/C05MBRB50JG).

charts/base/Chart.yaml

@ -0,0 +1,21 @@
---
apiVersion: v1
description: A base helm chart which will be used by different helm charts
engine: gotpl
maintainers:
- name: iamabhishek-dubey
email: "abhishek.dubey@opstree.com"
url: https://github.com/iamabhishek-dubey
name: base
sources:
- https://github.com/ot-container-kit/helm-charts
version: 0.1.0
appVersion: "0.1.0"
home: https://github.com/ot-container-kit/helm-charts
keywords:
- deployment
- base
- opstree
- kubernetes
- openshift
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg

charts/base/README.md

@ -0,0 +1,28 @@
# base
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
A base helm chart which will be used by different helm charts.
**Homepage:** <https://github.com/ot-container-kit/helm-charts>
## Maintainers
| Name | Email | Url |
|-------------------|----------------------------|----------------------------------------|
| iamabhishek-dubey | abhishek.dubey@opstree.com | <https://github.com/iamabhishek-dubey> |
## Source Code
* <https://github.com/ot-container-kit/helm-charts>
## Values
| Key | Type | Default | Description |
|----------------------------|--------|---------|--------------------------------------------------------------------------------|
| config | object | `{}` | ConfigMap key value pair to create configs |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the fullname template |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
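For illustration, a minimal values override using only the keys documented above might look like this (the role ARN and config keys are placeholders, not chart defaults):

```yaml
serviceAccount:
  # Annotations added to the generated ServiceAccount (placeholder ARN)
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/example-role"
  # Leave empty to fall back to the fullname template
  name: ""
# Key/value pairs rendered into a ConfigMap
config:
  APP_ENV: "production"
  LOG_LEVEL: "info"
```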


@ -0,0 +1,13 @@
{{- define "configmap" -}}
{{- if .Values.base.config -}}
{{- $top := . -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "base.fullname" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
data:
{{- toYaml .Values.base.config | nindent 2 -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,42 @@
{{/*
Create a default fully qualified app name.
We truncate the service name (i.e. .Release.Name) at 59 chars because some Kubernetes name fields are limited to 63 (by the DNS naming spec).
We append a 4-character chart-type suffix at the end: -web, -crn, -wrk, -job, or -sts.
*/}}
{{- define "base.fullname" -}}
{{- $name := .Release.Name | trunc 59 | trimSuffix "-" }}
{{- printf "%s-%s" $name .Chart.Name }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "base.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "base.labels" -}}
helm.sh/chart: {{ include "base.chart" . }}
{{ include "base.selectorLabels" . }}
{{- if .Release.Revision }}
app.kubernetes.io/version: {{ .Release.Revision | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "base.selectorLabels" -}}
app.kubernetes.io/name: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "base.serviceAccountName" -}}
{{- default (include "base.fullname" .) .Values.base.serviceAccount.name }}
{{- end }}
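Taken together, for a release named `payments` these helpers would render metadata along these lines (an illustrative sketch, assuming Helm as the release service and revision 1):

```yaml
metadata:
  name: payments-base              # base.fullname: <release>-<chart>
  labels:
    helm.sh/chart: base-0.1.0      # base.chart: <name>-<version>
    app.kubernetes.io/name: payments
    app.kubernetes.io/version: "1" # .Release.Revision, quoted
    app.kubernetes.io/managed-by: Helm
```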


@ -0,0 +1,12 @@
{{- define "serviceAccount" -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "base.serviceAccountName" . }}
labels:
{{- include "base.labels" . | nindent 4 }}
{{- with .Values.base.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

charts/base/values.yaml

@ -0,0 +1,13 @@
# Default values for base template.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
serviceAccount:
# -- Annotations to add to the service account
annotations: {}
# -- The name of the service account to use.
# -- If not set and create is true, a name is generated using the fullname template
name: ""
# -- ConfigMap key value pair to create configs
config: {}


@ -7,8 +7,8 @@ maintainers:
name: elasticsearch
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.3.1
appVersion: "0.3.0"
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator


@ -2,6 +2,8 @@
Elasticsearch is a popular NoSQL database which gets used for multiple purposes like:- database, logging, searching, etc. This helm chart needs [Logging Operator](../logging-operator) inside the Kubernetes cluster. The elasticsearch definition can be modified or changed via [values.yaml](./values.yaml).
Documentation -> https://ot-logging-operator.netlify.app/
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/elasticsearch --namespace <namespace>
@ -31,6 +33,8 @@ $ helm delete <my-release> --namespace <namespace>
|----------------------------------|-----------------|--------------------------------------------------------------------|
| clusterName | elastic-prod | Name of the elasticsearch cluster |
| esVersion | 7.17.0 | Major and minor version of elasticsearch |
| esPlugins | [] | Plugins list to install inside elasticsearch |
| esKeystoreSecret | - | Keystore secret to include in elasticsearch cluster |
| customConfiguration | {} | Additional configuration parameters for elasticsearch |
| esSecurity.enabled | true | To enable the xpack security of elasticsearch |
| esMaster.replicas | 3 | Number of replicas for elasticsearch master node |
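The new `esPlugins` and `esKeystoreSecret` parameters can be combined, mirroring the commented defaults in [values.yaml](./values.yaml):

```yaml
esPlugins: ["repository-s3"]      # plugins installed into the elasticsearch nodes
esKeystoreSecret: keystore-secret # existing Secret merged into the elasticsearch keystore
```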


@ -8,4 +8,4 @@ Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l 'role in (master,data,ingestion,client)'
For getting the credential for admin user:
kubectl get secrets -n {{ .Release.Namespace }} {{ .Release.Name }} -o jsonpath="{.data.password}" | base64 -d
kubectl get secrets -n {{ .Release.Namespace }} {{ .Release.Name }}-password -o jsonpath="{.data.password}" | base64 -d


@ -13,6 +13,12 @@ metadata:
spec:
esClusterName: {{ .Values.clusterName }}
esVersion: "{{ .Values.esVersion }}"
{{- if .Values.esPlugins }}
esPlugins: {{ .Values.esPlugins }}
{{- end }}
{{- if .Values.esKeystoreSecret }}
esKeystoreSecret: {{ .Values.esKeystoreSecret }}
{{- end }}
esMaster:
replicas: {{ .Values.esMaster.replicas }}
storage:


@ -5,6 +5,9 @@ esVersion: "7.17.0"
#customConfiguration:
# cluster.routing.allocation.disk.watermark.low: "87%"
#esPlugins: ["repository-s3"]
#esKeystoreSecret: keystore-secret
esMaster:
replicas: 3
storage:


@ -7,8 +7,8 @@ maintainers:
name: fluentd
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.3.0
appVersion: "0.3.0"
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator


@ -4,13 +4,13 @@ Fluentd is a CNCF graduated project that provides the capability of log shipping
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/fluentd --namespace <namespace>
$ helm install <my-release> ot-helm/kibana --namespace <namespace>
```
Fluentd setup can be upgraded by using `helm upgrade` command:-
```shell
$ helm upgrade <my-release> ot-helm/fluentd --install --namespace <namespace>
$ helm upgrade <my-release> ot-helm/kibana --install --namespace <namespace>
```
For uninstalling the chart:-


@ -0,0 +1,16 @@
apiVersion: v2
name: ingress-management
description: A Helm chart to manage Ingress traffic
version: 0.1.0
appVersion: "1.0"
home: https://github.com/ot-container-kit/helm-charts
maintainers:
- name: sharvarikhamkar1304
keywords:
- ingress
- kong
- httpRoute
- kubernetes
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/helm-charts/main/static/helm-chart-logo.svg
sources:
- https://github.com/ot-container-kit/helm-charts


@ -0,0 +1,49 @@
# Ingress Management Helm Chart
A simple and reusable Helm chart to manage Kubernetes Gateway API HTTPRoutes for routing traffic to backend services.
This chart helps manage HTTPRoute resources to expose services using the Kubernetes Gateway API. You can customize host, path, service, and namespace via values.
## Homepage
[https://github.com/ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Maintainers
| Name | URL |
| ---------------- | --------------------------------------------- |
| sharvari-khamkar | [GitHub](https://github.com/sharvari-khamkar) |
## Source Code
[GitHub - ot-container-kit/helm-charts](https://github.com/ot-container-kit/helm-charts)
## Requirements
| Repository | Name | Version |
| ------------------------------------------------------------------------------------------------ | ---- | ------- |
| [https://ot-container-kit.github.io/helm-charts](https://ot-container-kit.github.io/helm-charts) | base | 0.1.0 |
## Values
| **Attribute** | **Scope** | **Example** | **Description** | **Default** |
|----------------|----------------|---------------------|-----------------------------------------------------------------------|-------------|
| `name` | Global | `"my-app"` | Name of the HTTPRoute and backend service (the app name) | `""` |
| `namespace` | Global | `"default"` | Kubernetes namespace where resources like HTTPRoute will be deployed | `""` |
| `host` | Routing | `"app.example.com"` | Hostname to expose the app | `""` |
| `path` | Routing | `"/api"` | Path under the host | `""` |
| `service.name` | Service Config | `"my-backend-svc"` | Name of the backend service to which traffic will be routed | `""` |
| `service.kind` | Service Config | `"Service"` | Kind of backend resource (Service by default) | `"Service"` |
| `service.port` | Service Config | `80` | Port on which the backend service listens | `80` |
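A minimal values file wiring these pieces together could look like this (hostname, gateway, and service names are placeholders):

```yaml
name: my-app                   # HTTPRoute (and app) name
parentRefs:
  - name: kong                 # Gateway that should accept this route
    namespace: default
hostnames:
  - app.example.com
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /
    backendRefs:
      - name: my-backend-svc   # backend Service receiving the traffic
        kind: Service
        port: 80
```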


@ -0,0 +1,46 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: {{ required "A valid 'name' is required!" .Values.name }}
{{- if .Values.labels }}
labels:
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
{{- if .Values.annotations }}
annotations:
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
spec:
{{- if .Values.parentRefs }}
parentRefs:
{{- range .Values.parentRefs }}
- name: {{ .name }}
{{- if .namespace }}
namespace: {{ .namespace }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.hostnames }}
hostnames:
{{- range .Values.hostnames }}
- "{{ . }}"
{{- end }}
{{- end }}
rules:
{{- range .Values.rules }}
- matches:
{{- range .matches }}
- path:
type: {{ .path.type }}
value: {{ .path.value | quote }}
{{- end }}
backendRefs:
{{- range .backendRefs }}
- name: {{ .name }}
kind: {{ .kind | default "Service" }}
port: {{ .port }}
{{- end }}
{{- end }}
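Given values like the README example above, this template renders an HTTPRoute roughly as follows (an illustrative sketch with placeholder names):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: kong
      namespace: default
  hostnames:
    - "app.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: "/"
      backendRefs:
        - name: my-backend-svc
          kind: Service
          port: 80
```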


@ -0,0 +1,60 @@
---
# charts/ingress-management/values.yaml
# -- Name of the HTTPRoute and backend service (typically the app name)
name: ""
# -- Labels to apply to the HTTPRoute metadata
labels:
app: ""
# -- Optional annotations to apply to the HTTPRoute resource
annotations: {}
# -- Reference to the Gateway (parentRefs)
parentRefs:
- name: ""
namespace: ""
# -- Hostnames to be matched in the HTTPRoute
hostnames:
- ""
# -- Routing rules for HTTPRoute
rules:
- matches:
- path:
type: PathPrefix
value: ""
backendRefs:
- name: ""
kind: Service
port: 80
# -----------------------------------------------------
# Example values.yaml File
# -----------------------------------------------------
# name: open-webui
# labels:
# app: open-webui
# annotations:
# konghq.com/protocols: https
# konghq.com/https-redirect-status-code: "301"
# parentRefs:
# - name: kong
# namespace: default
# hostnames:
# - bp-ai.opstree.dev
# rules:
# - matches:
# - path:
# type: PathPrefix
# value: /
# backendRefs:
# - name: open-webui
# kind: Service
# port: 80


@ -1,5 +1,5 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ template "k8s-vault-webhook.fullname" . }}


@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@ -0,0 +1,9 @@
apiVersion: v2
name: ot-karpenter
version: 0.3.0
maintainers:
- name: opstree
dependencies:
- name: karpenter
version: 1.1.1
repository: oci://public.ecr.aws/karpenter


@ -0,0 +1,78 @@
# Karpenter
Karpenter is an open-source Kubernetes cluster autoscaler built for efficiency and speed. This Helm chart installs Karpenter in your Kubernetes cluster and can be used to manage your node pools for dynamically scaling your infrastructure. This chart supports automated deployment of Karpenter, including the creation of NodePools, EC2NodeClasses, IAM roles, and other necessary resources.
To install Karpenter, use the following commands:
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install karpenter ot-helm/karpenter --namespace <namespace> --dependency-update --create-namespace
```
Adds the ot-helm repository to Helm, which contains the Karpenter Helm chart.
Installs the Karpenter chart from the ot-helm repository.
To upgrade the setup:
```shell
$ helm upgrade karpenter ot-helm/karpenter --install --namespace <namespace> --create-namespace
```
Upgrades an existing Karpenter release or installs it if it doesn't exist.
To uninstall the chart:
```shell
$ helm delete karpenter --namespace <namespace>
```
Deletes the Karpenter release from the specified namespace.
Replace <namespace> with the namespace where Karpenter is installed.
### Pre-Requisites
- Kubernetes => 1.18+
- Helm => 3.X
- Karpenter Operator => 0.1.0
- OpenID Connect (EKS) => https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
- IAM Roles for Karpenter
- Add tags to subnets and security groups
- Update aws-auth ConfigMap
### Parameters
| **Name** | **Value** | **Description** |
|--------------------------------------------------------------------|:-------------------------------|------------------------------------------------|
| `karpenter.settings.clusterName` | `my-cluster` | The name of your Kubernetes cluster |
| `karpenter.serviceAccount.annotations.eks.amazonaws.com/role-arn` | Required | IAM role ARN for Karpenter controller |
| `karpenter.controller.resources.requests.cpu` | `1` | CPU request for Karpenter controller |
| `karpenter.controller.resources.requests.memory` | `1Gi` | Memory request for Karpenter controller |
| `karpenter.controller.resources.limits.cpu` | `1` | CPU limit for Karpenter controller |
| `karpenter.controller.resources.limits.memory` | `1Gi` | Memory limit for Karpenter controller |
| `nodePools` | [] | List of NodePools to be created |
| `nodePools.name` | default-nodepool | Name of the NodePool |
| `nodePools.labels` - If not required can be omitted | {} | Labels for the NodePool |
| `nodePools.annotations` - If not required can be omitted | {} | Annotations for the NodePool |
| `nodePools.requirements` - Can be empty [] | [] | Node requirements like CPU, memory, etc. |
| `nodePools.taints` - If not required can be omitted | [] | Taints for the NodePool |
| `nodePools.expireAfter` | 720h | Expiration duration for idle NodePools |
| `nodePools.limits.cpu` - Required Field | "1000m" | CPU limit for the NodePool |
| `nodePools.limits.memory`- If not required can be omitted | "2Gi" | Memory limit for the NodePool |
| `nodePools.disruption.consolidationPolicy` - Required Field | WhenEmptyOrUnderutilized | Consolidation policy for underutilized nodes |
| `nodePools.disruption.consolidateAfter` - Required Field | 1m | Time before consolidating underutilized nodes |
### Notes:
- Refer to the Example folder for an example values.yaml file; a minimal sketch of the required fields also follows below
- Karpenter automatically creates and manages NodePools as part of the installation process.
- Make sure to configure the IAM roles required by Karpenter for it to interact with EC2 instances and manage resources along with all prerequisites.
- The chart will ensure the Karpenter controller and NodePools are deployed correctly with all required configurations.
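Boiled down to just the required fields from the parameters table, a minimal NodePool entry looks roughly like this (names are placeholders; the full example file below shows all optional fields):

```yaml
nodePools:
  - name: default-nodepool
    limits:
      cpu: "1000m"                                   # required
    disruption:
      consolidationPolicy: WhenEmptyOrUnderutilized  # required
      consolidateAfter: 1m                           # required
    nodeClass:                                       # rendered into nodeClassRef
      group: karpenter.k8s.aws
      kind: EC2NodeClass
      name: default
```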


@ -0,0 +1,82 @@
# The example below defines a NodePool for reference
# Custom values for your chart
clusterName: "" # Name of the EKS cluster (for identification in the chart and Karpenter)
awsPartition: "" # AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsAccountId: 3333 # AWS account ID where the resources will be provisioned
# Karpenter chart overrides
karpenter:
settings:
clusterName: "" # Cluster name for the Karpenter controller to identify and manage nodes in this cluster
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks # IAM role for Karpenter controller's access to AWS services
controller:
resources:
requests:
cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
memory: "1Gi" # Memory resource request for the Karpenter controller
limits:
cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
memory: "1Gi" # Memory resource limit for the Karpenter controller
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool, used for identification
limits: # Required Field
cpu: "1000"
memory: "1000Gi"
disruption: # Required Field
consolidationPolicy: WhenEmptyOrUnderutilized
consolidateAfter: 1m
requirements: # Node pool requirements for instance types and other properties
- key: kubernetes.io/arch
operator: In # Specifies the architecture for nodes
values:
- "amd64"
- key: kubernetes.io/os
operator: In # Specifies the OS type for nodes
values:
- "linux" # The node pool requires Linux OS
- key: karpenter.sh/capacity-type
operator: In # Specifies the capacity type for nodes
values:
- "on-demand"
- key: karpenter.k8s.aws/instance-category
operator: In # Specifies allowed EC2 instance categories
values:
- "t" # Instance category t (e.g., T2, T3)
- "m"
- "r"
minValues: 2 # Minimum number of instances of each category
- key: karpenter.k8s.aws/instance-family
operator: Exists # Specifies that instances in the family must exist (e.g., m5, r5)
minValues: 5 # Minimum number of instances in the specified family
- key: karpenter.k8s.aws/instance-family
operator: In # Specifies that the instance family must match one of the listed values
values:
- "m5"
- "m5d"
- "c5"
- "c5d"
- "c4"
- "r4"
minValues: 3 # Minimum number of instances from these families
- key: node.kubernetes.io/instance-type
operator: Exists # Ensures that the node pool has specific instance types
minValues: 10 # Minimum number of instances of the specified types
- key: karpenter.k8s.aws/instance-generation
operator: Gt # Specifies that the instance generation must be greater than a particular value
values:
- "2" # Instance generation must be greater than 2 (i.e., newer generation)
nodeClass:
group: karpenter.k8s.aws # Node class group for Karpenter
kind: EC2NodeClass # Kind of node class, EC2NodeClass indicates AWS EC2 instances
name: default # The name of the node class (default for this pool)


@ -0,0 +1,33 @@
{{- range .Values.ec2NodeClasses }}
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
name: {{ .name }}
spec:
amiFamily: {{ .amiFamily | default "AL2" }}
role: {{ .role }}
{{- if .detailedMonitoring }}
detailedMonitoring: {{ .detailedMonitoring }}
{{- end }}
subnetSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
securityGroupSelectorTerms:
- tags:
karpenter.sh/discovery: "{{ $.Values.clusterName }}"
amiSelectorTerms:
- id: "{{ .amiSelector.arm }}"
- id: "{{ .amiSelector.amd }}"
{{- if .amiSelector.gpu }}
- id: "{{ .amiSelector.gpu }}"
{{- end }}
{{- if .amiSelector.name }}
- name: "{{ .amiSelector.name }}"
{{- end }}
{{- if .tags }}
tags:
{{- range $key, $value := .tags }}
{{ $key }}: "{{ $value }}"
{{- end }}
{{- end }}
{{- end }}
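The template above consumes values entries of the following shape (a sketch; the AMI IDs and role name are placeholders, and note that `amiSelector.arm` and `amiSelector.amd` are rendered unconditionally, so both should be set):

```yaml
clusterName: demo-eks                  # used in the karpenter.sh/discovery selector tags
ec2NodeClasses:
  - name: default
    amiFamily: AL2
    role: KarpenterNodeRole-demo-eks   # node IAM role name, not the ARN
    amiSelector:
      arm: ami-0aaaaaaaaaaaaaaaa       # placeholder arm64 AMI ID
      amd: ami-0bbbbbbbbbbbbbbbb       # placeholder x86_64 AMI ID
    detailedMonitoring: true
    tags:
      team: engineering
```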


@ -0,0 +1,73 @@
{{- range .Values.nodePools }}
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
name: {{ .name }}
spec:
template:
metadata:
labels:
{{- if .labels }}
{{- range $key, $value := .labels }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty labels object if no labels are defined
{{- end }}
annotations:
{{- if .annotations }}
{{- range $key, $value := .annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
{} # Empty annotations object if no annotations are defined
{{- end }}
spec:
requirements:
{{- if .requirements }}
{{- if gt (len .requirements) 0 }}
{{- range .requirements }}
- key: {{ .key }}
operator: {{ .operator }}
values:
{{ toYaml .values | indent 12 }}
{{- if .minValues }}
minValues: {{ .minValues }}
{{- end }}
{{- end }}
{{- else }}
[] # Render an empty array explicitly when no requirements are defined
{{- end }}
{{- else }}
[] # Ensure that an empty array is rendered even if the user does not specify requirements
{{- end }}
taints:
{{- if .taints }}
{{- range .taints }}
- key: {{ .key }}
{{- if .value }}
value: {{ .value }}
{{- end }}
effect: {{ .effect }}
{{- end }}
{{- else }}
[] # Empty taints array if no taints are defined
{{- end }}
nodeClassRef:
group: {{ .nodeClass.group | default "karpenter.k8s.aws" }}
kind: {{ .nodeClass.kind | default "EC2NodeClass" }}
name: {{ .nodeClass.name }}
expireAfter: {{ .expireAfter | default "720h" }}
limits:
{{- if .limits.cpu }}
cpu: {{ .limits.cpu }}
{{- end }}
{{- if .limits.memory }}
memory: {{ .limits.memory }}
{{- end }}
disruption:
consolidationPolicy: {{ .disruption.consolidationPolicy | default "WhenEmptyOrUnderutilized" }}
consolidateAfter: {{ .disruption.consolidateAfter | default "1m" }}
{{- end }}


@ -0,0 +1,110 @@
# Custom values for your chart
# Name of the EKS cluster (for identification in the chart and Karpenter)
clusterName: ""
# AWS partition, default is 'aws' (used in multi-region or partitioned environments)
awsPartition: ""
# AWS account ID where the resources will be provisioned
awsAccountId: 3333
# Karpenter chart overrides
karpenter:
settings:
# Cluster name for the Karpenter controller to identify and manage nodes in this cluster
clusterName: ""
# Name of SQS queue for handling EC2 instance interruptions
# interruptionQueue: ""
serviceAccount:
annotations:
# IAM role ARN for Karpenter controller's access to AWS services
eks.amazonaws.com/role-arn: arn:aws:iam::3333:role/KarpenterControllerRole-demo-eks
# Karpenter controller resources can be customized in this section below
# controller:
# resources:
# requests:
# cpu: "1" # CPU resource request for the Karpenter controller (minimum resources Karpenter will be allocated)
# memory: "1Gi" # Memory resource request for the Karpenter controller
# limits:
# cpu: "1" # CPU resource limit for the Karpenter controller (maximum resources Karpenter can consume)
# memory: "1Gi" # Memory resource limit for the Karpenter controller
# EC2NodeClasses define the EC2 instance classes that Karpenter can use
ec2NodeClasses:
- name: default
# Amazon Linux 2 AMI family
amiFamily: AL2
# "KarpenterNodeRole-my-eks-cluster" # Name of karpenter Node Role ( NOT THE ARN )
role:
amiSelector:
# To get the AMI ID, run the commands below in the AWS CLI and replace the AMI ID in the values.yaml file
# ARM_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text)"
arm:
# AMD_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id --query Parameter.Value --output text)"
amd:
# GPU_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text)"
# gpu: ami-gpu-id
# amazon-eks-node-1.27-* # Optional: EKS Node AMI Name
# name:
# Optional, propagates tags to underlying EC2 resources
# tags:
# environment: production
# team: "engineering"
# owner: "admin@company.com"
# Enable detailed monitoring for the EC2 instance
# detailedMonitoring: true
# NodePools define groups of nodes with specific requirements
nodePools:
- name: default # Name of the node pool, preset here is set to default nodepool
requirements: # List of node requirements for scheduling
- key: kubernetes.io/arch # Architecture requirement (e.g., amd64, arm64)
operator: In # Only nodes with the specified architecture will be selected
values:
- "amd64" # Specifies that the node should have an amd64 architecture
- key: kubernetes.io/os # OS requirement (e.g., linux, windows)
operator: In # Only nodes with the specified OS will be selected
values:
- "linux" # Specifies that the node should run Linux
- key: karpenter.sh/capacity-type # Defines the instance's capacity type
operator: In # Only nodes with the specified capacity type will be selected
values:
- "on-demand" # Specifies that the node should be an on-demand instance, can be "spot" as well
- key: karpenter.k8s.aws/instance-category # Defines the instance category (e.g., t, m, r)
operator: In # Only nodes with the specified instance category will be selected
values:
- "t" # These can be customized as per need
- "m"
- "r"
# - key: karpenter.k8s.aws/instance-family # Uncomment to define the instance family (e.g., t3, m5, r5)
# operator: In
# values:
# - "t3a"
- key: karpenter.k8s.aws/instance-generation # Instance generation requirement
operator: Gt # Greater than the specified value
values:
- "2" # Specifies that only instance generations greater than 2 are allowed
nodeClass: # Defines the node class, which is linked to EC2NodeClass
group: karpenter.k8s.aws # Group of the EC2NodeClass
kind: EC2NodeClass # Type of node class, which is EC2NodeClass in this case
name: default # Name of the EC2NodeClass to use for the node pool (name of the EC2 instance class)
expireAfter: 720h # Maximum lifetime of the node pool before it expires (720 hours = 30 days)
limits: # Resource limits for the node pool
cpu: "1000" # Maximum CPU limit for the node pool
memory: "1Gi"
disruption: # Policy for handling disruption in the node pool
consolidationPolicy: WhenEmptyOrUnderutilized # Consolidate nodes when they are empty or underutilized
consolidateAfter: 1m # Time after which consolidation will occur, in this case, 1 minute
# Uncomment Below annotations key ( next 3 Lines ) if you want to use annotations
# annotations: # Annotations are key-value pairs that provide additional metadata for the node pool
# example.com/owner: "my-team" # An example annotation that associates the node pool with a team
# example.com/maintainer: "admin@company.com" # Example annotation for the maintainer's contact information
# Uncomment below taint key ( next 4 Lines ) if you want to use taints
# taints: # Taints are used to control which pods can be scheduled on the node pool
# - key: "example.com/special-taint" # Taint key that identifies the taint
# value: "special-value" # Value associated with the taint
# effect: "NoExecute" # Effect of the taint. In this case, NoExecute means pods won't be scheduled on tainted nodes
# Comment the labels key below if you don't want to use labels
labels: # Labels are key-value pairs used for categorizing the node pool
environment: production # Label indicating that this node pool is for production use
team: "engineering" # Label associating the node pool with the engineering team


@ -7,8 +7,8 @@ maintainers:
name: kibana
sources:
- https://github.com/ot-container-kit/logging-operator
version: 0.3.0
appVersion: "0.3.0"
version: 0.4.0
appVersion: "0.4.0"
home: https://github.com/ot-container-kit/logging-operator
keywords:
- operator


@ -28,7 +28,7 @@ $ helm delete <my-release> --namespace <namespace>
### Parameters
| **Name** | **Value** | **Description** |
|----------------------------------|-----------------------------------|------------------------------------------------------------|
|-----------------------------------|-----------------------------------|------------------------------------------------------------|
| replicas | 1 | Number of deployment replicas for kibana |
| esCluster.esURL | https://elasticsearch-master:9200 | Hostname or URL of the elasticsearch server |
| esCluster.esVersion | 7.17.0 | Version of kibana, matching the elasticsearch version |
@ -39,4 +39,8 @@ $ helm delete <my-release> --namespace <namespace>
| tolerations | {} | Tolerations and taints for kibana visualization pods |
| esSecurity.enabled | true | To enable the xpack security of kibana |
| esSecurity.elasticSearchPassword | elasticsearch-password | Credentials for elasticsearch authentication |
| externalService.enabled | false | To create a LoadBalancer service of kibana |
| ingress.enabled | false | To enable the ingress resource for kibana |
| ingress.host | kibana.opstree.com | Hostname or URL on which kibana will be exposed |
| ingress.tls.enabled | false | To enable SSL on kibana ingress resource |
| ingress.tls.secret | tls-secret | SSL certificate for kibana ingress resource |
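Combined, the new exposure options from the table can be set like this (hostname and secret name are examples mirroring the chart defaults):

```yaml
externalService:
  enabled: false           # set to true for a LoadBalancer Service instead
ingress:
  enabled: true
  host: kibana.opstree.com
  tls:
    enabled: true
    secret: tls-secret     # Secret holding the TLS certificate
```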


@ -1,17 +1,24 @@
{{- if eq .Values.externalConfig.enabled true }}
{{- if (eq .Values.externalService.enabled true) }}
---
apiVersion: v1
kind: ConfigMap
kind: Service
metadata:
name: {{ .Release.Name }}-ext-config
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: middleware
data:
redis-external.conf: |
{{ .Values.externalConfig.data | nindent 4 }}
app.kubernetes.io/component: visualization
name: {{ .Release.Name }}
spec:
ports:
- name: http
port: 5601
protocol: TCP
targetPort: 5601
selector:
app: {{ .Release.Name }}
service: kibana
type: LoadBalancer
{{- end }}


@ -0,0 +1,32 @@
{{- if (eq .Values.ingress.enabled true) }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: visualization
name: {{ .Release.Name }}
spec:
{{- if (eq .Values.ingress.tls.enabled true) }}
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: {{ .Values.ingress.tls.secret }}
{{- end }}
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}
port:
number: 5601
{{- end }}


@ -2,7 +2,14 @@
apiVersion: logging.logging.opstreelabs.in/v1beta1
kind: Kibana
metadata:
name: kibana
name: {{ .Release.Name }}
labels:
app.kubernetes.io/name: {{ .Release.Name }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: visualization
spec:
replicas: {{ .Values.replicas }}
esCluster:


@ -45,4 +45,14 @@ tolerations: []
esSecurity:
enabled: true
elasticSearchPassword: elasticsearch-password
elasticSearchPassword: elasticsearch-sa-token
externalService:
enabled: false
ingress:
enabled: false
host: kibana.opstree.com
tls:
enabled: true
secret: tls-secret


@ -1,6 +1,6 @@
---
apiVersion: v2
appVersion: "0.3.0"
appVersion: "0.4.0"
description: Helm chart to deploy and manage EFK stack in Kubernetes
engine: gotpl
maintainers:
@ -9,7 +9,7 @@ maintainers:
name: logging-operator
sources:
- https://github.com/OT-CONTAINER-KIT/logging-operator
version: 0.3.0
version: 0.4.0
home: https://github.com/OT-CONTAINER-KIT/logging-operator
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/static/logging-operator-logo.svg
keywords:


@ -3630,6 +3630,8 @@ spec:
type: string
type: object
type: object
esKeystoreSecret:
type: string
esMaster:
default:
jvmMaxMemory: 1g
@ -4826,6 +4828,10 @@ spec:
type: string
type: object
type: object
esPlugins:
items:
type: string
type: array
esSecurity:
description: Security defines the security config of Elasticsearch
properties:


@ -1,7 +1,7 @@
---
loggingOperator:
imageName: quay.io/opstree/logging-operator
imageTag: v0.3.1
imageTag: v0.4.0
imagePullPolicy: Always
port: 8081

charts/loki/Chart.yaml

@ -0,0 +1,27 @@
apiVersion: v2
name: loki
description: A Helm chart for loki
type: application
version: 1.0.1
appVersion: 1.0.0
dependencies:
- name: loki-distributed
version: 0.76.1
repository: https://grafana.github.io/helm-charts
alias: distributed
tags:
- logging
condition: distributed.enabled
- name: promtail
version: 6.16.4
repository: https://grafana.github.io/helm-charts
alias: promtail
tags:
- logging
- name: loki
version: 6.7.3
repository: https://grafana.github.io/helm-charts
alias: standalone
tags:
- logging
condition: standalone.enabled
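Since the `distributed` and `standalone` aliases are gated by conditions, switching deployment modes is a values-level toggle; for example, to run the distributed variant instead of the default single-binary one:

```yaml
distributed:
  enabled: true    # deploy the loki-distributed dependency
standalone:
  enabled: false   # disable the single-binary loki dependency
```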


@ -0,0 +1,501 @@
logging:
gateway:
# image:
# registry:
# repository:
# tag: 1.20.2-alpine
enabled: true
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
memory: 500Mi
cpu: 200m
limits:
memory: 500Mi
cpu: 200m
nginxConfig:
file: |
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
client_max_body_size 5M;
proxy_http_version 1.1;
default_type application/octet-stream;
log_format {{ .Values.gateway.nginxConfig.logFormat }}
{{- if .Values.gateway.verboseLogging }}
access_log /dev/stderr main;
{{- else }}
map $status $loggable {
~^[23] 0;
default 1;
}
access_log /dev/stderr main if=$loggable;
{{- end }}
sendfile on;
tcp_nopush on;
{{- if .Values.gateway.nginxConfig.resolver }}
resolver {{ .Values.gateway.nginxConfig.resolver }};
{{- else }}
resolver {{ .Values.global.dnsService }}.{{ .Values.global.dnsNamespace }}.svc.{{ .Values.global.clusterDomain }};
{{- end }}
{{- with .Values.gateway.nginxConfig.httpSnippet }}
{{ . | nindent 2 }}
{{- end }}
server {
listen 8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
auth_basic_user_file /etc/nginx/secrets/.htpasswd;
{{- end }}
location = / {
return 200 'OK';
auth_basic off;
access_log off;
}
location = /api/prom/push {
set $api_prom_push_backend http://{{ include "loki.distributorFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_push_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /api/prom/tail {
set $api_prom_tail_backend http://{{ include "loki.querierFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_tail_backend:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
# Ruler
location ~ /prometheus/api/v1/alerts.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /prometheus/api/v1/rules.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/rules.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/alerts.* {
proxy_pass http://{{ include "loki.rulerFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}:3100$request_uri;
}
location ~ /api/prom/.* {
set $api_prom_backend http://{{ include "loki.queryFrontendFullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $api_prom_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /loki/api/v1/push {
set $loki_api_v1_push_backend http://{{ include "loki.distributorFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_v1_push_backend:3100$request_uri;
proxy_http_version 1.1;
}
location = /loki/api/v1/tail {
set $loki_api_v1_tail_backend http://{{ include "loki.querierFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_v1_tail_backend:3100$request_uri;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
location ~ /loki/api/.* {
set $loki_api_backend http://{{ include "loki.queryFrontendFullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }};
proxy_pass $loki_api_backend:3100$request_uri;
proxy_http_version 1.1;
}
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
{{- end }}
}
}
loki:
# image:
# registry:
# repository: grafana/loki
# tag: 2.9.2
podAnnotations:
sidecar.istio.io/inject: "false"
storageConfig:
aws:
s3: http://minio:minio123@monitoring-minio.monitoring.svc:9000/loki
s3forcepathstyle: true
region: us-east-1
# aws:
# region: ap-south-1
# bucketnames: jm-prod-loki-app-logs
# s3forcepathstyle: false
# sse_encryption: true
boltdb_shipper:
shared_store: s3
cache_ttl: 24h
schemaConfig:
configs:
- from: "2020-09-07"
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: loki_index_
period: 24h
config: |
auth_enabled: false
server:
{{- toYaml .Values.loki.server | nindent 6 }}
common:
compactor_address: http://{{ include "loki.compactorFullname" . }}:3100
distributor:
ring:
kvstore:
store: memberlist
memberlist:
join_members:
- {{ include "loki.fullname" . }}-memberlist
ingester_client:
grpc_client_config:
grpc_compression: gzip
ingester:
lifecycler:
ring:
kvstore:
store: memberlist
replication_factor: 1
chunk_idle_period: 30m
chunk_block_size: 262144
chunk_encoding: snappy
chunk_retain_period: 1m
max_transfer_retries: 0
wal:
dir: /var/loki/wal
limits_config:
retention_period: 72h
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
max_cache_freshness_per_query: 10m
split_queries_by_interval: 15m
# for big logs tune
per_stream_rate_limit: 512M
per_stream_rate_limit_burst: 1024M
cardinality_limit: 200000
ingestion_burst_size_mb: 1000
ingestion_rate_mb: 10000
max_entries_limit_per_query: 1000000
max_label_value_length: 20480
max_label_name_length: 10240
max_label_names_per_series: 300
{{- if .Values.loki.schemaConfig}}
schema_config:
{{- toYaml .Values.loki.schemaConfig | nindent 2}}
{{- end}}
{{- if .Values.loki.storageConfig}}
storage_config:
{{- if .Values.indexGateway.enabled}}
{{- $indexGatewayClient := dict "server_address" (printf "dns:///%s:9095" (include "loki.indexGatewayFullname" .)) }}
{{- $_ := set .Values.loki.storageConfig.boltdb_shipper "index_gateway_client" $indexGatewayClient }}
{{- end}}
{{- toYaml .Values.loki.storageConfig | nindent 2}}
{{- if .Values.memcachedIndexQueries.enabled }}
index_queries_cache_config:
memcached_client:
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexQueriesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
consistent_hash: true
{{- end}}
{{- end}}
runtime_config:
file: /var/{{ include "loki.name" . }}-runtime/runtime.yaml
chunk_store_config:
max_look_back_period: 0s
{{- if .Values.memcachedChunks.enabled }}
chunk_cache_config:
embedded_cache:
enabled: false
memcached_client:
consistent_hash: true
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedChunksFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
{{- end }}
{{- if .Values.memcachedIndexWrites.enabled }}
write_dedupe_cache_config:
memcached_client:
consistent_hash: true
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexWritesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
{{- end }}
table_manager:
retention_deletes_enabled: false
retention_period: 0s
query_range:
align_queries_with_step: true
max_retries: 5
cache_results: true
results_cache:
cache:
{{- if .Values.memcachedFrontend.enabled }}
memcached_client:
addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedFrontendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
consistent_hash: true
{{- else }}
embedded_cache:
enabled: true
ttl: 24h
{{- end }}
frontend_worker:
{{- if .Values.queryScheduler.enabled }}
scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
{{- else }}
frontend_address: {{ include "loki.queryFrontendFullname" . }}-headless:9095
{{- end }}
frontend:
log_queries_longer_than: 5s
compress_responses: true
{{- if .Values.queryScheduler.enabled }}
scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
{{- end }}
tail_proxy_url: http://{{ include "loki.querierFullname" . }}:3100
compactor:
working_directory: /tmp/loki/compactor
shared_store: s3
compaction_interval: 2m
retention_enabled: false
ruler:
storage:
type: local
local:
directory: /etc/loki/rules
ring:
kvstore:
store: memberlist
rule_path: /tmp/loki/scratch
alertmanager_url: https://alertmanager.xx
external_url: https://alertmanager.xx
serviceAccount:
create: true
name: loki-sa
imagePullSecrets: []
labels: {}
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::913108190184:role/jm-prod-fluent
automountServiceAccountToken: true
compactor:
enabled: true
retention_enabled: true
shared_store: s3
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
queryFrontend:
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
memory: 500Mi
cpu: 200m
limits:
memory: 500Mi
cpu: 200m
distributor:
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
ingester:
replicas: 2
maxUnavailable: 1
persistence:
enabled: true
claims:
- name: data
size: 1Gi
# storageClass: encrypted-gp3
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
# affinity: ""
querier:
kind: Deployment
replicas: 1
maxUnavailable: 1
# persistence:
# enabled: true
# size: 10Gi
# storageClass: encrypted-gp3
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 2
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedChunks:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedFrontend:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
memcachedIndexQueries:
enabled: true
replicas: 1
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
extraArgs:
- -m 2048
- -I 32m
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
indexGateway:
enabled: true
replicas: 2
maxUnavailable: 1
persistence:
enabled: true
size: 1Gi
# storageClass: encrypted-gp3
resources:
requests:
cpu: 200m
memory: 500Mi
limits:
cpu: 200m
memory: 500Mi
# serviceMonitor:
# enabled: true
# namespace: logging
# namespaceSelector:
# any: true
# labels:
# prometheus: kube
# prometheusRule:
# enabled: false
# namespace: logging
# annotations: {}
# labels:
# app: loki-kube-prometheus
# prometheus: kube
# groups: []
promtail:
config:
logLevel: info
clients:
- url: http://loki-logging-gateway.logging.svc.cluster.local/loki/api/v1/push


@ -0,0 +1,86 @@
logging:
loki:
storage:
type: filesystem
auth_enabled: false
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
lokiCanary:
enabled: false
test:
enabled: false
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 2
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
allocatedMemory: 1024
# Enable minio for storage
minio:
enabled: false
persistence:
size: 10Gi
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
promtail:
config:
logLevel: info
clients:
- url: http://logging-gateway/loki/api/v1/push

charts/loki/values.yaml

@ -0,0 +1,90 @@
distributed:
enabled: false
standalone:
enabled: true
loki:
storage:
type: filesystem
auth_enabled: false
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
lokiCanary:
enabled: false
test:
enabled: false
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 2
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
allocatedMemory: 1024
# Enable minio for storage
minio:
enabled: false
persistence:
size: 10Gi
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
promtail:
config:
logLevel: info
clients:
- url: http://logging-gateway/loki/api/v1/push


@ -0,0 +1,10 @@
apiVersion: v2
name: microservice
description: Basic helm chart for deploying microservices on kubernetes with best practices
type: application
version: 0.1.8
appVersion: "0.1.2"
maintainers:
- name: ashwani-opstree
- name: tripathishikha1
- name: khushimalhoz


@ -0,0 +1,45 @@
# microservice
Basic helm chart for deploying microservices on kubernetes with best practices
![Version: 0.1.2](https://img.shields.io/badge/Version-0.1.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.1.2](https://img.shields.io/badge/AppVersion-0.1.2-informational?style=flat-square)
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Ashwani Singh | <ashwani.singh@opstree.com> | |
| Shikha Tripathi | | |
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install my-release microservice/
```
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| deployment | object | `{"affinity":{},"annotations":{},"environment":{},"image":{"name":"","pullPolicy":"IfNotPresent","tag":""},"livenessProbe":{"failureThreshold":5,"initialDelaySeconds":250,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"nodeSelector":{},"readinessProbe":{"failureThreshold":5,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{},"tolerations":[],"volumeMounts":[],"volumes":{"configMaps":null,"enabled":true,"pvc":{"accessModes":["ReadWriteOnce"],"class":"default","enabled":false,"existing_claim":false,"mountPath":"/pv","name":"pvc","size":"1G"}}}` | Object that configures Deployment instance |
| deployment.image | object | `{"name":"","pullPolicy":"IfNotPresent","tag":""}` | Override default container image format |
| global | object | `{"environment":{},"fullnameOverride":"","imagePullSecrets":[],"nameOverride":"","namespace":"default","replicaCount":1}` | global variables |
| hpa.enabled | bool | `true` | |
| hpa.maxReplicas | int | `1` | |
| hpa.minReplicas | int | `1` | |
| hpa.targetCPU | int | `80` | |
| hpa.targetMemory | int | `80` | |
| kubeVersion | string | `""` | |
| service.annotations | object | `{}` | |
| service.specs[0].name | string | `"http"` | |
| service.specs[0].port | int | `80` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.automount | bool | `true` | |
| serviceAccount.create | bool | `false` | |
| serviceAccount.name | string | `""` | |
> **_NOTE:_** Please find the sample helm values yaml in the example repository.


@ -0,0 +1,22 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.maintainersSection" . }}
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release microservice/
```
{{/* {{ template "chart.requirementsSection" . }} */}}
{{ template "chart.valuesSection" . }}
> **_NOTE:_** Please find the sample helm values yaml in the example repository.
{{/* {{ template "helm-docs.versionFooter" . }} */}}


@ -0,0 +1,2 @@
name=opstree
address=opstreesolution


@ -0,0 +1,43 @@
global:
namespace: "demo-dev"
fullnameOverride: "webapp"
deployment:
image:
name: nginx
tag: latest
pullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: "/"
port: http
readinessProbe:
httpGet:
path: "/"
port: http
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 500Mi
cpu: 500m
volumes:
enabled: true
configMaps:
- name: index
mountPath: /usr/share/nginx/html
data:
index.html: |
Hello! Opstree
topologySpreadConstraints:
whenUnsatisfiable: "DoNotSchedule"
# serviceAccount:
# create: true
# annotations: "aws arn link"
# serviceAccount:
# name: "myserviceaccount"


@ -0,0 +1,5 @@
You have deployed the following release: {{ include "microservice.fullname" . }}.
To get further information, you can run the commands:
$ helm status {{ include "microservice.fullname" . }}
$ helm get all {{ include "microservice.fullname" . }}


@ -0,0 +1,36 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Return the target Kubernetes version
*/}}
{{- define "microservice.capabilities.kubeVersion" -}}
{{- default (default .Capabilities.KubeVersion.Version .Values.kubeVersion) ((.Values.global).kubeVersion) -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for Horizontal Pod Autoscaler.
*/}}
{{- define "microservice.capabilities.hpa.apiVersion" -}}
{{- $kubeVersion := include "microservice.capabilities.kubeVersion" .context -}}
{{- if and (not (empty $kubeVersion)) (semverCompare "<1.23-0" $kubeVersion) -}}
{{- if .beta2 -}}
{{- print "autoscaling/v2beta2" -}}
{{- else -}}
{{- print "autoscaling/v2beta1" -}}
{{- end -}}
{{- else -}}
{{- print "autoscaling/v2" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for deployment.
*/}}
{{- define "microservice.capabilities.deployment.apiVersion" -}}
{{- $kubeVersion := include "microservice.capabilities.kubeVersion" . -}}
{{- if and (not (empty $kubeVersion)) (semverCompare "<1.14-0" $kubeVersion) -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,54 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Create a default app name.
It will use the chart name unless global.nameOverride is set.
*/}}
{{- define "microservice.name" -}}
{{- default .Chart.Name .Values.global.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "microservice.fullname" -}}
{{- if .Values.global.fullnameOverride -}}
{{- .Values.global.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.global.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "microservice.labels" -}}
app: {{ include "microservice.fullname" . }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "microservice.selectorLabels" -}}
app: {{ include "microservice.fullname" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "microservice.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "microservice.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@ -0,0 +1,30 @@
#ConfigMap mounted as volumes
{{- if .Values.deployment.volumes.configMaps }}
{{- if .Values.deployment.volumes.enabled }}
{{ $header := .Values.deployment.volumes.configFileCommonHeader | default "" }}
{{ $root := . }}
{{ range $cm := .Values.deployment.volumes.configMaps}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "microservice.fullname" $root }}-{{ $cm.name }}-cm
namespace: {{ $root.Values.global.namespace | quote }}
data:
{{- if $cm.data }}
{{- range $filename, $content := $cm.data }}
# property-like keys; each key maps to a simple value
{{ $filename }}: |-
{{ $content | toString | indent 4 }}
{{- end }}
{{- end }}
{{- if $cm.files }}
{{- range $file := $cm.files }}
{{ $file.destination }}: |
{{ $header | toString | indent 4 }}
{{ $root.Files.Get $file.source }}
{{- end}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
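For orientation, with the sample values shown earlier (fullnameOverride `webapp`, namespace `demo-dev`, one configMap entry named `index`), this template should render roughly the following manifest; a sketch, not verified output:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-index-cm
  namespace: "demo-dev"
data:
  index.html: |-
    Hello! Opstree
```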


@ -0,0 +1,139 @@
{{ $root := . }}
---
apiVersion: {{ include "microservice.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ include "microservice.fullname" . }}-app
namespace: {{ .Values.global.namespace | quote }}
{{- if .Values.deployment.annotations }}
annotations:
{{- range $key, $value := .Values.deployment.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.global.replicaCount }}
{{- if .Values.deployment.strategy }}
strategy:
{{- toYaml .Values.deployment.strategy | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "microservice.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "microservice.selectorLabels" . | nindent 8 }}
{{- if .Values.deployment.podAnnotations }}
annotations:
{{- range $key, $value := .Values.deployment.podAnnotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.serviceAccount.create }}
serviceAccountName: {{ include "microservice.serviceAccountName" . }}-sa
{{- else if .Values.serviceAccount.name }}
serviceAccountName: {{ .Values.serviceAccount.name }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
containers:
- name: {{ include "microservice.fullname" . }}
image: "{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}"
imagePullPolicy: {{ .Values.deployment.image.pullPolicy }}
{{- if .Values.deployment.command }}
command: {{ .Values.deployment.command }}
{{- end }}
{{- if .Values.deployment.args }}
args: {{ .Values.deployment.args }}
{{- end }}
ports:
{{- range .Values.service.specs}}
- name: {{ .name }}
containerPort: {{ .targetPort | default .port }}
protocol: {{ .protocol | default "TCP" }}
{{- end }}
{{- if (merge .Values.global.environment .Values.deployment.environment) }}
env:
{{- range $name, $value := merge .Values.global.environment .Values.deployment.environment}}
- name: {{ $name | quote}}
value: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.livenessProbe.httpGet }}
livenessProbe:
{{- toYaml .Values.deployment.livenessProbe | nindent 12 }}
{{- end }}
{{- if and .Values.deployment.healthProbes.enabled .Values.deployment.readinessProbe.httpGet }}
readinessProbe:
{{- toYaml .Values.deployment.readinessProbe | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.deployment.resources | nindent 12 }}
{{- if .Values.deployment.volumes.enabled }}
volumeMounts:
{{- range $conf := .Values.deployment.volumes.configMaps }}
- mountPath: {{ $conf.mountPath }}
name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
{{- end }}
{{- if .Values.deployment.volumes.pvc.enabled }}
- mountPath: {{ .Values.deployment.volumes.pvc.mountPath }}
name: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}-volume
{{- end }}
{{- end }}
{{- if .Values.deployment.volumes.enabled }}
volumes:
{{- range $conf := .Values.deployment.volumes.configMaps }}
- name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
configMap:
name: {{ include "microservice.fullname" $root }}-{{ $conf.name }}-cm
{{- end }}
{{- if .Values.deployment.volumes.pvc.enabled }}
- name: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}-volume
persistentVolumeClaim:
claimName: {{ .Values.deployment.volumes.pvc.existing_claim | default .Values.deployment.volumes.pvc.name }}
{{- end }}
{{- end }}
{{- with .Values.deployment.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if and .Values.deployment.affinity.enabled (or .Values.deployment.affinity.preferred.enabled .Values.deployment.affinity.required.enabled) }}
affinity:
podAntiAffinity:
{{- if .Values.deployment.affinity.preferred.enabled }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
topologyKey: {{ .Values.deployment.affinity.topologyKey }}
{{- end }}
{{- if and .Values.deployment.affinity.required.enabled (not .Values.deployment.affinity.preferred.enabled) }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
topologyKey: {{ .Values.deployment.affinity.topologyKey }}
{{- end }}
{{- end }}
{{- if .Values.deployment.topologySpreadConstraints.enabled }}
topologySpreadConstraints:
- maxSkew: 1
topologyKey: {{ .Values.deployment.topologySpreadConstraints.topologyKey }}
whenUnsatisfiable: "{{ .Values.deployment.topologySpreadConstraints.whenUnsatisfiable }}"
labelSelector:
matchLabels:
app: {{ include "microservice.fullname" . }}
{{- if eq .Values.deployment.topologySpreadConstraints.whenUnsatisfiable "DoNotSchedule" }}
minDomains: 2
{{- end }}
{{- end }}


@ -0,0 +1,41 @@
{{- if .Values.hpa.enabled }}
apiVersion: {{ include "microservice.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "microservice.fullname" . }}-hpa
namespace: {{ .Values.global.namespace | quote }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: {{ include "microservice.capabilities.deployment.apiVersion" . }}
kind: Deployment
name: {{ include "microservice.fullname" . }}-app
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
{{- if .Values.hpa.targetMemory }}
- type: Resource
resource:
name: memory
{{- if semverCompare "<1.23-0" (include "microservice.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.hpa.targetMemory }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.hpa.targetMemory }}
{{- end }}
{{- end }}
{{- if .Values.hpa.targetCPU }}
- type: Resource
resource:
name: cpu
{{- if semverCompare "<1.23-0" (include "microservice.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.hpa.targetCPU }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.hpa.targetCPU }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,21 @@
{{- if .Values.deployment.volumes.pvc.enabled }}
{{- if .Values.deployment.volumes.pvc.existing_claim -}}
{{- else -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.deployment.volumes.pvc.name }}
namespace: {{ .Values.global.namespace | quote }}
spec:
{{- if .Values.deployment.volumes.pvc.class }}
storageClassName: {{ .Values.deployment.volumes.pvc.class }}
{{- end }}
accessModes:
{{- range $accessMode := .Values.deployment.volumes.pvc.accessModes }}
- {{ $accessMode }}
{{- end }}
resources:
requests:
storage: {{ .Values.deployment.volumes.pvc.size }}
{{- end }}
{{- end }}
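This claim is only rendered when persistence is switched on and no existing claim is supplied; a minimal values sketch (the `data` name and `5G` size are illustrative, not chart defaults):

```yaml
deployment:
  volumes:
    enabled: true
    pvc:
      enabled: true
      existing_claim: false
      name: data
      class: "default"
      size: 5G
      mountPath: /pv
      accessModes:
        - ReadWriteOnce
```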


@ -0,0 +1,34 @@
{{- $root:= . }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "microservice.fullname" . }}-svc
namespace: {{ .Values.global.namespace | quote }}
{{- if .Values.service.annotations }}
annotations:
{{- range $key, $value := .Values.service.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
{{- include "microservice.selectorLabels" . | nindent 4 }}
ports:
{{- range $spec := .Values.service.specs }}
- name: {{ $spec.name }}
port: {{ $spec.port }}
protocol: {{ $spec.protocol | default "TCP" }}
{{- if $spec.targetPort }}
targetPort: {{ $spec.targetPort }}
{{- else }}
targetPort: {{ $spec.name }}
{{- end}}
{{- if $spec.nodePort }}
nodePort: {{ $spec.nodePort }}
{{- end }}
{{- end -}}


@ -0,0 +1,14 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "microservice.serviceAccountName" . }}-sa
namespace: {{ .Values.global.namespace | quote }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}


@ -0,0 +1,155 @@
# -- global variables
global:
namespace: "default"
replicaCount: 1
nameOverride: ""
fullnameOverride: ""
imagePullSecrets: []
environment: {}
# list of key: value
# GLOBAL1: value
## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
# -- Object that configures Deployment instance
deployment:
# -- Override default container image format
image:
name: ""
tag: ""
pullPolicy: IfNotPresent
strategy: {}
# Annotations for the Deployment
annotations: {}
podAnnotations: {}
terminationGracePeriodSeconds: 60
healthProbes:
enabled: true
# livenessProbe: {}
livenessProbe:
# httpGet:
# path: "/"
# port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
# readinessProbe: {}
readinessProbe:
# httpGet:
# path: "/"
# port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
# command: ["/bin/sh","-c"]
# args: ["echo 'consuming a message'; sleep 5"]
environment: {}
# VAR1: value1
resources: {}
# resources:
# requests:
# memory: 100Mi
# cpu: 100m
# limits:
# memory: 100Mi
# cpu: 100m
# Additional volumes on the output Deployment definition.
volumes:
enabled: true
pvc:
enabled: false
existing_claim: false
name: pvc
mountPath: /pv
size: 1G
class: "default"
accessModes:
- ReadWriteOnce
# configFileCommonHeader: |
# line1
# line2
configMaps:
# - name: test
# mountPath: /test
# data:
# test.conf: |
# hello
# hello2
# - name: test-from-file
# mountPath: /test2
# files:
# - source: config.conf
# destination: application.conf
# - name: test-mixed
# mountPath: /test3
# data:
# test2.conf: |
# another hello
# files:
# - source: config.conf
# destination: application2.conf
# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
# mountPath: "/etc/foo"
# readOnly: true
nodeSelector: {}
tolerations: []
affinity:
enabled: true
preferred:
enabled: true
required:
enabled: false
topologyKey: "topology.kubernetes.io/zone"
topologySpreadConstraints:
enabled: true
# whenUnsatisfiable: "DoNotSchedule" OR "ScheduleAnyway"
whenUnsatisfiable: "ScheduleAnyway"
topologyKey: "topology.kubernetes.io/zone"
hpa:
enabled: true
minReplicas: 1
maxReplicas: 1
targetCPU: 80
targetMemory: 80
service:
type: ClusterIP
annotations: {}
specs:
- port: 80
name: http
serviceAccount:
create: false
automount: true
annotations: {}
name: ""


@ -6,10 +6,11 @@ engine: gotpl
maintainers:
- name: Abhishek Dubey
- name: Sandeep Rawat
- name: Shubham Gupta
name: mongodb-operator
sources:
- https://github.com/OT-CONTAINER-KIT/mongodb-operator
version: 0.3.0
version: 0.3.1
home: https://github.com/OT-CONTAINER-KIT/mongodb-operator
icon: https://raw.githubusercontent.com/OT-CONTAINER-KIT/mongodb-operator/main/static/mongodb-operator-logo.svg
keywords:


@ -12,7 +12,7 @@ metadata:
app.kubernetes.io/version: {{ .Chart.AppVersion }}
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccountName }}
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole


@ -29,8 +29,6 @@ readinessProbe:
initialDelaySeconds: 5
periodSeconds: 10
serviceAccountName: mongodb-operator
priorityClassName: ""
nodeSelector: {}
tolerateAllTaints: false


@ -7,7 +7,7 @@ maintainers:
name: mongodb
sources:
- https://github.com/ot-container-kit/mongodb-operator
version: 0.3.0
version: 0.3.1
appVersion: "0.3.0"
home: https://github.com/ot-container-kit/mongodb-operator
keywords:


@ -40,7 +40,9 @@ spec:
storage:
accessModes: {{ .Values.storage.accessModes }}
storageSize: {{ .Values.storage.storageSize }}
{{- if .Values.storage.storageClass }}
storageClass: {{ .Values.storage.storageClass }}
{{- end }}
{{- end }}
mongoDBSecurity:
mongoDBAdminUser: admin


@ -40,10 +40,10 @@ storage:
enabled: true
accessModes: ["ReadWriteOnce"]
storageSize: 1Gi
storageClass: csi-cephfs-sc
storageClass:
mongoDBMonitoring:
enabled: true
enabled: false
image:
name: bitnami/mongodb-exporter
tag: 0.11.2-debian-10-r382


@ -0,0 +1,14 @@
apiVersion: v2
name: otel-operator
description: A Helm chart for Opentelemetry Operator
type: application
version: 1.0.0
appVersion: 1.0.0
dependencies:
- name: opentelemetry-operator
version: 0.64.2
repository: https://open-telemetry.github.io/opentelemetry-helm-charts/
alias: operator
tags:
- operator
condition: operator.enabled


@ -0,0 +1,12 @@
operator:
enabled: true
fullnameOverride: otel
manager:
collectorImage:
repository: otel/opentelemetry-collector-contrib
tag: latest
admissionWebhooks:
certManager:
enabled: false
autoGenerateCert:
enabled: true
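Because the operator is pulled in as a dependency under the `operator` alias and condition above, installation follows the usual umbrella-chart flow; a sketch, assuming the chart directory is `charts/otel-operator` and an `observability` namespace:

```console
$ helm dependency build charts/otel-operator
$ helm install otel-operator charts/otel-operator -n observability --create-namespace
```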

charts/pga/Chart.yaml

@ -0,0 +1,65 @@
apiVersion: v2
name: pga
description: A Helm chart for prometheus, grafana and alertmanager
type: application
version: 1.0.3
appVersion: 1.0.1
maintainers:
- name: ashwani-opstree
dependencies:
- name: kube-prometheus-stack
version: 61.3.1
repository: https://prometheus-community.github.io/helm-charts/
alias: app
tags:
- monitoring
condition: app.enabled
- name: kube-prometheus-stack
version: 61.3.1
repository: https://prometheus-community.github.io/helm-charts/
alias: kube
tags:
- monitoring
condition: kube.enabled
- name: prometheus-adapter
version: 4.10.0
repository: https://prometheus-community.github.io/helm-charts/
tags:
- monitoring
alias: adapter
condition: adapter.enabled
- name: prometheus-pushgateway
version: 2.14.0
repository: https://prometheus-community.github.io/helm-charts/
tags:
- monitoring
alias: pushgateway
condition: pushgateway.enabled
- name: prometheus-blackbox-exporter
version: 8.17.0
repository: https://prometheus-community.github.io/helm-charts/
tags:
- blackbox
alias: blackbox
condition: blackbox.enabled
- name: thanos
version: 15.7.12
repository: https://charts.bitnami.com/bitnami
tags:
- thanos
alias: thanos
condition: thanos.enabled
- name: kubernetes-event-exporter
version: 3.2.10
repository: https://charts.bitnami.com/bitnami
alias: k8s-events
tags:
- monitoring
condition: k8s-events.enabled

charts/pga/README.md

@ -0,0 +1,36 @@
# Prometheus Monitoring Setup with Helm
This document provides detailed instructions for setting up Prometheus monitoring in a Kubernetes cluster using Helm charts. Follow these commands to deploy Prometheus and its associated components.
## 1. Apply Custom Resource Definitions (CRDs)
Run the following commands to apply each CRD:
```bash
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-alertmanagers.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-alertmanagerconfigs.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-podmonitors.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-probes.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-prometheusagents.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-prometheuses.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-prometheusrules.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-scrapeconfigs.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-servicemonitors.yaml
kubectl apply --server-side=true -f https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-61.5.0/charts/kube-prometheus-stack/charts/crds/crds/crd-thanosrulers.yaml
```
## 2. Update Helm Chart Dependencies
```bash
helm dep update
```
Updates Helm chart dependencies.
## 3. Create a Namespace for Monitoring
```bash
kubectl create ns monitoring
```
Creates a Kubernetes namespace named monitoring.
## 4. Render chart templates locally and apply
```bash
helm template --name-template=monitoring . -n monitoring -f values.yaml | kubectl apply -f -
```
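Alternatively, if you want Helm to track the release rather than applying rendered manifests, an upgrade-or-install flow should work with the same names; a sketch, not part of the documented procedure:

```bash
helm upgrade --install monitoring . -n monitoring -f values.yaml
```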


@ -0,0 +1,52 @@
global:
resolve_timeout: 5m
route:
group_wait: 30s
group_interval: 5m
repeat_interval: 30m
receiver: "null"
group_by:
- job
- alertname
- severity
routes:
- receiver: "null"
match:
alertname: Watchdog
- receiver: "alerts-infra"
group_wait: 10s
continue: true
match_re:
severity: warning|high
channel: slack
team: devops
receivers:
- name: "null"
# - name: "email"
# email_configs:
# - to: ''
# from: ''
# smarthost: ''
# auth_username: ''
# auth_password: ''
# require_tls: yes
# send_resolved: true
- name: "alerts-infra"
slack_configs:
- api_url: 'https://hooks.slack.com/services/'
send_resolved: true
channel: '#alerts-infra'
icon_url: https://avatars3.githubusercontent.com/u/3380462
title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }}'
text: >-
{{ range .Alerts }}
*Alert:* {{ .Annotations.description }} - `{{ .Labels.severity }}`
*Description:* {{ .Annotations.description }}
*Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}


@ -0,0 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: alertmanager-
namespace: monitoring
commonLabels:
app: kube-alertmanager
release: kube
prometheus: kube
generatorOptions:
disableNameSuffixHash: true
secretGenerator:
- name: kube-alertmanager
files:
- config/alertmanager.yaml
type: Opaque
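With the `alertmanager-` name prefix, this generator emits an `alertmanager-kube-alertmanager` Secret in the `monitoring` namespace from the config file above; a sketch of applying it, assuming you run from the directory containing this kustomization:

```bash
kubectl apply -k .
```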


@ -0,0 +1,60 @@
app:
enabled: false
kube:
enabled: true
grafana:
enabled: true
testFramework:
enabled: false
sidecar:
datasources:
defaultDatasourceEnabled: false
resources:
requests:
cpu: 1
memory: 2Gi
limits:
cpu: 1
memory: 2Gi
persistence:
enabled: true
type: sts
storageClassName: buildpiper-storage
accessModes:
- ReadWriteOnce
size: 1Gi
finalizers:
- kubernetes.io/pvc-protection
alertmanager:
enabled: false
prometheus:
enabled: true
prometheusSpec:
retention: 3d
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 2
memory: 2Gi
storageSpec:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
resources:
requests:
storage: 20Gi
pushgateway:
enabled: false
blackbox:
enabled: false
adapter:
enabled: false
thanos:
enabled: false


@ -0,0 +1,48 @@
app:
enabled: false
kube:
enabled: true
grafana:
enabled: true
testFramework:
enabled: false
sidecar:
datasources:
defaultDatasourceEnabled: false
alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
prometheus:
enabled: true
prometheusSpec:
retention: 7d
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 2
memory: 2Gi
storageSpec:
volumeClaimTemplate:
spec:
storageClassName: buildpiper-storage
resources:
requests:
storage: 15Gi
pushgateway:
enabled: false
blackbox:
enabled: false
adapter:
enabled: false
thanos:
enabled: false


@ -0,0 +1,19 @@
app:
enabled: false
kube:
enabled: true
grafana:
enabled: true
sidecar:
datasources:
defaultDatasourceEnabled: false
pushgateway:
enabled: false
blackbox:
enabled: false
adapter:
enabled: true


@ -0,0 +1,257 @@
app:
enabled: false
kube:
enabled: true
fullnameOverride: kube
commonLabels:
prometheus: kube
defaultRules:
create: false
alertmanager:
enabled: true
alertmanagerSpec:
retention: 240h
resources:
requests:
cpu: 250m
memory: 500Mi
limits:
cpu: 250m
memory: 500Mi
storage:
volumeClaimTemplate:
spec:
# storageClassName: encrypted-gp3
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
grafana:
enabled: true
sidecar:
datasources:
defaultDatasourceEnabled: false
kubeApiServer:
enabled: true
kubelet:
enabled: true
namespace: kube-system
kubeControllerManager:
enabled: false
coreDns:
enabled: true
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
kubeProxy:
enabled: false
kubeStateMetrics:
enabled: true
kube-state-metrics:
customLabels:
prometheus: kube
enabled: true
podSecurityPolicy:
enabled: false
resources:
requests:
cpu: 250m
memory: 500Mi
limits:
cpu: 250m
memory: 500Mi
nodeExporter:
enabled: true
prometheus-node-exporter:
prometheus:
monitor:
additionalLabels:
prometheus: kube
# rbac:
# pspEnabled: false
# image:
# repository:
# tag: latest
# pullPolicy: Always
prometheusOperator:
enabled: true
admissionWebhooks:
enabled: false
deployment:
enabled: true
tls:
enabled: false
prometheus:
enabled: true
thanosService:
enabled: true
thanosServiceMonitor:
enabled: true
prometheusSpec:
externalLabels:
kubernetes_cluster: opstree
prometheus_cluster: kube
# get more details https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ThanosSpec
thanos:
version: 0.35.1
# image: quay.io/thanos/thanos:v0.35.1
blockSize: 5m
objectStorageConfig:
existingSecret:
key: objstore.yml
name: monitoring-thanos-objstore-secret
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
# remoteWrite:
# - url: https://app.last9.io/jupiter/prometheus/write
# basicAuth:
# username:
# name: promsecret
# key: username
# password:
# name: promsecret
# key: password
## # Do not add the writeRelabelConfigs section if you want to
## # send all metrics via remote write
## writeRelabelConfigs:
# - sourceLabels: [ __name__ ]
# regex: 'istio*'
# action: keep
# image:
# tag: v2.41.0
retention: 1h
replicas: 2
# externalUrl: "http://kube-opstree.prod.internal/"
resources:
requests:
cpu: "500m"
memory: 500Mi
limits:
cpu: "500m"
memory: 500Mi
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: encrypted-gp3
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
serviceMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- kube
podMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- kube
ruleSelector:
matchLabels:
prometheus: kube
service:
name: kube-prometheus
pushgateway:
enabled: false
serviceMonitor:
enabled: true
namespace: monitoring
additionalLabels:
prometheus: app
extraArgs:
- --log.level=debug
- --push.disable-consistency-check
resources:
limits:
cpu: 1
memory: 4096Mi
requests:
cpu: 500m
memory: 4096Mi
blackbox:
enabled: false
serviceMonitor:
enabled: true
defaults:
additionalMetricsRelabels: {}
labels:
prometheus: app
interval: 30s
scrapeTimeout: 30s
module: http_2xx
config:
modules:
http_2xx:
prober: http
timeout: 5s
http:
valid_http_versions: [ "HTTP/1.0", "HTTP/1.1", "HTTP/2.0" ]
no_follow_redirects: false
preferred_ip_protocol: "ip4"
fail_if_ssl: false
fail_if_not_ssl: false
adapter:
enabled: false
thanos:
enabled: true
objstoreConfig: |-
type: s3
config:
bucket: thanos
endpoint: monitoring-minio.monitoring.svc.cluster.local:9000
access_key: minio
secret_key: minio123
insecure: true
query:
dnsDiscovery:
sidecarsService: kube-thanos-discovery
sidecarsNamespace: monitoring
bucketweb:
enabled: true
compactor:
enabled: false
storegateway:
enabled: true
ruler:
enabled: true
serviceMonitor:
namespace: monitoring
alertmanagers:
- http://kube-alertmanager.monitoring.svc.cluster.local:9093
config: |-
groups:
- name: "metamonitoring"
rules:
- alert: "PrometheusDown"
expr: absent(up{prometheus="monitoring/kube-prometheus"})
metrics:
enabled: true
serviceMonitor:
namespace: monitoring
enabled: true
minio:
enabled: true
auth:
rootPassword: minio123
rootUser: minio
monitoringBuckets: thanos
accessKey:
password: minio
secretKey:
password: minio123


@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
nameSuffix: -grafana-dashboard
resources:
- opentelemetry-apm

File diff suppressed because it is too large


@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
labels:
grafana_dashboard: "1"
disableNameSuffixHash: true
annotations:
k8s-sidecar-target-directory: "/tmp/dashboards/otel-apm"
configMapGenerator:
- name: apm
files:
- apm.json


@ -0,0 +1,21 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-alertmanager-datasource
namespace: monitoring
labels:
grafana_datasource: "1"
app: kube-grafana
prometheus: kube
data:
kube-alertmanager.yaml: |-
apiVersion: 1
datasources:
- name: "kube-alertmanager"
type: alertmanager
uid: alertmanager
url: http://kube-alertmanager.monitoring:9093/
access: proxy
jsonData:
handleGrafanaManagedAlerts: false
implementation: prometheus


@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- loki.yaml
- prometheus.yaml
- tempo.yaml
- alertmanager.yaml


@ -0,0 +1,33 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-loki-datasource
namespace: monitoring
labels:
grafana_datasource: "1"
app: kube-grafana
prometheus: kube
data:
kube-loki.yaml: |-
apiVersion: 1
datasources:
- uid: logging
orgId: 1
name: logging
type: loki
typeName: Loki
access: proxy
url: http://loki-logging-gateway.logging.svc
password: ''
user: ''
database: ''
basicAuth: false
isDefault: false
jsonData:
derivedFields:
- datasourceUid: tempo
matcherRegex: (?:trace_id)=(\w+)
name: TraceID
url: $${__value.raw}
readOnly: false
editable: true


@ -0,0 +1,27 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-prometheus-datasource
namespace: monitoring
labels:
grafana_datasource: "1"
app: kube-grafana
prometheus: kube
data:
kube-prometheus.yaml: |-
apiVersion: 1
datasources:
- name: "kube-prom"
type: prometheus
uid: prometheus
url: http://kube-prometheus.monitoring:9090/
access: proxy
isDefault: true
jsonData:
httpMethod: POST
timeInterval: 30s
exemplarTraceIdDestinations:
- datasourceUid: tempo
name: TraceID
readOnly: false
editable: true


@ -0,0 +1,34 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: tempo-datasource
namespace: monitoring
labels:
grafana_datasource: "1"
app: kube-grafana
prometheus: kube
data:
tempo.yaml: |-
apiVersion: 1
datasources:
- name: "tempo"
type: tempo
uid: tempo
url: http://tempo.observability.svc.cluster.local:3100/
access: proxy
jsonData:
handleGrafanaManagedAlerts: false
implementation: prometheus
nodeGraph:
enabled: true
search:
hide: false
lokiSearch:
datasourceUid: loki
tracesToLogs:
datasourceUid: loki
filterBySpanID: false
filterByTraceID: true
mapTagNamesEnabled: false
tags:
- app


@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- thanos.yaml


@ -0,0 +1,27 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-thanos-datasource
namespace: monitoring
labels:
grafana_datasource: "1"
app: kube-grafana
prometheus: kube
data:
kube-thanos.yaml: |-
apiVersion: 1
datasources:
- name: "kube-thanos"
type: prometheus
uid: thanos
url: http://monitoring-thanos-query-frontend.monitoring:9090/
access: proxy
jsonData:
httpMethod: POST
timeInterval: 30s
exemplarTraceIdDestinations:
- datasourceUid: tempo
name: TraceID
httpMethod: POST
readOnly: false
editable: true

charts/pga/values.yaml

@ -0,0 +1,359 @@
app:
enabled: false
fullnameOverride: app
commonLabels:
prometheus: app
defaultRules:
create: false
alertmanager:
enabled: false
grafana:
enabled: false
kubeApiServer:
enabled: false
kubelet:
enabled: false
kubeControllerManager:
enabled: false
coreDns:
enabled: false
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
kubeProxy:
enabled: false
kubeStateMetrics:
enabled: false
kube-state-metrics:
enabled: false
nodeExporter:
enabled: false
prometheusOperator:
enabled: false
admissionWebhooks:
enabled: false
configReloaderCpu: 300m
configReloaderMemory: 300Mi
prometheus:
enabled: true
prometheusSpec:
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
retention: 30d
replicas: 1
# externalUrl: ""
resources:
requests:
cpu: "1"
memory: 1Gi
limits:
cpu: "1"
memory: 1Gi
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: encrypted-gp3
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
alertingEndpoints:
- name: kube-alertmanager
namespace: monitoring
port: web
pathPrefix: /
apiVersion: v2
serviceMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- app
podMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- app
ruleSelector:
matchLabels:
prometheus: app
additionalScrapeConfigs:
- job_name: kubernetes-services-probe
metrics_path: /probe
params:
module:
- http_2xx
kubernetes_sd_configs:
- role: service
scrape_interval: 30s
scrape_timeout: 25s
relabel_configs:
- source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
regex: true
action: keep
- source_labels:
- __meta_kubernetes_service_name
target_label: service
- source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_path
regex: (.+);(.+)
target_label: __param_target
replacement: ${1}${2}
- source_labels:
- __param_target
target_label: instance
- source_labels: []
target_label: __address__
replacement: monitoring-prometheus-blackbox-exporter:9115
service:
name: app-prometheus
kube:
enabled: true
fullnameOverride: kube
commonLabels:
prometheus: kube
defaultRules:
create: false
alertmanager:
enabled: true
alertmanagerSpec:
retention: 240h
resources:
requests:
cpu: 250m
memory: 500Mi
limits:
cpu: 250m
memory: 500Mi
storage:
volumeClaimTemplate:
spec:
# storageClassName: encrypted-gp3
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
grafana:
enabled: true
testFramework:
enabled: false
sidecar:
datasources:
defaultDatasourceEnabled: false
kubeApiServer:
enabled: true
kubelet:
enabled: true
namespace: kube-system
kubeControllerManager:
enabled: false
coreDns:
enabled: true
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
kubeProxy:
enabled: false
kubeStateMetrics:
enabled: true
kube-state-metrics:
customLabels:
prometheus: kube
enabled: true
podSecurityPolicy:
enabled: false
resources:
requests:
cpu: 250m
memory: 500Mi
limits:
cpu: 250m
memory: 500Mi
nodeExporter:
enabled: true
prometheus-node-exporter:
prometheus:
monitor:
additionalLabels:
prometheus: kube
# rbac:
# pspEnabled: false
# image:
# repository:
# tag: latest
# pullPolicy: Always
prometheusOperator:
enabled: true
admissionWebhooks:
enabled: false
deployment:
enabled: true
tls:
enabled: false
prometheus:
enabled: true
prometheusSpec:
# nodeSelector:
# appType: monitoring
# tolerations:
# - key: "appType"
# operator: "Equal"
# value: "monitoring"
# effect: "NoSchedule"
# remoteWrite:
# - url: https://app.last9.io/jupiter/prometheus/write
# basicAuth:
# username:
# name: promsecret
# key: username
# password:
# name: promsecret
# key: password
## # Do not add the writeRelabelConfigs section if you want to
## # send all metrics via remote write
## writeRelabelConfigs:
# - sourceLabels: [ __name__ ]
# regex: 'istio*'
# action: keep
# image:
# tag: v2.41.0
retention: 30d
replicas: 1
# externalUrl: "http://kube-opstree.prod.internal/"
resources:
requests:
cpu: "500m"
memory: 500Mi
limits:
cpu: "500m"
memory: 500Mi
storageSpec:
volumeClaimTemplate:
spec:
# storageClassName: encrypted-gp3
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
serviceMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- kube
podMonitorSelector:
matchExpressions:
- key: prometheus
operator: In
values:
- kube
ruleSelector:
matchLabels:
prometheus: kube
service:
name: kube-prometheus
pushgateway:
enabled: false
serviceMonitor:
enabled: true
namespace: monitoring
additionalLabels:
prometheus: app
extraArgs:
- --log.level=debug
- --push.disable-consistency-check
resources:
limits:
cpu: 1
memory: 4096Mi
requests:
cpu: 500m
memory: 4096Mi
blackbox:
enabled: false
serviceMonitor:
enabled: true
defaults:
additionalMetricsRelabels: {}
labels:
prometheus: app
interval: 30s
scrapeTimeout: 30s
module: http_2xx
config:
modules:
http_2xx:
prober: http
timeout: 5s
http:
valid_http_versions:
- "HTTP/1.0"
- "HTTP/1.1"
- "HTTP/2.0"
no_follow_redirects: false
preferred_ip_protocol: "ip4"
fail_if_ssl: false
fail_if_not_ssl: false
adapter:
enabled: false
k8s-events:
enabled: true
serviceAccount:
create: false
metrics:
enabled: true
serviceMonitor:
enabled: true
labels:
prometheus: kube
release: monitoring
config:
logLevel: debug
logFormat: json
receivers:
- name: "loki"
loki:
url: http://logging-loki-gateway.logging.svc.cluster.local/loki/api/v1/push
layout:
message: "{{ .msg }}"
reason: "{{ .Reason }}"
type: "{{ .Type }}"
count: "{{ .Count }}"
kind: "{{ .InvolvedObject.Kind }}"
name: "{{ .InvolvedObject.Name }}"
namespace: "{{ .Namespace }}"
component: "{{ .Source.Component }}"
host: "{{ .Source.Host }}"
route:
routes:
- match:
- receiver: "loki"
rbac:
rules:
- apiGroups: [metrics.k8s.io]
resources: [pods, nodes]
verbs: [get, list, watch]
- apiGroups: ["*"]
resources: ["*"]
verbs: ["get", "watch", "list"]
thanos:
enabled: false


@ -0,0 +1,21 @@
apiVersion: v2
name: psmdb-operator-db
description: A Helm chart for Percona Operator and Percona Server for MongoDB
type: application
version: 1.0.0
appVersion: 1.0.0
dependencies:
- name: psmdb-operator
version: 1.18.0
repository: https://percona.github.io/percona-helm-charts/
alias: psmdb-operator
tags:
- psmdb-operator
condition: psmdb-operator.enabled
- name: psmdb-db
version: 1.18.0
repository: https://percona.github.io/percona-helm-charts/
alias: psmdb-db
tags:
- psmdb-db
condition: psmdb-db.enabled


@ -0,0 +1,2 @@
Backup and restore have been tested with the backup.yaml and restore.yaml files respectively, using Azure Blob Storage.
To use cloud storage for backups, a Kubernetes secret needs to be created: https://docs.percona.com/percona-operator-for-mongodb/backup-tutorial.html#configure-backup-storage
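As a convenience, that secret can be created directly from the storage account credentials; a sketch, assuming the `AZURE_STORAGE_ACCOUNT_NAME`/`AZURE_STORAGE_ACCOUNT_KEY` key names from the Percona documentation, and a hypothetical secret name and namespace:

```bash
kubectl create secret generic azure-backup-secret -n psmdb \
  --from-literal=AZURE_STORAGE_ACCOUNT_NAME='<storage-account>' \
  --from-literal=AZURE_STORAGE_ACCOUNT_KEY='<account-key>'
```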


@ -0,0 +1,266 @@
# Percona Server for MongoDB
This chart deploys Percona Operator and Percona Server for MongoDB Cluster on Kubernetes controlled by Percona Operator for MongoDB.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-server-mongodb-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html)
## Pre-requisites
* Kubernetes 1.26+
* Helm v3
# Chart Details
This chart will deploy the Operator Pod and Percona Server for MongoDB Cluster in Kubernetes. It will create a Custom Resource, and the Operator will trigger the creation of corresponding Kubernetes primitives: StatefulSets, Pods, Secrets, etc.
## Installing the Chart
To install the chart with a release name of your choice (`my-db` below) in a dedicated namespace (recommended):
```sh
helm dependency build
helm install my-db <path-to-chart> --namespace my-namespace
```
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | ------------------------------------------------------------------------------|---------------------------------------|
| `crVersion` | CR Cluster Manifest version | `1.16.2` |
| `pause` | Stop PSMDB Database safely | `false` |
| `unmanaged` | Start cluster and don't manage it (cross cluster replication) | `false` |
| `unsafeFlags.tls`               | Allows users to configure a cluster without TLS/SSL certificates | `false` |
| `unsafeFlags.replsetSize`       | Allows users to configure a cluster with unsafe parameters: starting it with fewer than 3 replica set instances or with an even number of replica set instances without an additional arbiter | `false` |
| `unsafeFlags.mongosSize`        | Allows users to configure a sharded cluster with fewer than 3 config server Pods or fewer than 2 mongos Pods | `false` |
| `unsafeFlags.terminationGracePeriod` | Allows users to configure a sharded cluster without a termination grace period for the replica set | `false` |
| `unsafeFlags.backupIfUnhealthy` | Allows running backup on a cluster with failed health checks | `false` |
| `clusterServiceDNSSuffix` | The (non-standard) cluster domain to be used as a suffix of the Service name | `""` |
| `clusterServiceDNSMode` | Mode for the cluster service dns (Internal/ServiceMesh) | `""` |
| `annotations` | PSMDB custom resource annotations | `{}` |
| `ignoreAnnotations` | The list of annotations to be ignored by the Operator | `[]` |
| `ignoreLabels` | The list of labels to be ignored by the Operator | `[]` |
| `multiCluster.enabled` | Enable Multi Cluster Services (MCS) cluster mode | `false` |
| `multiCluster.DNSSuffix` | The cluster domain to be used as a suffix for multi-cluster Services used by Kubernetes | `""` |
| `updateStrategy`                | Regulates how PSMDB Cluster Pods are updated after a new image is set | `SmartUpdate` |
| `upgradeOptions.versionServiceEndpoint` | Endpoint for actual PSMDB Versions provider | `https://check.percona.com/versions/` |
| `upgradeOptions.apply` | PSMDB image to apply from version service - recommended, latest, actual version like 4.4.2-4 | `disabled` |
| `upgradeOptions.schedule` | Cron formatted time to execute the update | `"0 2 * * *"` |
| `upgradeOptions.setFCV` | Set feature compatibility version on major upgrade | `false` |
| `finalizers:delete-psmdb-pvc` | Set this if you want to delete database persistent volumes on cluster deletion | `[]` |
| `finalizers:delete-psmdb-pods-in-order` | Set this if you want to delete PSMDB pods in order (primary last) | `[]` |
| `image.repository` | PSMDB Container image repository | `percona/percona-server-mongodb` |
| `image.tag` | PSMDB Container image tag | `6.0.9-7` |
| `imagePullPolicy` | The policy used to update images | `Always` |
| `imagePullSecrets` | PSMDB Container pull secret | `[]` |
| `initImage.repository` | Repository for custom init image | `""` |
| `initImage.tag` | Tag for custom init image | `""` |
| `initContainerSecurityContext` | A custom Kubernetes Security Context for a Container for the initImage | `{}` |
| `tls.mode` | Control usage of TLS (allowTLS, preferTLS, requireTLS, disabled) | `preferTLS` |
| `tls.certValidityDuration` | The validity duration of the external certificate for cert manager | `""` |
| `tls.allowInvalidCertificates` | If enabled the mongo shell will not attempt to validate the server certificates | `true` |
| `tls.issuerConf.name` | A cert-manager issuer name | `""` |
| `tls.issuerConf.kind` | A cert-manager issuer kind | `""` |
| `tls.issuerConf.group` | A cert-manager issuer group | `""` |
| `secrets.users` | The name of the Secrets object for the MongoDB users required to run the operator | `""` |
| `secrets.encryptionKey` | Set secret for data at rest encryption key | `""` |
| `secrets.vault` | Specifies a secret object to provide integration with HashiCorp Vault | `""` |
| `secrets.ldapSecret` | Specifies a secret object for LDAP over TLS connection between MongoDB and OpenLDAP server | `""` |
| `secrets.sse` | The name of the Secrets object for server side encryption credentials | `""` |
| `secrets.ssl` | A secret with TLS certificate generated for external communications | `""` |
| `secrets.sslInternal` | A secret with TLS certificate generated for internal communications | `""` |
| `pmm.enabled` | Enable integration with [Percona Monitoring and Management software](https://www.percona.com/blog/2020/07/23/using-percona-kubernetes-operators-with-percona-monitoring-and-management/) | `false` |
| `pmm.image.repository` | PMM Container image repository | `percona/pmm-client` |
| `pmm.image.tag` | PMM Container image tag | `2.41.2` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
||
| `replsets.rs0.name` | ReplicaSet name | `rs0` |
| `replsets.rs0.size` | ReplicaSet size (pod quantity) | `3` |
| `replsets.rs0.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean replica set Pods termination | `""` |
| `replsets.rs0.externalNodes` | ReplicaSet external nodes (cross cluster replication) | `[]` |
| `replsets.rs0.configuration` | Custom config for mongod in replica set | `""` |
| `replsets.rs0.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `replsets.rs0.serviceAccountName` | Run replicaset Containers under specified K8S SA | `""` |
| `replsets.rs0.affinity.antiAffinityTopologyKey` | ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `replsets.rs0.affinity.advanced` | ReplicaSet Pod advanced affinity | `{}` |
| `replsets.rs0.tolerations` | ReplicaSet Pod tolerations | `[]` |
| `replsets.rs0.priorityClass` | ReplicaSet Pod priorityClassName | `""` |
| `replsets.rs0.annotations` | ReplicaSet Pod annotations | `{}` |
| `replsets.rs0.labels` | ReplicaSet Pod labels | `{}` |
| `replsets.rs0.nodeSelector` | ReplicaSet Pod nodeSelector labels | `{}` |
| `replsets.rs0.livenessProbe` | ReplicaSet Pod livenessProbe structure | `{}` |
| `replsets.rs0.readinessProbe` | ReplicaSet Pod readinessProbe structure | `{}` |
| `replsets.rs0.storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `replsets.rs0.podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets.rs0.containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets.rs0.runtimeClass` | ReplicaSet Pod runtimeClassName | `""` |
| `replsets.rs0.sidecars` | ReplicaSet Pod sidecars | `{}` |
| `replsets.rs0.sidecarVolumes` | ReplicaSet Pod sidecar volumes | `[]` |
| `replsets.rs0.sidecarPVCs` | ReplicaSet Pod sidecar PVCs | `[]` |
| `replsets.rs0.podDisruptionBudget.maxUnavailable` | ReplicaSet failed Pods maximum quantity | `1` |
| `replsets.rs0.splitHorizons` | External URI for Split-horizon for replica set Pods of the exposed cluster | `{}` |
| `replsets.rs0.expose.enabled` | Allow access to replicaSet from outside of Kubernetes | `false` |
| `replsets.rs0.expose.exposeType` | Network service access point type | `ClusterIP` |
| `replsets.rs0.expose.loadBalancerSourceRanges` | Limit client IP's access to Load Balancer | `{}` |
| `replsets.rs0.expose.serviceAnnotations` | ReplicaSet service annotations | `{}` |
| `replsets.rs0.expose.serviceLabels` | ReplicaSet service labels | `{}` |
| `replsets.rs0.schedulerName` | ReplicaSet Pod schedulerName | `""` |
| `replsets.rs0.resources` | ReplicaSet Pods resource requests and limits | `{}` |
| `replsets.rs0.volumeSpec` | ReplicaSet Pods storage resources | `{}` |
| `replsets.rs0.volumeSpec.emptyDir` | ReplicaSet Pods emptyDir K8S storage | `{}` |
| `replsets.rs0.volumeSpec.hostPath` | ReplicaSet Pods hostPath K8S storage | |
| `replsets.rs0.volumeSpec.hostPath.path` | ReplicaSet Pods hostPath K8S storage path | `""` |
| `replsets.rs0.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `replsets.rs0.volumeSpec.pvc` | ReplicaSet Pods PVC request parameters | |
| `replsets.rs0.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.volumeSpec.pvc.storageClassName` | ReplicaSet Pods PVC target storageClass | `""` |
| `replsets.rs0.volumeSpec.pvc.accessModes` | ReplicaSet Pods PVC access policy | `[]` |
| `replsets.rs0.volumeSpec.pvc.resources.requests.storage` | ReplicaSet Pods PVC storage size | `3Gi` |
| `replsets.rs0.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| `replsets.rs0.nonvoting.enabled` | Add MongoDB nonvoting Pods | `false` |
| `replsets.rs0.nonvoting.podSecurityContext` | Set the security context for a Pod | `{}` |
| `replsets.rs0.nonvoting.containerSecurityContext` | Set the security context for a Container | `{}` |
| `replsets.rs0.nonvoting.size` | Number of nonvoting Pods | `1` |
| `replsets.rs0.nonvoting.configuration` | Custom config for mongod nonvoting member | `""` |
| `replsets.rs0.nonvoting.serviceAccountName` | Run replicaset nonvoting Container under specified K8S SA | `""` |
| `replsets.rs0.nonvoting.affinity.antiAffinityTopologyKey` | Nonvoting Pods affinity | `kubernetes.io/hostname` |
| `replsets.rs0.nonvoting.affinity.advanced` | Nonvoting Pods advanced affinity | `{}` |
| `replsets.rs0.nonvoting.tolerations` | Nonvoting Pod tolerations | `[]` |
| `replsets.rs0.nonvoting.priorityClass` | Nonvoting Pod priorityClassName | `""` |
| `replsets.rs0.nonvoting.annotations` | Nonvoting Pod annotations | `{}` |
| `replsets.rs0.nonvoting.labels` | Nonvoting Pod labels | `{}` |
| `replsets.rs0.nonvoting.nodeSelector` | Nonvoting Pod nodeSelector labels | `{}` |
| `replsets.rs0.nonvoting.podDisruptionBudget.maxUnavailable` | Nonvoting failed Pods maximum quantity | `1` |
| `replsets.rs0.nonvoting.resources` | Nonvoting Pods resource requests and limits | `{}` |
| `replsets.rs0.nonvoting.volumeSpec` | Nonvoting Pods storage resources | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.emptyDir` | Nonvoting Pods emptyDir K8S storage | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.hostPath` | Nonvoting Pods hostPath K8S storage | |
| `replsets.rs0.nonvoting.volumeSpec.hostPath.path` | Nonvoting Pods hostPath K8S storage path | `""` |
| `replsets.rs0.nonvoting.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `replsets.rs0.nonvoting.volumeSpec.pvc` | Nonvoting Pods PVC request parameters | |
| `replsets.rs0.nonvoting.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.storageClassName` | Nonvoting Pods PVC target storageClass | `""` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.accessModes` | Nonvoting Pods PVC access policy | `[]` |
| `replsets.rs0.nonvoting.volumeSpec.pvc.resources.requests.storage` | Nonvoting Pods PVC storage size | `3Gi` |
| `replsets.rs0.arbiter.enabled` | Create MongoDB arbiter service | `false` |
| `replsets.rs0.arbiter.size` | MongoDB arbiter Pod quantity | `1` |
| `replsets.rs0.arbiter.serviceAccountName` | Run replicaset arbiter Container under specified K8S SA | `""` |
| `replsets.rs0.arbiter.affinity.antiAffinityTopologyKey` | MongoDB arbiter Pod affinity | `kubernetes.io/hostname` |
| `replsets.rs0.arbiter.affinity.advanced` | MongoDB arbiter Pod advanced affinity | `{}` |
| `replsets.rs0.arbiter.tolerations` | MongoDB arbiter Pod tolerations | `[]` |
| `replsets.rs0.arbiter.priorityClass` | MongoDB arbiter priorityClassName | `""` |
| `replsets.rs0.arbiter.annotations` | MongoDB arbiter Pod annotations | `{}` |
| `replsets.rs0.arbiter.labels` | MongoDB arbiter Pod labels | `{}` |
| `replsets.rs0.arbiter.nodeSelector` | MongoDB arbiter Pod nodeSelector labels | `{}` |
| |
| `sharding.enabled` | Enable sharding setup | `true` |
| `sharding.balancer.enabled` | Enable/disable balancer | `true` |
| `sharding.configrs.size` | Config ReplicaSet size (pod quantity) | `3` |
| `sharding.configrs.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean replica set Pods termination | `""` |
| `sharding.configrs.externalNodes` | Config ReplicaSet external nodes (cross cluster replication) | `[]` |
| `sharding.configrs.configuration` | Custom config for mongod in config replica set | `""` |
| `sharding.configrs.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `sharding.configrs.serviceAccountName` | Run sharding configrs Containers under specified K8S SA | `""` |
| `sharding.configrs.affinity.antiAffinityTopologyKey` | Config ReplicaSet Pod affinity | `kubernetes.io/hostname` |
| `sharding.configrs.affinity.advanced` | Config ReplicaSet Pod advanced affinity | `{}` |
| `sharding.configrs.tolerations` | Config ReplicaSet Pod tolerations | `[]` |
| `sharding.configrs.priorityClass` | Config ReplicaSet Pod priorityClassName | `""` |
| `sharding.configrs.annotations` | Config ReplicaSet Pod annotations | `{}` |
| `sharding.configrs.labels` | Config ReplicaSet Pod labels | `{}` |
| `sharding.configrs.nodeSelector` | Config ReplicaSet Pod nodeSelector labels | `{}` |
| `sharding.configrs.livenessProbe` | Config ReplicaSet Pod livenessProbe structure | `{}` |
| `sharding.configrs.readinessProbe` | Config ReplicaSet Pod readinessProbe structure | `{}` |
| `sharding.configrs.storage` | Set cacheSizeRatio or other custom MongoDB storage options | `{}` |
| `sharding.configrs.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.configrs.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.configrs.runtimeClass` | Config ReplicaSet Pod runtimeClassName | `""` |
| `sharding.configrs.sidecars` | Config ReplicaSet Pod sidecars | `{}` |
| `sharding.configrs.sidecarVolumes` | Config ReplicaSet Pod sidecar volumes | `[]` |
| `sharding.configrs.sidecarPVCs` | Config ReplicaSet Pod sidecar PVCs | `[]` |
| `sharding.configrs.podDisruptionBudget.maxUnavailable` | Config ReplicaSet failed Pods maximum quantity | `1` |
| `sharding.configrs.expose.enabled` | Allow access to cfg replica from outside of Kubernetes | `false` |
| `sharding.configrs.expose.exposeType` | Network service access point type | `ClusterIP` |
| `sharding.configrs.expose.loadBalancerSourceRanges` | Limit client IP's access to Load Balancer | `{}` |
| `sharding.configrs.expose.serviceAnnotations` | Config ReplicaSet service annotations | `{}` |
| `sharding.configrs.expose.serviceLabels` | Config ReplicaSet service labels | `{}` |
| `sharding.configrs.resources.limits.cpu` | Config ReplicaSet resource limits CPU | `300m` |
| `sharding.configrs.resources.limits.memory` | Config ReplicaSet resource limits memory | `0.5G` |
| `sharding.configrs.resources.requests.cpu` | Config ReplicaSet resource requests CPU | `300m` |
| `sharding.configrs.resources.requests.memory` | Config ReplicaSet resource requests memory | `0.5G` |
| `sharding.configrs.volumeSpec.hostPath` | Config ReplicaSet hostPath K8S storage | |
| `sharding.configrs.volumeSpec.hostPath.path` | Config ReplicaSet hostPath K8S storage path | `""` |
| `sharding.configrs.volumeSpec.hostPath.type` | Type for hostPath volume | `Directory` |
| `sharding.configrs.volumeSpec.emptyDir` | Config ReplicaSet Pods emptyDir K8S storage | |
| `sharding.configrs.volumeSpec.pvc` | Config ReplicaSet Pods PVC request parameters | |
| `sharding.configrs.volumeSpec.pvc.annotations` | The Kubernetes annotations metadata for Persistent Volume Claim | `{}` |
| `sharding.configrs.volumeSpec.pvc.labels` | The Kubernetes labels metadata for Persistent Volume Claim | `{}` |
| `sharding.configrs.volumeSpec.pvc.storageClassName` | Config ReplicaSet Pods PVC storageClass | `""` |
| `sharding.configrs.volumeSpec.pvc.accessModes` | Config ReplicaSet Pods PVC access policy | `[]` |
| `sharding.configrs.volumeSpec.pvc.resources.requests.storage` | Config ReplicaSet Pods PVC storage size | `3Gi` |
| `sharding.configrs.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| `sharding.mongos.size` | Mongos size (pod quantity) | `3` |
| `sharding.mongos.terminationGracePeriodSeconds` | The amount of seconds Kubernetes will wait for a clean mongos Pods termination | `""` |
| `sharding.mongos.configuration` | Custom config for mongos | `""` |
| `sharding.mongos.topologySpreadConstraints` | Control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains | `{}` |
| `sharding.mongos.serviceAccountName` | Run sharding mongos Containers under specified K8S SA | `""` |
| `sharding.mongos.affinity.antiAffinityTopologyKey` | Mongos Pods affinity | `kubernetes.io/hostname` |
| `sharding.mongos.affinity.advanced` | Mongos Pods advanced affinity | `{}` |
| `sharding.mongos.tolerations` | Mongos Pods tolerations | `[]` |
| `sharding.mongos.priorityClass` | Mongos Pods priorityClassName | `""` |
| `sharding.mongos.annotations` | Mongos Pods annotations | `{}` |
| `sharding.mongos.labels` | Mongos Pods labels | `{}` |
| `sharding.mongos.nodeSelector` | Mongos Pods nodeSelector labels | `{}` |
| `sharding.mongos.livenessProbe` | Mongos Pod livenessProbe structure | `{}` |
| `sharding.mongos.readinessProbe` | Mongos Pod readinessProbe structure | `{}` |
| `sharding.mongos.podSecurityContext` | Set the security context for a Pod | `{}` |
| `sharding.mongos.containerSecurityContext` | Set the security context for a Container | `{}` |
| `sharding.mongos.runtimeClass` | Mongos Pod runtimeClassName | `""` |
| `sharding.mongos.sidecars` | Mongos Pod sidecars | `{}` |
| `sharding.mongos.sidecarVolumes` | Mongos Pod sidecar volumes | `[]` |
| `sharding.mongos.sidecarPVCs` | Mongos Pod sidecar PVCs | `[]` |
| `sharding.mongos.podDisruptionBudget.maxUnavailable` | Mongos failed Pods maximum quantity | `1` |
| `sharding.mongos.resources.limits.cpu` | Mongos Pods resource limits CPU | `300m` |
| `sharding.mongos.resources.limits.memory` | Mongos Pods resource limits memory | `0.5G` |
| `sharding.mongos.resources.requests.cpu` | Mongos Pods resource requests CPU | `300m` |
| `sharding.mongos.resources.requests.memory` | Mongos Pods resource requests memory | `0.5G` |
| `sharding.mongos.expose.exposeType` | Mongos service exposeType | `ClusterIP` |
| `sharding.mongos.expose.servicePerPod` | Create a separate ClusterIP Service for each mongos instance | `false` |
| `sharding.mongos.expose.loadBalancerSourceRanges` | Limit client IPs' access to the Load Balancer | `{}` |
| `sharding.mongos.expose.serviceAnnotations` | Mongos service annotations | `{}` |
| `sharding.mongos.expose.serviceLabels` | Mongos service labels | `{}` |
| `sharding.mongos.expose.nodePort` | Custom port if exposing mongos via NodePort | `""` |
| `sharding.mongos.hostAliases` | The IP address for Kubernetes host aliases | `[]` |
| `backup.enabled` | Enable backup PBM agent | `true` |
| `backup.annotations` | Backup job annotations | `{}` |
| `backup.podSecurityContext` | Set the security context for a Pod | `{}` |
| `backup.containerSecurityContext` | Set the security context for a Container | `{}` |
| `backup.restartOnFailure` | Backup Pods restart policy | `true` |
| `backup.image.repository` | PBM Container image repository | `percona/percona-backup-mongodb` |
| `backup.image.tag` | PBM Container image tag | `2.3.0` |
| `backup.storages` | Local/remote backup storages settings | `{}` |
| `backup.pitr.enabled` | Enable point in time recovery for backup | `false` |
| `backup.pitr.oplogOnly` | Start collecting oplogs even if a full logical backup doesn't exist | `false` |
| `backup.pitr.oplogSpanMin` | Number of minutes between the uploads of oplogs | `10` |
| `backup.pitr.compressionType` | The point-in-time-recovery chunks compression format | `""` |
| `backup.pitr.compressionLevel` | The point-in-time-recovery chunks compression level | `""` |
| `backup.configuration.backupOptions` | Custom configuration settings for backup | `{}` |
| `backup.configuration.restoreOptions` | Custom configuration settings for restore | `{}` |
| `backup.tasks` | Backup working schedule | `{}` |
| `users` | PSMDB essential users | `{}` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.

Note that you can use multiple replica sets only with sharding enabled.

## Examples

### Deploy a replica set with disabled backups and no mongos pods

This is great for a dev PSMDB/MongoDB cluster, as it skips the backup and sharding setup.
```bash
$ helm install dev --namespace psmdb . \
--set runUid=1001 --set "replsets.rs0.volumeSpec.pvc.resources.requests.storage=20Gi" \
--set backup.enabled=false --set sharding.enabled=false
```
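
### Deploy a sharded cluster with two replica sets

Since multiple replica sets require sharding, keep `sharding.enabled` at its default of `true` and add a second entry under the `replsets` map. This is a sketch rather than a tested recipe: the release name `prod` and the `rs1` entry are illustrative, and the exact `replsets` layout should be checked against the chart's values.

```bash
$ helm install prod --namespace psmdb . \
    --set "replsets.rs0.volumeSpec.pvc.resources.requests.storage=20Gi" \
    --set "replsets.rs1.name=rs1" --set "replsets.rs1.size=3" \
    --set "replsets.rs1.volumeSpec.pvc.resources.requests.storage=20Gi"
```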


@@ -0,0 +1,18 @@
{{- if .Values.backup.enabled }}
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: {{ .Values.backup.name }}
  {{- if .Values.backup.annotations }}
  annotations:
{{ .Values.backup.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.backup.labels }}
  labels:
{{ .Values.backup.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  clusterName: {{ .Values.backup.clusterName }}
  storageName: {{ .Values.backup.storageName }}
  type: {{ .Values.backup.type }}
{{- end }}
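
For reference, this template is driven entirely by the `backup.*` values. A minimal sketch that renders a single on-demand logical backup, mirroring the `backup` block in the values file further below (the release name `mdb-db` is a placeholder):

```bash
$ cat > backup-values.yaml <<'EOF'
backup:
  enabled: true
  name: backup
  clusterName: mdb-db-psmdb-db
  storageName: azure-blob
  type: logical
EOF
$ helm upgrade mdb-db . -f backup-values.yaml --namespace psmdb
```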


@@ -0,0 +1,17 @@
{{- if .Values.restore.enabled }}
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: {{ .Values.restore.name }}
  {{- if .Values.restore.annotations }}
  annotations:
{{ .Values.restore.annotations | toYaml | indent 4 }}
  {{- end }}
  {{- if .Values.restore.labels }}
  labels:
{{ .Values.restore.labels | toYaml | indent 4 }}
  {{- end }}
spec:
  clusterName: {{ .Values.restore.clusterName }}
  backupName: {{ .Values.restore.backupName }}
{{- end }}
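
The restore template works the same way, keyed off the `restore.*` values. A sketch that restores from the backup object defined above; all names are placeholders and `backupName` must match an existing `PerconaServerMongoDBBackup` resource:

```bash
$ cat > restore-values.yaml <<'EOF'
restore:
  enabled: true
  name: restore1
  clusterName: mdb-db-psmdb-db
  backupName: backup
EOF
$ helm upgrade mdb-db . -f restore-values.yaml --namespace psmdb
```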


@@ -0,0 +1,805 @@
psmdb-operator:
  enabled: true
  # Default values for psmdb-operator.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.
  replicaCount: 1
  image:
    repository: percona/percona-server-mongodb-operator
    tag: 1.18.0
    pullPolicy: IfNotPresent
  # disableTelemetry: according to
  # https://docs.percona.com/percona-operator-for-mongodb/telemetry.html
  # this is how you can disable telemetry collection
  # default is false which means telemetry will be collected
  disableTelemetry: false
  # set if you want to specify a namespace to watch
  # defaults to `.Release.namespace` if left blank
  # multiple namespaces can be specified and separated by comma
  # watchNamespace:
  # set if you want that watched namespaces are created by helm
  # createNamespace: false
  # set if operator should be deployed in cluster wide mode. defaults to false
  watchAllNamespaces: false
  # rbac: settings for deployer RBAC creation
  rbac:
    # rbac.create: if false RBAC resources should be in place
    create: true
  # serviceAccount: settings for Service Accounts used by the deployer
  serviceAccount:
    # serviceAccount.create: Whether to create the Service Accounts or not
    create: true
    # annotations to add to the service account
    annotations: {}
  # annotations to add to the operator deployment
  annotations: {}
  # labels to add to the operator deployment
  labels: {}
  # annotations to add to the operator pod
  podAnnotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "8080"
  # labels to the operator pod
  podLabels: {}
  podSecurityContext: {}
    # runAsNonRoot: true
    # runAsUser: 2
    # runAsGroup: 2
    # fsGroup: 2
    # fsGroupChangePolicy: "OnRootMismatch"
  securityContext: {}
    # allowPrivilegeEscalation: false
    # capabilities:
    #   drop:
    #     - ALL
    # seccompProfile:
    #   type: RuntimeDefault
  # set if you want to use a different operator name
  # defaults to `percona-server-mongodb-operator`
  # operatorName:
  imagePullSecrets: []
  nameOverride: ""
  fullnameOverride: ""
  env:
    resyncPeriod: 5s
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  logStructured: false
  logLevel: "INFO"
psmdb-db:
  enabled: true
  # Default values for psmdb-cluster.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.
  # Platform type: kubernetes, openshift
  # platform: kubernetes
  # Cluster DNS Suffix
  # clusterServiceDNSSuffix: svc.cluster.local
  # clusterServiceDNSMode: "Internal"
  finalizers:
    ## Set this if you want that operator deletes the primary pod last
    - percona.com/delete-psmdb-pods-in-order
    ## Set this if you want to delete database persistent volumes on cluster deletion
    # - percona.com/delete-psmdb-pvc
    ## Set this if you want to delete all pitr chunks on cluster deletion
    # - percona.com/delete-pitr-chunks
  nameOverride: ""
  fullnameOverride: ""
  crVersion: 1.18.0
  pause: false
  unmanaged: false
  unsafeFlags:
    tls: false
    replsetSize: true
    mongosSize: false
    terminationGracePeriod: false
    backupIfUnhealthy: false
  enableVolumeExpansion: false
  annotations: {}
  # ignoreAnnotations:
  #   - service.beta.kubernetes.io/aws-load-balancer-backend-protocol
  # ignoreLabels:
  #   - rack
  multiCluster:
    enabled: false
    # DNSSuffix: svc.clusterset.local
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 2 * * *"
    setFCV: false
  image:
    repository: percona/percona-server-mongodb
    tag: 7.0.14-8-multi
  imagePullPolicy: Always
  # imagePullSecrets: []
  # initImage:
  #   repository: percona/percona-server-mongodb-operator
  #   tag: 1.18.0
  # initContainerSecurityContext: {}
  # tls:
  #   mode: preferTLS
  #   # 90 days in hours
  #   certValidityDuration: 2160h
  #   allowInvalidCertificates: true
  #   issuerConf:
  #     name: special-selfsigned-issuer
  #     kind: ClusterIssuer
  #     group: cert-manager.io
  secrets: {}
    # If you set users secret here the operator will use existing one or generate random values
    # If not set the operator generates the default secret with name <cluster_name>-secrets
    # users: my-cluster-name-secrets
    # encryptionKey: my-cluster-name-mongodb-encryption-key
    # keyFile: my-cluster-name-mongodb-keyfile
    # vault: my-cluster-name-vault
    # ldapSecret: my-ldap-secret
    # sse: my-cluster-name-sse
  pmm:
    enabled: false
    image:
      repository: percona/pmm-client
      tag: 2.43.2
    serverHost: monitoring-service
    # mongodParams: ""
    # mongosParams: ""
    # resources: {}
    # containerSecurityContext: {}
  replsets:
    rs0:
      name: rs0
      size: 3
      # terminationGracePeriodSeconds: 300
      # externalNodes:
      #   - host: 34.124.76.90
      #   - host: 34.124.76.91
      #     port: 27017
      #     votes: 0
      #     priority: 0
      #   - host: 34.124.76.92
      # configuration: |
      #   operationProfiling:
      #     mode: slowOp
      #   systemLog:
      #     verbosity: 1
      # serviceAccountName: percona-server-mongodb-operator
      # topologySpreadConstraints:
      #   - labelSelector:
      #       matchLabels:
      #         app.kubernetes.io/name: percona-server-mongodb
      #     maxSkew: 1
      #     topologyKey: kubernetes.io/hostname
      #     whenUnsatisfiable: DoNotSchedule
      # replsetOverrides:
      #   my-cluster-name-rs0-0:
      #     host: my-cluster-name-rs0-0.example.net:27017
      #     tags:
      #       key: value-0
      #   my-cluster-name-rs0-1:
      #     host: my-cluster-name-rs0-1.example.net:27017
      #     tags:
      #       key: value-1
      #   my-cluster-name-rs0-2:
      #     host: my-cluster-name-rs0-2.example.net:27017
      #     tags:
      #       key: value-2
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
        # advanced:
        #   podAffinity:
        #     requiredDuringSchedulingIgnoredDuringExecution:
        #       - labelSelector:
        #           matchExpressions:
        #             - key: security
        #               operator: In
        #               values:
        #                 - S1
        #         topologyKey: failure-domain.beta.kubernetes.io/zone
      # tolerations: []
      # primaryPreferTagSelector:
      #   region: us-west-2
      #   zone: us-west-2c
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # podSecurityContext: {}
      # containerSecurityContext: {}
      # nodeSelector: {}
      # livenessProbe:
      #   failureThreshold: 4
      #   initialDelaySeconds: 60
      #   periodSeconds: 30
      #   timeoutSeconds: 10
      #   startupDelaySeconds: 7200
      # readinessProbe:
      #   failureThreshold: 8
      #   initialDelaySeconds: 10
      #   periodSeconds: 3
      #   successThreshold: 1
      #   timeoutSeconds: 2
      # runtimeClassName: image-rc
      # storage:
      #   engine: wiredTiger
      #   wiredTiger:
      #     engineConfig:
      #       cacheSizeRatio: 0.5
      #       directoryForIndexes: false
      #       journalCompressor: snappy
      #     collectionConfig:
      #       blockCompressor: snappy
      #     indexConfig:
      #       prefixCompression: true
      #   inMemory:
      #     engineConfig:
      #       inMemorySizeRatio: 0.5
      # sidecars:
      #   - image: busybox
      #     command: ["/bin/sh"]
      #     args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
      #     name: rs-sidecar-1
      #     volumeMounts:
      #       - mountPath: /volume1
      #         name: sidecar-volume-claim
      #       - mountPath: /secret
      #         name: sidecar-secret
      #       - mountPath: /configmap
      #         name: sidecar-config
      # sidecarVolumes:
      #   - name: sidecar-secret
      #     secret:
      #       secretName: mysecret
      #   - name: sidecar-config
      #     configMap:
      #       name: myconfigmap
      # sidecarPVCs:
      #   - apiVersion: v1
      #     kind: PersistentVolumeClaim
      #     metadata:
      #       name: sidecar-volume-claim
      #     spec:
      #       resources:
      #         requests:
      #           storage: 1Gi
      #       volumeMode: Filesystem
      #       accessModes:
      #         - ReadWriteOnce
      podDisruptionBudget:
        maxUnavailable: 1
      # splitHorizons:
      #   my-cluster-name-rs0-0:
      #     external: rs0-0.mycluster.xyz
      #     external-2: rs0-0.mycluster2.xyz
      #   my-cluster-name-rs0-1:
      #     external: rs0-1.mycluster.xyz
      #     external-2: rs0-1.mycluster2.xyz
      #   my-cluster-name-rs0-2:
      #     external: rs0-2.mycluster.xyz
      #     external-2: rs0-2.mycluster2.xyz
      expose:
        enabled: false
        type: ClusterIP
        # loadBalancerIP: 10.0.0.0
        # loadBalancerSourceRanges:
        #   - 10.0.0.0/8
        # annotations:
        #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # labels:
        #   some-label: some-key
        # internalTrafficPolicy: Local
      # schedulerName: ""
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        # emptyDir: {}
        # hostPath:
        #   path: /data
        #   type: Directory
        pvc:
          # annotations:
          #   volume.beta.kubernetes.io/storage-class: example-hostpath
          # labels:
          #   rack: rack-22
          # storageClassName: standard
          # accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 3Gi
      # hostAliases:
      #   - ip: "10.10.0.2"
      #     hostnames:
      #       - "host1"
      #       - "host2"
      nonvoting:
        enabled: false
        # podSecurityContext: {}
        # containerSecurityContext: {}
        size: 3
        # configuration: |
        #   operationProfiling:
        #     mode: slowOp
        #   systemLog:
        #     verbosity: 1
        # serviceAccountName: percona-server-mongodb-operator
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
          # advanced:
          #   podAffinity:
          #     requiredDuringSchedulingIgnoredDuringExecution:
          #       - labelSelector:
          #           matchExpressions:
          #             - key: security
          #               operator: In
          #               values:
          #                 - S1
          #         topologyKey: failure-domain.beta.kubernetes.io/zone
        # tolerations: []
        # priorityClass: ""
        # annotations: {}
        # labels: {}
        # nodeSelector: {}
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: "300m"
            memory: "0.5G"
          requests:
            cpu: "300m"
            memory: "0.5G"
        volumeSpec:
          # emptyDir: {}
          # hostPath:
          #   path: /data
          #   type: Directory
          pvc:
            # annotations:
            #   volume.beta.kubernetes.io/storage-class: example-hostpath
            # labels:
            #   rack: rack-22
            # storageClassName: standard
            # accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 3Gi
      arbiter:
        enabled: false
        size: 1
        # serviceAccountName: percona-server-mongodb-operator
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
          # advanced:
          #   podAffinity:
          #     requiredDuringSchedulingIgnoredDuringExecution:
          #       - labelSelector:
          #           matchExpressions:
          #             - key: security
          #               operator: In
          #               values:
          #                 - S1
          #         topologyKey: failure-domain.beta.kubernetes.io/zone
        # tolerations: []
        # priorityClass: ""
        # annotations: {}
        # labels: {}
        # nodeSelector: {}
  sharding:
    enabled: true
    balancer:
      enabled: true
    configrs:
      size: 3
      # terminationGracePeriodSeconds: 300
      # externalNodes:
      #   - host: 34.124.76.90
      #   - host: 34.124.76.91
      #     port: 27017
      #     votes: 0
      #     priority: 0
      #   - host: 34.124.76.92
      # configuration: |
      #   operationProfiling:
      #     mode: slowOp
      #   systemLog:
      #     verbosity: 1
      # serviceAccountName: percona-server-mongodb-operator
      # topologySpreadConstraints:
      #   - labelSelector:
      #       matchLabels:
      #         app.kubernetes.io/name: percona-server-mongodb
      #     maxSkew: 1
      #     topologyKey: kubernetes.io/hostname
      #     whenUnsatisfiable: DoNotSchedule
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
        # advanced:
        #   podAffinity:
        #     requiredDuringSchedulingIgnoredDuringExecution:
        #       - labelSelector:
        #           matchExpressions:
        #             - key: security
        #               operator: In
        #               values:
        #                 - S1
        #         topologyKey: failure-domain.beta.kubernetes.io/zone
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # podSecurityContext: {}
      # containerSecurityContext: {}
      # nodeSelector: {}
      # livenessProbe: {}
      # readinessProbe: {}
      # runtimeClassName: image-rc
      # sidecars:
      #   - image: busybox
      #     command: ["/bin/sh"]
      #     args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
      #     name: rs-sidecar-1
      #     volumeMounts:
      #       - mountPath: /volume1
      #         name: sidecar-volume-claim
      # sidecarPVCs: []
      # sidecarVolumes: []
      podDisruptionBudget:
        maxUnavailable: 1
      expose:
        enabled: false
        type: ClusterIP
        # loadBalancerIP: 10.0.0.0
        # loadBalancerSourceRanges:
        #   - 10.0.0.0/8
        # annotations:
        #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # labels:
        #   some-label: some-key
        # internalTrafficPolicy: Local
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        # emptyDir: {}
        # hostPath:
        #   path: /data
        #   type: Directory
        pvc:
          # annotations:
          #   volume.beta.kubernetes.io/storage-class: example-hostpath
          # labels:
          #   rack: rack-22
          # storageClassName: standard
          # accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 3Gi
      # hostAliases:
      #   - ip: "10.10.0.2"
      #     hostnames:
      #       - "host1"
      #       - "host2"
    mongos:
      size: 3
      # terminationGracePeriodSeconds: 300
      # configuration: |
      #   systemLog:
      #     verbosity: 1
      # serviceAccountName: percona-server-mongodb-operator
      # topologySpreadConstraints:
      #   - labelSelector:
      #       matchLabels:
      #         app.kubernetes.io/name: percona-server-mongodb
      #     maxSkew: 1
      #     topologyKey: kubernetes.io/hostname
      #     whenUnsatisfiable: DoNotSchedule
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
        # advanced:
        #   podAffinity:
        #     requiredDuringSchedulingIgnoredDuringExecution:
        #       - labelSelector:
        #           matchExpressions:
        #             - key: security
        #               operator: In
        #               values:
        #                 - S1
        #         topologyKey: failure-domain.beta.kubernetes.io/zone
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # podSecurityContext: {}
      # containerSecurityContext: {}
      # nodeSelector: {}
      # livenessProbe: {}
      # readinessProbe: {}
      # runtimeClassName: image-rc
      # sidecars:
      #   - image: busybox
      #     command: ["/bin/sh"]
      #     args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
      #     name: rs-sidecar-1
      #     volumeMounts:
      #       - mountPath: /volume1
      #         name: sidecar-volume-claim
      # sidecarPVCs: []
      # sidecarVolumes: []
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      expose:
        enabled: false
        type: ClusterIP
        # loadBalancerIP: 10.0.0.0/8
        # loadBalancerSourceRanges:
        #   - 10.0.0.0/8
        # annotations:
        #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # labels:
        #   some-label: some-key
        # internalTrafficPolicy: Local
        # nodePort: 32017
      # auditLog:
      #   destination: file
      #   format: BSON
      #   filter: '{}'
      # hostAliases:
      #   - ip: "10.10.0.2"
      #     hostnames:
      #       - "host1"
      #       - "host2"
  # users:
  #   - name: my-user
  #     db: admin
  #     passwordSecretRef:
  #       name: my-user-password
  #       key: my-user-password-key
  #     roles:
  #       - name: clusterAdmin
  #         db: admin
  #       - name: userAdminAnyDatabase
  #         db: admin
  #   - name: my-usr
  #     db: admin
  #     passwordSecretRef:
  #       name: my-user-pwd
  #       key: my-user-pwd-key
  #     roles:
  #       - name: dbOwner
  #         db: sometest
  # roles:
  #   - role: myClusterwideAdmin
  #     db: admin
  #     privileges:
  #       - resource:
  #           cluster: true
  #         actions:
  #           - addShard
  #       - resource:
  #           db: config
  #           collection: ''
  #         actions:
  #           - find
  #           - update
  #           - insert
  #           - remove
  #     roles:
  #       - role: read
  #         db: admin
  #   - role: my-role
  #     db: myDb
  #     privileges:
  #       - resource:
  #           db: ''
  #           collection: ''
  #         actions:
  #           - find
  #     authenticationRestrictions:
  #       - clientSource:
  #           - 127.0.0.1
  #         serverAddress:
  #           - 127.0.0.1
  backup:
    enabled: false
    image:
      repository: percona/percona-backup-mongodb
      tag: 2.7.0-multi
    # annotations:
    #   iam.amazonaws.com/role: role-arn
    # podSecurityContext: {}
    # containerSecurityContext: {}
    # resources:
    #   limits:
    #     cpu: "300m"
    #     memory: "1.2G"
    #   requests:
    #     cpu: "300m"
    #     memory: "1G"
    storages:
      # s3-us-west:
      #   type: s3
      #   s3:
      #     bucket: S3-BACKUP-BUCKET-NAME-HERE
      #     credentialsSecret: my-cluster-name-backup-s3
      #     serverSideEncryption:
      #       kmsKeyID: 1234abcd-12ab-34cd-56ef-1234567890ab
      #       sseAlgorithm: aws:kms
      #       sseCustomerAlgorithm: AES256
      #       sseCustomerKey: Y3VzdG9tZXIta2V5
      #     retryer:
      #       numMaxRetries: 3
      #       minRetryDelay: 30ms
      #       maxRetryDelay: 5m
      #     region: us-west-2
      #     prefix: ""
      #     uploadPartSize: 10485760
      #     maxUploadParts: 10000
      #     storageClass: STANDARD
      #     insecureSkipTLSVerify: false
      # minio:
      #   type: s3
      #   s3:
      #     bucket: MINIO-BACKUP-BUCKET-NAME-HERE
      #     region: us-east-1
      #     credentialsSecret: my-cluster-name-backup-minio
      #     endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
      #     prefix: ""
      # azure-blob:
      #   type: azure
      #   azure:
      #     container: percona-container
      #     prefix: backups
      #     endpointUrl: https://perconasa.blob.core.windows.net
      #     credentialsSecret: perconasasecret
    pitr:
      enabled: false
      oplogOnly: false
      # oplogSpanMin: 10
      # compressionType: gzip
      # compressionLevel: 6
    # configuration:
    #   backupOptions:
    #     priority:
    #       "localhost:28019": 2.5
    #       "localhost:27018": 2.5
    #     timeouts:
    #       startingStatus: 33
    #     oplogSpanMin: 10
    #   restoreOptions:
    #     batchSize: 500
    #     numInsertionWorkers: 10
    #     numDownloadWorkers: 4
    #     maxDownloadBufferMb: 0
    #     downloadChunkMb: 32
    #     mongodLocation: /usr/bin/mongo
    #     mongodLocationMap:
    #       "node01:2017": /usr/bin/mongo
    #       "node03:27017": /usr/bin/mongo
    tasks:
      # - name: daily-s3-us-west
      #   enabled: true
      #   schedule: "0 0 * * *"
      #   keep: 3
      #   storageName: s3-us-west
      #   compressionType: gzip
      # - name: weekly-s3-us-west
      #   enabled: false
      #   schedule: "0 0 * * 0"
      #   keep: 5
      #   storageName: s3-us-west
      #   compressionType: gzip
      # - name: weekly-s3-us-west-physical
      #   enabled: false
      #   schedule: "0 5 * * 0"
      #   keep: 5
      #   type: physical
      #   storageName: s3-us-west
      #   compressionType: gzip
      #   compressionLevel: 6
  # If you set systemUsers here the secret will be constructed by helm with these values
  # systemUsers:
  #   MONGODB_BACKUP_USER: backup
  #   MONGODB_BACKUP_PASSWORD: backup123456
  #   MONGODB_DATABASE_ADMIN_USER: databaseAdmin
  #   MONGODB_DATABASE_ADMIN_PASSWORD: databaseAdmin123456
  #   MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
  #   MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
  #   MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
  #   MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
  #   MONGODB_USER_ADMIN_USER: userAdmin
  #   MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
  #   PMM_SERVER_API_KEY: apikey
  #   # PMM_SERVER_USER: admin
  #   # PMM_SERVER_PASSWORD: admin
backup:
  enabled: true
  annotations:
    description: "test"
  name: backup
  labels:
    app: mongo-backup
    environment: testing
  clusterName: mdb-db-psmdb-db
  storageName: azure-blob
  type: logical
restore:
  enabled: true
  annotations:
    description: "test"
  name: restore1
  labels:
    app: mongo-restore
    environment: testing
  clusterName: mdb-db-psmdb-db
  backupName: backup
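
As a usage sketch, assuming this file is saved as `values.yaml` next to an umbrella chart that declares `psmdb-operator` and `psmdb-db` as dependencies (the release name `mdb` and the namespace are placeholders):

```bash
$ helm dependency update .
$ helm install mdb . -f values.yaml --namespace psmdb --create-namespace
```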


@@ -1,20 +0,0 @@
---
apiVersion: v1
description: Provides easy redis setup definitions for Kubernetes services, and deployment.
engine: gotpl
maintainers:
- name: Opstree Solutions
name: redis-cluster
sources:
- https://github.com/ot-container-kit/redis-operator
version: 0.10.0
appVersion: "0.10.0"
home: https://github.com/ot-container-kit/redis-operator
keywords:
- operator
- redis
- opstree
- kubernetes
- openshift
- redis-exporter
icon: https://github.com/OT-CONTAINER-KIT/redis-operator/raw/master/static/redis-operator-logo.svg


@@ -1,62 +0,0 @@
## Redis Cluster
Redis is a key-value-based distributed database. This Helm chart sets up a Redis cluster and requires the [Redis Operator](../redis-operator) to be running inside the Kubernetes cluster. The Redis cluster definition can be modified or changed via [values.yaml](./values.yaml).
```shell
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
$ helm install <my-release> ot-helm/redis-cluster \
--set redisCluster.clusterSize=3 --namespace <namespace>
```
The Redis setup can be upgraded by using the `helm upgrade` command:
```shell
$ helm upgrade <my-release> ot-helm/redis-cluster --install \
--set redisCluster.clusterSize=5 --namespace <namespace>
```
For uninstalling the chart:
```shell
$ helm delete <my-release> --namespace <namespace>
```
### Prerequisites
- Kubernetes 1.15+
- Helm 3.X
- Redis Operator 0.7.0
### Parameters
|**Name**|**Default Value**|**Description**|
|--------|-----------------|---------------|
|`imagePullSecrets` | [] | List of image pull secrets, in case the redis image is pulled from a private registry |
|`redisCluster.clusterSize` | 3 | Size of the redis cluster leader and follower nodes |
|`redisCluster.secretName` | redis-secret | Name of the existing secret in Kubernetes |
|`redisCluster.secretKey` | password | Name of the existing secret key in Kubernetes |
|`redisCluster.image` | quay.io/opstree/redis | Name of the redis image |
|`redisCluster.tag` | v6.2 | Tag of the redis image |
|`redisCluster.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis image |
|`redisCluster.leaderServiceType` | ClusterIP | Kubernetes service type for Redis Leader |
|`redisCluster.followerServiceType` | ClusterIP | Kubernetes service type for Redis Follower |
|`externalService.enabled`| false | If redis service needs to be exposed using LoadBalancer or NodePort |
|`externalService.annotations`| {} | Kubernetes service related annotations |
|`externalService.serviceType` | NodePort | Kubernetes service type for exposing service, values - ClusterIP, NodePort, and LoadBalancer |
|`externalService.port` | 6379 | Port number on which redis external service should be exposed |
|`serviceMonitor.enabled` | false | Whether to create a ServiceMonitor to monitor redis with Prometheus |
|`serviceMonitor.interval` | 30s | Interval at which metrics should be scraped. |
|`serviceMonitor.scrapeTimeout` | 10s | Timeout after which the scrape is ended |
|`serviceMonitor.namespace` | monitoring | Namespace in which Prometheus operator is running |
|`redisExporter.enabled` | true | Whether the redis exporter should be deployed or not |
|`redisExporter.image` | quay.io/opstree/redis-exporter | Name of the redis exporter image |
|`redisExporter.tag` | v6.2 | Tag of the redis exporter image |
|`redisExporter.imagePullPolicy` | IfNotPresent | Image Pull Policy of the redis exporter image |
|`redisExporter.env` | [] | Extra environment variables which need to be added to the redis exporter |
|`sidecars` | [] | Sidecar for redis pods |
|`nodeSelector` | {} | NodeSelector for redis statefulset |
|`priorityClassName`| "" | Priority class name for the redis statefulset |
|`storageSpec` | {} | Storage configuration for redis setup |
|`securityContext` | {} | Security Context for redis pods for changing system or kernel level parameters |
|`affinity` | {} | Affinity for node and pods for redis statefulset |
|`tolerations` | [] | Tolerations for redis statefulset |
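
For example, persistent storage can be configured through `storageSpec`. The sketch below assumes the operator's `volumeClaimTemplate` layout; verify the exact shape against this chart's [values.yaml](./values.yaml):

```shell
$ cat > redis-cluster-values.yaml <<'EOF'
redisCluster:
  clusterSize: 3
storageSpec:
  volumeClaimTemplate:
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
$ helm install <my-release> ot-helm/redis-cluster \
    -f redis-cluster-values.yaml --namespace <namespace>
```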


@@ -1,29 +0,0 @@
{{- if and (gt (int .Values.redisCluster.follower.replicas) 0) (eq .Values.externalService.enabled true) }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-follower-external-service
  {{- if .Values.externalService.annotations }}
  annotations:
{{ toYaml .Values.externalService.annotations | indent 4 }}
  {{- end }}
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/component: middleware
spec:
  type: {{ .Values.externalService.serviceType }}
  selector:
    app: {{ .Release.Name }}-follower
    redis_setup_type: follower
    role: follower
  ports:
    - protocol: TCP
      port: {{ .Values.externalService.port }}
      targetPort: 6379
      name: client
{{- end }}
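
This Service is rendered only when both guard conditions hold: the follower replica count is greater than zero and `externalService.enabled` is `true`. A sketch of enabling it with the parameters documented in the chart README:

```shell
$ helm upgrade <my-release> ot-helm/redis-cluster --install \
    --set externalService.enabled=true \
    --set externalService.serviceType=NodePort \
    --set externalService.port=6379 --namespace <namespace>
```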


@@ -1,27 +0,0 @@
{{- if and (eq .Values.serviceMonitor.enabled true) (gt (int .Values.redisCluster.follower.replicas) 0) }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ .Release.Name }}-follower-prometheus-monitoring
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/component: middleware
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-follower
      redis_setup_type: cluster
      role: follower
  endpoints:
    - port: redis-exporter
      interval: {{ .Values.serviceMonitor.interval }}
      scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
  namespaceSelector:
    matchNames:
      - {{ .Values.serviceMonitor.namespace }}
{{- end }}
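
Similarly, this ServiceMonitor is rendered only when `serviceMonitor.enabled` is `true` and followers exist, and it requires the Prometheus Operator CRDs to be present in the cluster. A sketch using the README's `serviceMonitor` parameters:

```shell
$ helm upgrade <my-release> ot-helm/redis-cluster --install \
    --set serviceMonitor.enabled=true \
    --set serviceMonitor.interval=30s \
    --set serviceMonitor.scrapeTimeout=10s \
    --set serviceMonitor.namespace=monitoring --namespace <namespace>
```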

Some files were not shown because too many files have changed in this diff.