Compare commits

...

80 Commits

Author SHA1 Message Date
Jorge Turrado Ferrero d121527052
chore: use ARM64 machine to build e2e test images (#7003)
Signed-off-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-08-21 13:13:51 +02:00
Zhenghan Zhou 06f92b0319
Add error and event for mismatching input property (#6763)
* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Update pkg/scalers/scalersconfig/typed_config_test.go

Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
Signed-off-by: Zhenghan Zhou <iammrzhouzhenghan@gmail.com>

* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

---------

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>
Signed-off-by: Zhenghan Zhou <iammrzhouzhenghan@gmail.com>
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
Co-authored-by: zhenghanzhou <zhenghanzhou@microsoft.com>
Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-08-21 10:29:50 +02:00
Jorge Turrado Ferrero c63413fde0
chore: Bump k8s tested versions (#6988)
Signed-off-by: Jorge Turrado <jorge.turrado@mail.schwarz>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-08-20 09:00:19 +02:00
Rick Brouwer 265f27589e
fix: prevent e2e test reset when adding non-skip labels (#6930)
* fix: prevent e2e test reset when adding non-skip labels

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update .github/workflows/pr-e2e-checker.yml

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-08-19 10:36:54 +02:00
Rick Brouwer 9204b23edc
Refactor huawei cloudeye scaler (#6978)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-19 10:32:07 +02:00
John Kyros 794fe8089b
Kafka e2e: Bump Strimzi/Kafka for Kube 1.33, fix offset test flakes (#6929)
* Update strimzi to 0.47.0 for Kube 1.33, fix setup

Kube 1.33 added an emulationMajor field to the version API, which
breaks the version parsing of Strimzi releases older than 0.47.0 and causes
our Strimzi setup to fail to start on Kube >= 1.33. Additionally, the
failure mode was silent, as all we currently test for is
whether the Helm chart for Strimzi was successfully applied.

To rectify this, this does the following to the kafka scaler tests:
- Waits for the Strimzi deployment to become available on setup
- Bumps Strimzi version for tests to 0.47.0
- Bumps Kafka version for tests to 4.0.0 (3.4.0 is too old for Strimzi)
- Configures Kafka for KRaft since Zookeeper has been deprecated
- Disables topic finalization so topics don't block namespace deletion

Signed-off-by: John Kyros <jkyros@redhat.com>

* Move Kafka offset tests to own consumer group

The Kafka offset tests flake in some environments if you move through
the test cases too fast -- the state from the consumer group in the
previous test seems to leak through to the next one because they
share a consumer group, and thus share offsets.

This fixes these flakes by moving each of these tests to its own
consumer group to prevent this test pollution. They will still share the
same topic, which is fine; offsets are consumer-group specific.

Signed-off-by: John Kyros <jkyros@redhat.com>

* Delete strimzi CRDs during e2e test cleanup

Strimzi installs its CRDs in the cluster when it does a helm install
during the e2e test run, but helm is a big chicken and won't overwrite
or remove any CRDs during cleanup/reinstall, so we're stuck with the
first versions helm installed unless something explicitly removes them.

That wouldn't be a problem if we were grabbing a fresh cluster every
time, but we're not. We just scale up an existing one and create some
testing namespaces, so those old CRDs conflict with the newer versions
of Strimzi we're trying to move to.

This just adds Strimzi CRD cleanup to the e2e cleanup script so the CRDs
get removed at the end of a test run and the next run can install the
proper ones.

Signed-off-by: John Kyros <jkyros@redhat.com>

---------

Signed-off-by: John Kyros <jkyros@redhat.com>
2025-08-18 16:14:53 +02:00
Kodai Matsumoto c2f562f74c
fix: cron_scaler returns zero metric value by default (#6888)
Signed-off-by: frauniki <frauniki@sinoa.jp>
2025-08-17 18:20:39 +00:00
dependabot[bot] edd72b32ee
chore(deps): bump sigstore/cosign-installer from 3.8.2 to 3.9.2 (#6952)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 3.9.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](3454372f43...d58896d6a1)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-17 19:38:02 +02:00
dependabot[bot] 5258fb0ef2
chore(deps): bump tj-actions/branch-names from 8.2.1 to 9.0.2 (#6951)
Bumps [tj-actions/branch-names](https://github.com/tj-actions/branch-names) from 8.2.1 to 9.0.2.
- [Release notes](https://github.com/tj-actions/branch-names/releases)
- [Changelog](https://github.com/tj-actions/branch-names/blob/main/HISTORY.md)
- [Commits](dde14ac574...5250492686)

---
updated-dependencies:
- dependency-name: tj-actions/branch-names
  dependency-version: 9.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-17 19:37:09 +02:00
dependabot[bot] 8cb622e563
chore(deps): bump github/codeql-action from 3.28.18 to 3.29.5 (#6953)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.18 to 3.29.5.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](ff0a06e83c...51f77329af)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.5
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-17 19:36:22 +02:00
dependabot[bot] 9151f0acf7
chore(deps): bump aquasecurity/trivy-action from 0.30.0 to 0.32.0 (#6954)
Bumps [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action) from 0.30.0 to 0.32.0.
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](6c175e9c40...dc5a429b52)

---
updated-dependencies:
- dependency-name: aquasecurity/trivy-action
  dependency-version: 0.32.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-17 19:36:11 +02:00
Edwin Smulders d899439deb
Fix incorrect URL encoding in RabbitMQ vhosts containing %2f (#6964)
Signed-off-by: Edwin Smulders <edwin.smulders@kadaster.nl>
2025-08-17 11:50:59 +02:00
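The RabbitMQ vhost fix above concerns percent-encoding of vhost names: the default vhost is literally "/", which must appear exactly once-encoded as %2F in the management API path, and re-encoding an already-encoded value produces %252F. A minimal Go sketch of the pitfall (the helper name is hypothetical; this is not KEDA's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeVhost percent-encodes a RabbitMQ vhost exactly once, so the
// default vhost "/" becomes "%2F" inside the management API path.
// Calling it on an already-encoded value shows the double-encoding bug.
func encodeVhost(vhost string) string {
	return url.PathEscape(vhost)
}

func main() {
	fmt.Println(encodeVhost("/"))   // prints %2F   (correct single encoding)
	fmt.Println(encodeVhost("%2F")) // prints %252F (double-encoded: the bug)
}
```

The takeaway is that the vhost must be encoded exactly once at the point where the request URL is assembled, never on a value that may already contain %2f.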
alfonso-chacon 56a023d958
New Solace Direct Messaging Scaler (#6546)
* * Solace Direct Messaging Scaler

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Fix D-1 queue calculation

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Refactor variables

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Include D-1 messages in calculation

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Modify url to be able to use https protocol

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Added UnsafeSSL parameter to avoid SSL validations

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Added a factor to multiply the weight of queue messages to reduce discarded messages while scaling up

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Refactor and enhancements

- Added Configuration validations
- Variables Refactor
- Multiple URLs
- Remove commented blocks

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Scaler E2E testing

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Adding New Solace Direct Messaging Scaler in CHANGELOG.md

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Code Cleanup - CodeQL/StaticChecks

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Code cleanup /golangci-lint

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Correct order in CHANGELOG.md

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Remove trailing white spaces

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Cleanup code - remove comment

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Cleanup code - remove comment

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Cleanup code - remove comment

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Cleanup code - change log level

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * E2E tests - reduce time in scale down cfg

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * E2E fix typo

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Avoid reading secrets from metadata

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Remove constant used once

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Remove constant used once on logger init

Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Code cleanup recommendations

- Move regex to pkg level
- Remove unnecessary struct and interface

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Code cleanup

Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Code cleanup

Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Updated to sync with other sol scaler

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Remove unnecessary trailing newline

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* Remove optional keyword for optional fields

Co-authored-by: Jan Wozniak <wozniak.jan@gmail.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * Code fixes

- move unused variables to test file
- return JSON unmarshal error

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Removing redundant validation

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * Remove unnecessary new line

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* fix triggerMetadata syntax

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* Update pkg/scalers/solace_dm_scaler.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* Update tests/scalers/solace/direct-messaging/solace_dm_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* Update tests/scalers/solace/direct-messaging/solace_dm_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>

* * fix a typo and change the ScaleUp/Down to ScaleIn/Out

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

* * fix a typo and change the ScaleUp/Down to ScaleIn/Out

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>

---------

Signed-off-by: Alfonso Chacón <alfonso.chacon@solace.com>
Signed-off-by: alfonso-chacon <alfonso.chacon@solace.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
Co-authored-by: Jan Wozniak <wozniak.jan@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-08 11:24:08 +02:00
Rick Brouwer 48db78a9e3
refactor pulsar scaler (#6848)
* refactor pulsar scaler

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update after feedback

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-07 14:23:02 +02:00
SC 4b5c62e14f
refactor gcp pubsub scaler (#6866)
* Refactor gcp pubsub scaler, typed config

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Minor fix to test

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Minor fix to default value

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Resolve comments

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Force default mode when subscriptionSize is present

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Add deprecate message in changelog. Scaler to use time duration reflect

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

---------

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>
2025-08-07 14:19:33 +02:00
Rick Brouwer adbda7903d
fix: increase timeout for HPA name status update test in the scaledobject controller test (#6924)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-01 16:05:18 +02:00
Rick Brouwer 29b422f5d6
fix: eliminate race conditions in NSQ scaler tests by using separate mock servers (#6914)
* fix: eliminate race conditions in NSQ scaler tests by using separate mock servers

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* update feedback

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-01 16:04:42 +02:00
Rick Brouwer 4bdee08129
feat(ci): Simplify PR bot workflow and improve contributor instructions (#6922)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-01 16:02:30 +02:00
Rick Brouwer 2e8dd6182e
fix: change corn to cron in the status update test (#6936)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-08-01 15:47:13 +02:00
Aniruddh Chaturvedi 8a999fc3a1
refactor: rename temporal scaler files for consistency (#6911)
* refactor: rename temporal scaler files for consistency

Renames the temporal scaler files to follow the convention used by other scalers in the project.

-  ->
-  ->
-  ->

Signed-off-by: Aniruddh Chaturvedi <aniruddhc@google.com>

* address feedback

Signed-off-by: Aniruddh Chaturvedi <aniruddhchaturvedi@gmail.com>

---------

Signed-off-by: Aniruddh Chaturvedi <aniruddhc@google.com>
Signed-off-by: Aniruddh Chaturvedi <aniruddhchaturvedi@gmail.com>
2025-07-27 23:19:10 +02:00
Vaibhav Mittal 246dd2ae7b
(fix): Sumo Logic scaler throws errors when upstream returns empty result (#6906)
* (fix): `sumologic scaler`: don't complain if query results in zero messages

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (fix): `sumologic scaler`: Update tests and make polling interval configurable

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (fix): Increase sumologic scaler e2e pollingInterval

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (fix): Add rate-limiting handling

Signed-off-by: mittalvaibhavs <mittalvaibhavandroid@gmail.com>

* (fix): tuning e2e

Signed-off-by: mittalvaibhavs <mittalvaibhavandroid@gmail.com>

* (fix): semgrep happy

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (fix): update changelog

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* PR comments - removing changelog entry

Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
Signed-off-by: Vaibhav Mittal <53215737+mittalvaibhav1@users.noreply.github.com>

---------

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>
Signed-off-by: mittalvaibhavs <mittalvaibhavandroid@gmail.com>
Signed-off-by: Vaibhav Mittal <53215737+mittalvaibhav1@users.noreply.github.com>
Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-07-25 13:48:17 +02:00
SC ba77c580d0
Fix/3401 mssql e2e (#6801)
* Use kedacore image for mssql e2e test

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Refactor MSSql Scaler to use kedacore test tools mssql image

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Move readme description to other section

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>

* Remove empty row

Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
Signed-off-by: SC <45040952+stevencjy@users.noreply.github.com>

---------

Signed-off-by: Steven Chau <stevenchau1998@outlook.com>
Signed-off-by: SC <45040952+stevencjy@users.noreply.github.com>
Co-authored-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-07-22 20:50:36 +00:00
Rick Brouwer ef7f715225
Refactor MSSQL scaler (#6714)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-07-21 16:38:56 +00:00
SC 2034dbf05f
refactor github runner scaler (#6834)
Signed-off-by: Steven Chau <stevenchau1998@outlook.com>
2025-07-21 17:12:56 +02:00
Zhenghan Zhou 10ad6ce81e
Refactor Azure Event Hub scaler (#6908)
Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>
Co-authored-by: zhenghanzhou <zhenghanzhou@microsoft.com>
2025-07-21 17:11:34 +02:00
Pieter Oliver cf579d4763
Misc typos (#6909)
Signed-off-by: Pieter Oliver <pieter.oliver@nourishcare.com>
2025-07-21 17:10:44 +02:00
Dao Thanh Tung a2b7f6e628
Refactor GCP storage scaler (#6868)
* Refactor GCP storage scaler

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Fix ci linting

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

---------

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>
2025-07-21 09:37:03 +02:00
aliaqel-stripe 1d5f3c56a6
Fix dev container building (#6905)
* Fix dev container building

Signed-off-by: Ali Aqel <aliaqel@stripe.com>

* remove changelog item

Signed-off-by: Ali Aqel <aliaqel@stripe.com>

---------

Signed-off-by: Ali Aqel <aliaqel@stripe.com>
2025-07-14 22:01:36 +02:00
Sergio Conde Gómez ef7da8de03
Add annotations to exclude labels from generated HPA and Job objects (#6851)
* Add annotations to exclude labels from being propagated to generated HPA and Job objects

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Improve testing

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Fix tests label type

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Fix job fetch helper function for e2e tests

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Fix scaled object excluded labels tests

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Use cron trigger instead of cpu on e2e test

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Use a job without secret dependencies in e2e tests

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

* Fix annotation in tests

Signed-off-by: Sergio Conde <skgsergio@gmail.com>

---------

Signed-off-by: Sergio Conde <skgsergio@gmail.com>
2025-07-11 16:30:16 +02:00
Vaibhav Mittal f5dd08a053
(feat): Add Sumo Logic scaler (#6736)
* (feat): Add barebones for sumologic scaler

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): refactoring...

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): add client support for metric queries

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): process and return log search response as floats

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fix tests

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* Adding sumologic scaler file

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): refactoring sumologic scaler

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): adopt a logging framework in the sumologic package

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fix tests..

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fix metrics queries

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fixing metric name

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Adding resultfield and rollup for query result method

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): allow users to specify result field and rollup type

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): use cookies as recommended

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Updating test cases for sumologic_scaler

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Add support for multi metrics queries

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fix tests

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): bug fixes

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): fix typo + readability improvements

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): pre-commit changes + add tests

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Fix PR checks

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Update changelog

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): PR checks

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): use TypedConfig pattern

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): remove unnecessary fields

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Use builder

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Address comments

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Fix e2e

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Address comments

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

* (feat): Trigger tests again

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>

---------

Signed-off-by: mittalvaibhav1 <mittalvaibhavandroid@gmail.com>
Co-authored-by: Anoop Bansal <anoop.bansal@sumologic.com>
2025-07-10 13:49:31 +02:00
Jorge Turrado Ferrero a239d2459a
fix: e2e test checks label value by name and not by index (#6858)
* fix: e2e test checks label value by name and not by index

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* fix: e2e test checks label value by name and not by index

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* fix: e2e test checks label value by name and not by index

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-07-08 23:06:20 +02:00
Alexander Huck 3d2100530f
fix: add missing omitempty tags in the AuthPodIdentity struct (#6780)
Signed-off-by: Alexander Huck <alexander.huck@hello-charles.com>
Signed-off-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-07-08 23:05:22 +02:00
Zhenghan Zhou c004d8ce8b
Refactor Azure Log Analytics Scaler (#6701)
* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Fix

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

* Update

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>

---------

Signed-off-by: zhenghanzhou <zhenghanzhou@microsoft.com>
Co-authored-by: zhenghanzhou <zhenghanzhou@microsoft.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-07-07 20:26:03 +00:00
Rick Brouwer 980637ebbf
fix: resolve race condition in NSQ scaler tests by fixing atomic counter usage (#6843)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-07-07 20:24:00 +02:00
Jan Wozniak 6ea8d34fb0
chore: changelog and issue template v2.17.2 (#6845)
Signed-off-by: Jan Wozniak <wozniak.jan@gmail.com>
2025-07-07 18:08:39 +02:00
Justin 967c5d6211
Update hpaName in ScaledObject status when HPA already exists (#6860)
When the ScaledObject adopts an HPA or if the hpaName in the
ScaledObject status is removed for any reason, the hpaName status will
no longer get set. It is only set when the HPA is created by the
ScaledObject.

We add setting the HPA name in the ScaledObject status to the
regular reconciliation loop for the ScaledObject.

See https://github.com/kedacore/keda/issues/6336

Signed-off-by: Justin Miron <justin.miron@reddit.com>
2025-06-28 17:08:43 +02:00
Jorge Turrado Ferrero 37a0f2c5a7
fix: e2e test on ARM/s390x are correctly reported during PRs (#6856)
Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-06-23 14:30:52 +02:00
Dao Thanh Tung 75f589a138
refactor external scaler (#6829)
* refactor external scaler

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Fix ci lint error

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Fix CI Failure

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

---------

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-06-21 20:22:56 +00:00
Dao Thanh Tung eab39a9349
Fix bug with datadogNamespace config (#6827)
* Fix DCO signing

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Fix linting

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Revert

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

---------

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>
2025-06-21 21:48:10 +02:00
Max Cao 30a37e6575
test: refactor cloudevent_source_test.go (#6759)
Refactors the cloudevent_source test to rely on events instead of sleeping in order to progress and also improves
the cleanup between and after tests. We've noticed flakes with this test where the magic sleep timings don't
play well on certain machines and platforms.

This commit also adds several more helper test functions that:
* Allow you to wait for Kubernetes events after calling a trigger function
* Allow you to consistently validate a condition up to a duration
* Allow you to execute commands on pods with TTY turned off (this change is due to several weird test flakes
we've seen where the output of an exec command returned to the user is unexpected)

Signed-off-by: Max Cao <macao@redhat.com>
2025-06-20 12:31:51 +02:00
Rick Brouwer e25ab90538
Deprecate type setting CPU/MEM and tls setting IBMMQ (#6698)
* feat(ibmmq): Deprecate TLS setting

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>

* Update typed_config conform governance deprecations

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>

* Deprecate type setting

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>

* Remove webhook prommetrics deprecations

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update changelog

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update e2e tests

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-06-15 21:51:26 +02:00
Jorge Turrado Ferrero e91c91c237
clean up s390x resources (#6838)
2025-06-12 11:44:36 +02:00
Jorge Turrado Ferrero ae083cd42a
configure docker buildx context (#6836)
Signed-off-by: Jorge Turrado <jorge.turrado@mail.schwarz>
2025-06-11 11:23:38 +02:00
Jorge Turrado Ferrero a18596513f
fix some s390x issues (#6833)
* fix some s390x issues

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* fix some s390x issues

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* fix some s390x issues

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-06-11 09:04:02 +02:00
Jorge Turrado Ferrero c9b1fa193e
feat: Support s390x (#6623)
* prepare s390x

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* prepare s390x

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* prepare s390x

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* prepare s390x

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
Signed-off-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-06-10 23:25:12 +02:00
aliaqel-stripe 4ee73f064f
Reintroduce PR #5974: Remove klogr dependency and move to zap (#6578)
remove changelog changes

add verbose logging for azure test

add verbose logging for azure test

Signed-off-by: Ali Aqel <aliaqel@stripe.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-06-09 14:19:17 +02:00
Dao Thanh Tung 707b095ba3
Add option to configure tlsServerName (#6820)
* Add option to configure tlsServerName

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

* Add CHANGELOG.md

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>

---------

Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>
2025-06-04 23:27:51 +02:00
Jorge Turrado Ferrero 8a6ee85e68
fix: Events e2e tests pass again (#6818)
Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-06-02 20:27:15 +02:00
dependabot[bot] acf5d2c102
chore(deps): bump fossas/fossa-action from 1.6.0 to 1.7.0 (#6813)
Bumps [fossas/fossa-action](https://github.com/fossas/fossa-action) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/fossas/fossa-action/releases)
- [Commits](c0a7d013f8...3ebcea1862)

---
updated-dependencies:
- dependency-name: fossas/fossa-action
  dependency-version: 1.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-01 19:39:25 +02:00
dependabot[bot] 894a1e70da
chore(deps): bump github/codeql-action from 3.28.16 to 3.28.18 (#6814)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.16 to 3.28.18.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](28deaeda66...ff0a06e83c)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-01 19:39:04 +02:00
dependabot[bot] 6ff987be40
chore(deps): bump ossf/scorecard-action from 2.3.1 to 2.4.2 (#6815)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.1 to 2.4.2.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](0864cf1902...05b42c6244)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-version: 2.4.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-01 19:38:53 +02:00
dependabot[bot] f93f0a420f
chore(deps): bump actions/setup-go from 5.4.0 to 5.5.0 (#6816)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](0aaccfd150...d35c59abb0)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 5.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-01 19:38:37 +02:00
Grace Atwood 100f234c3b
Require the same version we replace in order not to break downstream dependencies (#6806)
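The reasoning behind this fix can be sketched as a go.mod fragment (module paths and versions below are hypothetical, not KEDA's actual dependencies): Go `replace` directives only take effect in the main module, so downstream consumers resolve whatever version the `require` directive pins. Keeping the two in sync avoids handing dependents a broken version:

```
// go.mod — illustrative fragment only; paths and versions are hypothetical
require github.com/example/somelib v1.5.0 // keep in sync with the replacement version below

// Ignored by downstream modules; they see only the require line above.
replace github.com/example/somelib => github.com/examplefork/somelib v1.5.0
```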
Signed-off-by: Grace Atwood <gatwood@slack-corp.com>
2025-06-01 19:37:41 +02:00
Zbynek Roubalik 15244baf46
Improve Events emitted from ScaledObject controller (#6803)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-29 22:06:21 +00:00
Jay Shane b513bf3319
fix missing test function (#6785)
Signed-off-by: Jay Shane <327411586@qq.com>
2025-05-27 22:37:27 +02:00
Tobias Germer e23ef462ab
Replace deprecated webhook.Validator with webhook.CustomValidator (#6700)
* Replace deprecated webhook.Validator with webhook.CustomValidator

Signed-off-by: Tobias Germer <tobias.germer@tui.com>

* Fix SetupWebhookWithManager

Signed-off-by: Tobias Germer <tobias.germer@tui.com>

---------

Signed-off-by: Tobias Germer <tobias.germer@tui.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-21 19:32:04 +02:00
Jan Wozniak ec678d9531
require race condition check for PRs to pass (#6778)
Signed-off-by: Jan Wozniak <wozniak.jan@gmail.com>
2025-05-16 18:02:44 +02:00
João Bastos 267a58a867
fix: temporal scaler was not setting up tls for apikey auth (#6781)
* fix: temporal scaler was not setting up tls for apikey auth

Signed-off-by: joaopaulosr95 <joaopaulosr95@gmail.com>

* Update CHANGELOG.md

Co-authored-by: Jan Wozniak <wozniak.jan@gmail.com>
Signed-off-by: João Bastos <joaopaulosr95@gmail.com>

---------

Signed-off-by: joaopaulosr95 <joaopaulosr95@gmail.com>
Signed-off-by: João Bastos <joaopaulosr95@gmail.com>
Signed-off-by: Jan Wozniak <wozniak.jan@gmail.com>
Co-authored-by: Jan Wozniak <wozniak.jan@gmail.com>
2025-05-16 10:06:00 +00:00
Jorge Turrado Ferrero 641ab7ea6e
Update changelog according to v2.17.1 (#6782)
Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-05-16 10:23:43 +02:00
Armin Aminian 033f46d0a0
Fix: Prefixes in envFrom within deployment specs are being ignored (#6757)
* fix: Prefixes in envFrom within deployment specs are being ignored

When envFrom includes a prefix setting, the scale resolver ignores this prefix and adds environment variable keys to the ScalerConfig without the prefix.

This fix modifies resolveEnv to correctly apply prefixes to keys when a prefix is specified.
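The prefix handling described above can be sketched in Go (a minimal illustration of the idea, not KEDA's actual resolveEnv implementation; names here are hypothetical):

```go
package main

import "fmt"

// resolveEnvFrom prepends the envFrom prefix to every key pulled from a
// referenced ConfigMap or Secret, mirroring how Kubernetes injects the
// variables into the container.
func resolveEnvFrom(data map[string]string, prefix string) map[string]string {
	resolved := make(map[string]string, len(data))
	for k, v := range data {
		// Without this, keys would be stored unprefixed and lookups
		// using the prefixed name (as seen by the workload) would fail.
		resolved[prefix+k] = v
	}
	return resolved
}

func main() {
	cm := map[string]string{"HOST": "db.local", "PORT": "5432"}
	env := resolveEnvFrom(cm, "PG_")
	fmt.Println(env["PG_HOST"]) // prints db.local
}
```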

Signed-off-by: araminian <rmin.aminian@gmail.com>

* Reorder changelog

Signed-off-by: araminian <rmin.aminian@gmail.com>

* Use existing test function

Signed-off-by: araminian <rmin.aminian@gmail.com>

* Adjust comment

Signed-off-by: araminian <rmin.aminian@gmail.com>

---------

Signed-off-by: araminian <rmin.aminian@gmail.com>
Signed-off-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-05-13 20:05:15 +00:00
Jorge Turrado Ferrero bf2b3f1f14
feat: grpc mTLS certificates are hot reloaded (#6756)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-13 15:04:37 +02:00
Jorge Turrado Ferrero a8c0126cc9
fix: ScalerCache gets the lock before operating the scalers (#6739) 2025-05-13 07:22:03 +00:00
Viet Nguyen Duc 505f97a86a
Selenium Grid: Update metric name generation to omit empty parts (#6772)
* Selenium Grid: Update metric name generation to omit empty parts

Signed-off-by: Viet Nguyen Duc <nguyenducviet4496@gmail.com>

* Update CHANGELOG with the PR

Signed-off-by: Viet Nguyen Duc <nguyenducviet4496@gmail.com>

---------

Signed-off-by: Viet Nguyen Duc <nguyenducviet4496@gmail.com>
2025-05-12 22:32:32 +02:00
Jan Wozniak b0d1c01581
enable race condition check for unit tests (#6760)
Signed-off-by: Jan Wozniak <wozniak.jan@gmail.com>
2025-05-12 14:10:11 +02:00
Rick Brouwer b66b9ed390
Fix race conditions in tests external scaler and predictkube scaler (#6764)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-05-12 14:07:49 +02:00
dependabot[bot] dcfae29050
chore(deps): bump actions/setup-python from 5.5.0 to 5.6.0 (#6742)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.5.0 to 5.6.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](8d9ed9ac5c...a26af69be9)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 5.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-04 21:11:45 +02:00
dependabot[bot] 788dec95e8
chore(deps): bump sigstore/cosign-installer from 3.7.0 to 3.8.2 (#6743)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.7.0 to 3.8.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](dc72c7d5c4...3454372f43)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.8.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-04 21:11:34 +02:00
dependabot[bot] 45cfe490c8
chore(deps): bump github/codeql-action from 3.28.13 to 3.28.16 (#6744)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.13 to 3.28.16.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](1b549b9259...28deaeda66)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-04 21:11:25 +02:00
dependabot[bot] 16501feb92
chore(deps): bump tj-actions/branch-names from 8.1.0 to 8.2.1 (#6745)
Bumps [tj-actions/branch-names](https://github.com/tj-actions/branch-names) from 8.1.0 to 8.2.1.
- [Release notes](https://github.com/tj-actions/branch-names/releases)
- [Changelog](https://github.com/tj-actions/branch-names/blob/main/HISTORY.md)
- [Commits](f44339b51f...dde14ac574)

---
updated-dependencies:
- dependency-name: tj-actions/branch-names
  dependency-version: 8.2.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-04 21:11:12 +02:00
Jorge Turrado Ferrero 9b954d4430
fix: Use pinned version for nginx image (#6737)
* fix: Use pinned version for nginx image

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* .

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* fix panic in gcp scaler

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-05-01 06:38:16 +02:00
Rick Brouwer 481cc807fe
fix: add default Operation in Azure Service Bus scaler (#6731)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
2025-04-30 07:45:53 +00:00
Jorge Turrado Ferrero b05b703e1a
chore: Move the changelog record to the right place (#6732)
Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-04-27 20:09:35 +02:00
Rick Brouwer 8e6a985a73
Support multiple auth methods simultaneously in Metrics API scaler (#6645)
Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
2025-04-26 17:58:07 +02:00
Rick Brouwer 96c1c70440
fix: Temporal scaler with API Key (#6707)
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
2025-04-26 13:37:47 +02:00
Dao Thanh Tung 9d7c1aaac8
Refactor Datadog scaler
Signed-off-by: dttung2905 <ttdao.2015@accountancy.smu.edu.sg>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-04-24 22:13:01 +01:00
Omer Aplatony b7f37a0fea
refactor gcp stackdriver scaler (#6462)
Signed-off-by: Omer Aplatony <omerap12@gmail.com>
Co-authored-by: Jan Wozniak <wozniak.jan@gmail.com>
Co-authored-by: Jorge Turrado Ferrero <Jorge_turrado@hotmail.es>
2025-04-24 18:21:10 +02:00
Rick Brouwer aa4eea4a48
fix: AWS SQS Queue queueURLFromEnv not working (#6713)
Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
2025-04-24 13:18:59 +02:00
rickbrouwer 476717e130
fix: Admission Webhook blocks ScaledObject without metricType with fallback (#6702)
* fix: Admission Webhook blocks ScaledObject without metricType with fallback

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>

* Add unit test

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Add e2e test

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>

* Add more unit tests for scaledobject_types

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update changelog

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Co-authored-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-04-15 15:51:31 +02:00
Zbynek Roubalik a368eee433
fix workflows (#6704)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-04-10 15:29:36 +02:00
rickbrouwer 720ae0e98f
Add time.Duration in TypedConfig (#6650)
* Add time Duration at TypedConfig

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Add time duration prometheus scaler

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Correct changelog

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

* Update after feedback

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>

---------

Signed-off-by: Rick Brouwer <rickbrouwer@gmail.com>
Signed-off-by: rickbrouwer <rickbrouwer@gmail.com>
2025-04-07 14:36:15 +02:00
349 changed files with 77546 additions and 69204 deletions


@@ -41,12 +41,16 @@ RUN apt-get update \
&& go install golang.org/x/lint/golint@latest \
&& go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest \
&& go install github.com/mgechev/revive@latest \
&& go install github.com/go-delve/delve/cmd/dlv@latest \
&& go install honnef.co/go/tools/cmd/staticcheck@latest \
&& go install golang.org/x/tools/gopls@latest \
&& go install golang.org/x/tools/gopls@v0.18.1 \
# Protocol Buffer Compiler
&& PROTOC_VERSION=21.9 \
&& if [ $(dpkg --print-architecture) = "amd64" ]; then PROTOC_ARCH="x86_64"; else PROTOC_ARCH="aarch_64" ; fi \
&& PROTOC_VERSION=29.3 \
&& case $(dpkg --print-architecture) in \
"amd64") PROTOC_ARCH="x86_64" ;; \
"arm64") PROTOC_ARCH="aarch_64" ;; \
"s390x") PROTOC_ARCH="s390_64" ;; \
*) echo "Unsupported architecture"; exit 1 ;; \
esac \
&& curl -LO "https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-$PROTOC_ARCH.zip" \
&& unzip "protoc-${PROTOC_VERSION}-linux-$PROTOC_ARCH.zip" -d $HOME/.local \
&& mv $HOME/.local/bin/protoc /usr/local/bin/protoc \


@@ -57,6 +57,8 @@ body:
label: KEDA Version
description: What version of KEDA are you running?
options:
- "2.17.2"
- "2.17.1"
- "2.17.0"
- "2.16.1"
- "2.16.0"


@@ -21,19 +21,19 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5
with:
go-version: "1.23"
- run: go version
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@f44339b51f74753b57583fbbd124e18a81170ab1 # v8.1.0
- uses: fossas/fossa-action@c0a7d013f84c8ee5e910593186598625513cc1e4 # v1.6.0
uses: tj-actions/branch-names@5250492686b253f06fa55861556d1027b067aeb5 # v9.0.2
- uses: fossas/fossa-action@3ebcea1862c6ffbd5cf1b4d0bd6b3fe7bd6f2cac # v1.7.0
name: Scanning with FOSSA
with:
api-key: ${{ env.fossa-key }}
branch: ${{ steps.branch-name.outputs.current_branch }}
- uses: fossas/fossa-action@c0a7d013f84c8ee5e910593186598625513cc1e4 # v1.6.0
- uses: fossas/fossa-action@3ebcea1862c6ffbd5cf1b4d0bd6b3fe7bd6f2cac # v1.7.0
name: Executing tests with FOSSA
with:
api-key: ${{ env.fossa-key }}


@@ -49,6 +49,13 @@ jobs:
- name: Test
run: make test
# https://github.com/sigstore/cosign-installer
- name: Install Cosign
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
- name: Check Cosign install!
run: cosign version
- name: Login to GitHub Container Registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
@@ -62,21 +69,26 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3.7.1
- name: Publish on GitHub Container Registry
- name: Publish main on GitHub Container Registry
run: make publish-multiarch
# https://github.com/sigstore/cosign-installer
- name: Install Cosign
uses: sigstore/cosign-installer@dc72c7d5c4d10cd6bcb8cf6e3fd625a9e5e537da # v3.7.0
- name: Check Cosign install!
run: cosign version
- name: Sign KEDA images published on GitHub Container Registry
# This step uses the identity token to provision an ephemeral certificate
# against the sigstore community Fulcio instance.
run: make sign-images
- name: Publish sha on GitHub Container Registry
run: make publish-multiarch
env:
VERSION: ${{ github.sha }}
- name: Sign KEDA images published on GitHub Container Registry
# This step uses the identity token to provision an ephemeral certificate
# against the sigstore community Fulcio instance.
run: make sign-images
env:
VERSION: ${{ github.sha }}
validate:
needs: build
uses: kedacore/keda/.github/workflows/template-main-e2e-test.yml@main
@@ -86,6 +98,10 @@ jobs:
needs: build
uses: kedacore/keda/.github/workflows/template-arm64-smoke-tests.yml@main
validate-s390x:
needs: build
uses: kedacore/keda/.github/workflows/template-s390x-smoke-tests.yml@main
validate-k8s-versions:
needs: build
uses: kedacore/keda/.github/workflows/template-versions-smoke-tests.yml@main


@@ -14,5 +14,8 @@ jobs:
validate-arm64:
uses: kedacore/keda/.github/workflows/template-arm64-smoke-tests.yml@main
validate-s390x:
uses: kedacore/keda/.github/workflows/template-s390x-smoke-tests.yml@main
validate-k8s-versions:
uses: kedacore/keda/.github/workflows/template-versions-smoke-tests.yml@main

.github/workflows/pr-bot-welcome.yml (new file, 28 lines)

@@ -0,0 +1,28 @@
name: PR Welcome Bot
on:
pull_request_target:
types: [opened]
branches:
- 'main'
permissions:
issues: write
pull-requests: write
jobs:
pr_bot:
name: PR Bot
runs-on: ubuntu-latest
steps:
- name: 'Comment on PR'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: ${{ github.event.number }},
body: 'Thank you for your contribution! 🙏\n\nPlease understand that we will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.\n\nWhile you are waiting, make sure to:\n\n- Add an entry in our changelog in alphabetical order and link related issue\n- Update the documentation, if needed\n- Add unit & e2e tests for your changes\n- GitHub checks are passing\n- Is the DCO check failing? Here is how you can [fix DCO issues](https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md#i-didnt-sign-my-commit-now-what)\n\nOnce the initial tests are successful, a KEDA member will ensure that the e2e tests are run. Once the e2e tests have been successfully completed, the PR may be merged at a later date. Please be patient.\n\nLearn more about our [contribution guide](https://github.com/kedacore/keda-docs/blob/main/CONTRIBUTING.md).'
});


@@ -8,6 +8,9 @@ on:
env:
SKIP_E2E_TAG: skip-e2e
E2E_CHECK_NAME: e2e tests
ARM_SMOKE_CHECK_NAME: ARM smoke tests
S390X_SMOKE_CHECK_NAME: S390x smoke tests
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -17,6 +20,7 @@ jobs:
e2e-checker:
name: label checker
runs-on: ubuntu-latest
if: github.event.label.name == 'skip-e2e'
steps:
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Enqueue e2e
@@ -37,3 +41,43 @@ jobs:
conclusion: success
output: |
{"summary": "skipped by maintainer"}
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Enqueue e2e
id: create-arm
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
status: queued
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Skip e2e
if: ${{ contains(github.event.pull_request.labels.*.name, env.SKIP_E2E_TAG )}}
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
check_id: ${{ steps.create-arm.outputs.check_id }}
conclusion: success
output: |
{"summary": "skipped by maintainer"}
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Enqueue e2e
id: create-s390x
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
status: queued
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Skip e2e
if: ${{ contains(github.event.pull_request.labels.*.name, env.SKIP_E2E_TAG )}}
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
check_id: ${{ steps.create-s390x.outputs.check_id }}
conclusion: success
output: |
{"summary": "skipped by maintainer"}


@@ -8,6 +8,8 @@ on:
env:
SKIP_E2E_TAG: skip-e2e
E2E_CHECK_NAME: e2e tests
ARM_SMOKE_CHECK_NAME: ARM smoke tests
S390X_SMOKE_CHECK_NAME: S390x smoke tests
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -37,3 +39,43 @@ jobs:
conclusion: success
output: |
{"summary": "skipped by maintainer"}
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Enqueue e2e
id: create-arm
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
status: queued
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Skip e2e
if: ${{ contains(github.event.pull_request.labels.*.name, env.SKIP_E2E_TAG )}}
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
check_id: ${{ steps.create-arm.outputs.check_id }}
conclusion: success
output: |
{"summary": "skipped by maintainer"}
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Enqueue e2e
id: create-s390x
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
status: queued
- uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
name: Skip e2e
if: ${{ contains(github.event.pull_request.labels.*.name, env.SKIP_E2E_TAG )}}
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ github.event.pull_request.head.sha }}
check_id: ${{ steps.create-s390x.outputs.check_id }}
conclusion: success
output: |
{"summary": "skipped by maintainer"}


@@ -5,6 +5,8 @@ on:
env:
E2E_CHECK_NAME: e2e tests
ARM_SMOKE_CHECK_NAME: ARM smoke tests
S390X_SMOKE_CHECK_NAME: S390x smoke tests
jobs:
triage:
@@ -67,7 +69,7 @@ jobs:
build-test-images:
needs: triage
runs-on: ubuntu-latest
runs-on: ARM64
name: Build images
container: ghcr.io/kedacore/keda-tools:1.23.8
if: needs.triage.outputs.run-e2e == 'true'
@@ -81,6 +83,24 @@ jobs:
status: in_progress
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Set status in-progress
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
status: in_progress
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Set status in-progress
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
status: in_progress
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Register workspace path
@@ -126,6 +146,26 @@ jobs:
conclusion: failure
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Set status failure
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.regex-validation.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
status: failure
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Set status failure
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.regex-validation.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
status: failure
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Exit on failure
if: steps.regex-validation.outcome != 'success'
run: exit 1
@@ -140,12 +180,16 @@ jobs:
# Server address of Docker registry. If not set then will default to Docker Hub
registry: ghcr.io
- name: Publish on GitHub Container Registry
run: make publish
env:
E2E_IMAGE_TAG: ${{ needs.triage.outputs.image_tag }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3.7.1
run-test:
- name: Publish on GitHub Container Registry
run: make publish-multiarch
env:
VERSION: ${{ needs.triage.outputs.image_tag }}
SUFFIX: "-test"
run-e2e-test:
needs: [triage, build-test-images]
runs-on: e2e
name: Execute e2e tests
@@ -192,7 +236,8 @@ jobs:
AZURE_RUN_WORKLOAD_IDENTITY_TESTS: true
GCP_RUN_IDENTITY_TESTS: true
ENABLE_OPENTELEMETRY: true
E2E_IMAGE_TAG: ${{ needs.triage.outputs.image_tag }}
VERSION: ${{ needs.triage.outputs.image_tag }}
SUFFIX: "-test"
TEST_CLUSTER_NAME: keda-e2e-cluster-pr
COMMENT_BODY: ${{ github.event.comment.body }}
run: |
@@ -260,3 +305,195 @@ jobs:
name: e2e-test-logs
path: "${{ github.workspace }}/**/*.log"
if-no-files-found: ignore
run-arm-smoke-test:
needs: [triage, build-test-images]
runs-on: ARM64
name: Execute ARM smoke tests
if: needs.triage.outputs.run-e2e == 'true'
steps:
- name: Set status in-progress
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
status: in_progress
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5
with:
go-version: "1.23"
- name: Install prerequisites
run: |
sudo apt update
sudo apt install curl make ca-certificates gcc libc-dev wget -y
env:
DEBIAN_FRONTEND: noninteractive
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 1
- name: Create k8s ${{ inputs.kubernetesVersion }} Kind Cluster
uses: helm/kind-action@0025e74a8c7512023d06dc019c617aa3cf561fde # v1.10.0
with:
node_image: kindest/node:v1.32.0@sha256:c48c62eac5da28cdadcf560d1d8616cfa6783b58f0d94cf63ad1bf49600cb027
cluster_name: ${{ runner.name }}
- name: Run smoke test
continue-on-error: true
run: make smoke-test
id: test
env:
VERSION: ${{ needs.triage.outputs.image_tag }}
SUFFIX: "-test"
- name: React to comment with success
uses: dkershner6/reaction-action@97ede302a1b145b3739dec3ca84a489a34ef48b5 # v2
if: steps.test.outcome == 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
commentId: ${{ github.event.comment.id }}
reaction: "+1"
- name: Set status success
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.test.outcome == 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
conclusion: success
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: React to comment with failure
uses: dkershner6/reaction-action@97ede302a1b145b3739dec3ca84a489a34ef48b5 # v2
if: steps.test.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
commentId: ${{ github.event.comment.id }}
reaction: "-1"
- name: Set status failure
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.test.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.ARM_SMOKE_CHECK_NAME }}
conclusion: failure
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Upload test logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: arm-smoke-test-logs
path: "${{ github.workspace }}/**/*.log"
if-no-files-found: ignore
run-s390x-smoke-test:
needs: [triage, build-test-images]
runs-on: s390x
name: Execute s390x smoke tests
if: needs.triage.outputs.run-e2e == 'true'
steps:
- name: Set status in-progress
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
status: in_progress
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Setup Go
uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5
with:
go-version: "1.23"
- name: Install prerequisites
run: |
apt update
apt install curl make ca-certificates gcc libc-dev wget -y
env:
DEBIAN_FRONTEND: noninteractive
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 1
- name: Install Kubernetes
run: |
go install sigs.k8s.io/kind@v0.29.0
kind create cluster --name ${{ runner.name }} --image vishnubijukumar/kindest-node-s390x:v0.29.0
kubectl -n kube-system set image daemonset/kindnet kindnet-cni=docker.io/vishnubijukumar/kindnetd:v20250512-s390x
kubectl -n local-path-storage set image deployment/local-path-provisioner local-path-provisioner=docker.io/vishnubijukumar/local-path-provisioner:v1
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/s390x/kubectl"
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
kubectl version
- name: Run smoke test
continue-on-error: true
run: make smoke-test
id: test
env:
VERSION: ${{ needs.triage.outputs.image_tag }}
SUFFIX: "-test"
- name: React to comment with success
uses: dkershner6/reaction-action@97ede302a1b145b3739dec3ca84a489a34ef48b5 # v2
if: steps.test.outcome == 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
commentId: ${{ github.event.comment.id }}
reaction: "+1"
- name: Set status success
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.test.outcome == 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
conclusion: success
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: React to comment with failure
uses: dkershner6/reaction-action@97ede302a1b145b3739dec3ca84a489a34ef48b5 # v2
if: steps.test.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
commentId: ${{ github.event.comment.id }}
reaction: "-1"
- name: Set status failure
uses: LouisBrunner/checks-action@6b626ffbad7cc56fd58627f774b9067e6118af23 # v2
if: steps.test.outcome != 'success'
with:
token: ${{ secrets.GITHUB_TOKEN }}
sha: ${{ needs.triage.outputs.commit_sha }}
name: ${{ env.S390X_SMOKE_CHECK_NAME }}
conclusion: failure
details_url: https://github.com/${{github.repository}}/actions/runs/${{github.run_id}}
- name: Remove Kubernetes
run: |
kind delete cluster --name ${{ runner.name }}
if: ${{ always() }}
env:
DEBIAN_FRONTEND: noninteractive
- name: Upload test logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: s390x-smoke-test-logs
path: "${{ github.workspace }}/**/*.log"
if-no-files-found: ignore


@@ -18,6 +18,8 @@ jobs:
name: arm64
- runner: ubuntu-latest
name: amd64
- runner: s390x
name: s390x
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -68,10 +70,15 @@ jobs:
- name: Test
run: make test
- name: Test with race detector enabled
run: make test-race
- name: Create test Summary
uses: test-summary/action@31493c76ec9e7aa675f1585d3ed6f1da69269a86 # v2.4
with:
paths: "report.xml"
paths: |
report.xml
report-race.xml
if: always()
validate-dockerfiles:
@@ -88,6 +95,8 @@ jobs:
name: arm64
- runner: ubuntu-latest
name: amd64
- runner: s390x
name: s390x
steps:
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -122,6 +131,8 @@ jobs:
name: arm64
- runner: ubuntu-latest
name: amd64
- runner: s390x
name: s390x
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -144,10 +155,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5
with:
go-version: "1.23"
- uses: pre-commit/action@2c7b3805fd2a0fd8c1884dcaebf91fc102a13ecd # v3.0.1


@@ -1,65 +0,0 @@
name: PR Welcome Bot
on:
pull_request_target:
types: [opened, ready_for_review]
branches:
- "main"
pull_request_review:
types: [submitted, edited]
permissions:
issues: write
pull-requests: write
jobs:
pr_bot:
name: PR Bot
runs-on: ubuntu-latest
steps:
- name: "Add welcome comment on PR #${{ github.event.number }} (draft)"
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
if: github.event_name == 'pull_request_target' && github.event.pull_request.action == 'opened' && github.event.pull_request.draft
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: ${{ github.event.number }},
body: 'Thank you for your contribution! 🙏 Let us know when you are ready for a review by publishing the PR.'
});
- name: "Add welcome comment on PR #${{ github.event.number }}"
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
if: github.event_name == 'pull_request_target' && (github.event.pull_request.action == 'opened' || github.event.pull_request.action == 'ready_for_review')
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: ${{ github.event.number }},
body: 'Thank you for your contribution! 🙏 We will review your PR as soon as possible.\n\n\n While you are waiting, make sure to:\n\n\n- Add an entry in [our changelog](https://github.com/kedacore/keda/blob/main/CHANGELOG.md) in alphabetical order and link related issue\n- Update the [documentation](https://github.com/kedacore/keda-docs), if needed\n- Add unit & [e2e](https://github.com/kedacore/keda/blob/main/tests/README.md) tests for your changes\n- GitHub checks are passing\n- Is the DCO check failing? Here is [how you can fix DCO issues](https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md#i-didnt-sign-my-commit-now-what)\n\n\nLearn more about:\n- Our [contribution guide](https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md)'
});
- name: "Apply review required label"
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
if: github.event_name == 'pull_request_target' && (github.event.pull_request.action == 'opened'|| github.event.pull_request.action == 'ready_for_review')
with:
script: |
github.rest.issues.addLabels({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
labels: ["requires-pr-review"]
})
- name: "Remove review required label"
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
if: github.event_name == 'pull_request_review' && (github.event.review.state == 'submitted' || github.event.review.state == 'edited')
with:
script: |
github.rest.issues.removeLabel({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
name: "requires-pr-review"
})


@ -76,7 +76,7 @@ jobs:
# https://github.com/sigstore/cosign-installer
- name: Install Cosign
uses: sigstore/cosign-installer@dc72c7d5c4d10cd6bcb8cf6e3fd625a9e5e537da # v3.7.0
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
- name: Check Cosign install!
run: cosign version


@ -33,7 +33,7 @@ jobs:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@0864cf19026789058feabb7e87baa5f140aac736 # v2.3.1
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
@ -64,6 +64,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/upload-sarif@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
sarif_file: results.sarif


@ -26,16 +26,16 @@ jobs:
run: git config --global --add safe.directory "$GITHUB_WORKSPACE"
- name: Initialize CodeQL
uses: github/codeql-action/init@@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
languages: go
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
queries: +security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/autobuild@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
category: "/language:go"


@ -39,7 +39,7 @@ jobs:
SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
- name: Upload SARIF file for GitHub Advanced Security Dashboard
uses: github/codeql-action/upload-sarif@@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/upload-sarif@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
with:
sarif_file: semgrep.sarif
if: ${{ github.event.number == '' && !cancelled() }}


@ -10,5 +10,5 @@ jobs:
uses: kedacore/keda/.github/workflows/template-smoke-tests.yml@main
with:
runs-on: ARM64
kubernetesVersion: v1.32
kindImage: kindest/node:v1.32.0@sha256:c48c62eac5da28cdadcf560d1d8616cfa6783b58f0d94cf63ad1bf49600cb027
kubernetesVersion: v1.33
kindImage: kindest/node:v1.33.1@sha256:050072256b9a903bd914c0b2866828150cb229cea0efe5892e2b644d5dd3b34f


@ -0,0 +1,57 @@
name: Reusable workflow to run smoke tests on s390x
on:
workflow_call:
jobs:
smoke-tests-s390x:
name: Validate k8s
runs-on: s390x
steps:
- name: Setup Go
uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5
with:
go-version: "1.23"
- name: Install prerequisites
run: |
apt update
apt install curl make ca-certificates gcc libc-dev wget -y
env:
DEBIAN_FRONTEND: noninteractive
- name: Check out code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 1
- name: Install Kubernetes
run: |
go install sigs.k8s.io/kind@v0.29.0
kind create cluster --name ${{ runner.name }} --image vishnubijukumar/kindest-node-s390x:v0.29.0
kubectl -n kube-system set image daemonset/kindnet kindnet-cni=docker.io/vishnubijukumar/kindnetd:v20250512-s390x
kubectl -n local-path-storage set image deployment/local-path-provisioner local-path-provisioner=docker.io/vishnubijukumar/local-path-provisioner:v1
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/s390x/kubectl"
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
kubectl version
- name: Run smoke test
run: make smoke-test
- name: Upload test logs
uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4
if: ${{ always() }}
with:
name: smoke-test-logs ${{ inputs.runs-on }}-${{ inputs.kubernetesVersion }}
path: "${{ github.workspace }}/**/*.log"
if-no-files-found: ignore
- name: Remove Kubernetes
run: |
kind delete cluster --name ${{ runner.name }}
if: ${{ always() }}
env:
DEBIAN_FRONTEND: noninteractive


@ -19,7 +19,7 @@ jobs:
runs-on: ${{ inputs.runs-on }}
steps:
- name: Setup Go
uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5
with:
go-version: "1.23"


@ -39,7 +39,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Run Trivy
uses: aquasecurity/trivy-action@6c175e9c4083a92bbca2f9724c8a5e33bc2d97a5 # v0.30.0
uses: aquasecurity/trivy-action@dc5a429b52fcf669ce959baa2c2dd26090d2a6c4 # v0.32.0
env:
TRIVY_DB_REPOSITORY: ghcr.io/kedacore/trivy-db
with:
@ -53,7 +53,7 @@ jobs:
trivy-config: trivy.yml
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@@1b549b9259bda1cb5ddde3b41741a82a2d15a841 # v3.28.13
uses: github/codeql-action/upload-sarif@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
if: ${{ inputs.publish }}
with:
sarif_file: ${{ inputs.output }}


@ -9,14 +9,14 @@ jobs:
strategy:
fail-fast: false
matrix:
kubernetesVersion: [v1.32, v1.31, v1.30]
kubernetesVersion: [v1.33, v1.32, v1.31]
include:
- kubernetesVersion: v1.33
kindImage: kindest/node:v1.33.1@sha256:050072256b9a903bd914c0b2866828150cb229cea0efe5892e2b644d5dd3b34f
- kubernetesVersion: v1.32
kindImage: kindest/node:v1.32.0@sha256:c48c62eac5da28cdadcf560d1d8616cfa6783b58f0d94cf63ad1bf49600cb027
kindImage: kindest/node:v1.32.5@sha256:e3b2327e3a5ab8c76f5ece68936e4cafaa82edf58486b769727ab0b3b97a5b0d
- kubernetesVersion: v1.31
kindImage: kindest/node:v1.31.4@sha256:2cb39f7295fe7eafee0842b1052a599a4fb0f8bcf3f83d96c7f4864c357c6c30
- kubernetesVersion: v1.30
kindImage: kindest/node:v1.30.8@sha256:17cd608b3971338d9180b00776cb766c50d0a0b6b904ab4ff52fd3fc5c6369bf
kindImage: kindest/node:v1.31.9@sha256:b94a3a6c06198d17f59cca8c6f486236fa05e2fb359cbd75dabbfc348a10b211
uses: kedacore/keda/.github/workflows/template-smoke-tests.yml@main
with:
runs-on: ubuntu-latest


@ -135,7 +135,7 @@ All components inspect the folder `/certs` for any certificates inside it. Argum
```bash
mkdir -p /certs
openssl req -newkey rsa:2048 -subj '/CN=localhost' -nodes -keyout /certs/tls.key -x509 -days 3650 -out /certs/tls.crt
openssl req -newkey rsa:2048 -subj '/CN=localhost' -addext "subjectAltName = DNS:localhost" -nodes -keyout /certs/tls.key -x509 -days 3650 -out /certs/tls.crt
cp /certs/tls.crt /certs/ca.crt
```
@ -287,7 +287,7 @@ Follow these instructions if you want to debug the KEDA webhook using VS Code.
clientConfig:
url: "https://${YOUR_URL}/validate-keda-sh-v1alpha1-scaledobject"
```
**Note:** You need to define also the key `caBundle` with the CA bundle encoded in base64. This `caBundle` is the pem file from the CA used to sign the certificate. Remember to disable the `caBundle` inyection to avoid unintended rewrites of your `caBundle` (by KEDA operator or by any other 3rd party)
**Note:** You need to define also the key `caBundle` with the CA bundle encoded in base64. This `caBundle` is the pem file from the CA used to sign the certificate. Remember to disable the `caBundle` injection to avoid unintended rewrites of your `caBundle` (by KEDA operator or by any other 3rd party)
4. Deploy CRDs and KEDA into `keda` namespace
@ -340,11 +340,11 @@ rfc3339nano - 2022-05-24T12:10:19.411Z
### Metrics Server logging
Find `--v=0` argument in Operator Deployment section in `config/metrics-server/deployment.yaml` file, modify its value and redeploy.
The Metrics Server logging can be configured in a similar way to the KEDA Operator and Admission Webhooks. The configuration is done in the `config/metrics-server/deployment.yaml` file.
Allowed values are `"0"` for info, `"4"` for debug, or an integer value greater than `0`, specified as string
To change the logging format, find the `--zap-encoder=` argument and modify its value. The allowed values are `json` and `console`. The default value is `console`.
Default value: `"0"`
To change the logging time encoding, find the `--zap-time-encoding=` argument and modify its value. The allowed values are `epoch`, `millis`, `nano`, `iso8601`, `rfc3339`, or `rfc3339nano`. The default value is `rfc3339`.
### CPU/Memory Profiling


@ -16,7 +16,9 @@ To learn more about active deprecations, we recommend checking [GitHub Discussio
## History
- [Unreleased](#unreleased)
- [v2.17.0](#v2161)
- [v2.17.2](#v2172)
- [v2.17.1](#v2171)
- [v2.17.0](#v2170)
- [v2.16.1](#v2161)
- [v2.16.0](#v2160)
- [v2.15.1](#v2151)
@ -58,9 +60,13 @@ To learn more about active deprecations, we recommend checking [GitHub Discussio
## Unreleased
### New
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **General**: Add error and event for mismatching input property ([#6721](https://github.com/kedacore/keda/issues/6721))
- **General**: Enable support on s390x for KEDA ([#6543](https://github.com/kedacore/keda/issues/6543))
- **General**: Introduce new Solace Direct Messaging scaler ([#6545](https://github.com/kedacore/keda/issues/6545))
- **General**: Introduce new Sumo Logic Scaler ([#6734](https://github.com/kedacore/keda/issues/6734))
#### Experimental
@ -68,11 +74,20 @@ To learn more about active deprecations, we recommend checking [GitHub Discussio
### Improvements
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **General**: Allow excluding labels from being propagated from ScaledObject and ScaledJob to generated HPA and Job objects ([#6849](https://github.com/kedacore/keda/issues/6849))
- **General**: Improve Events emitted from ScaledObject controller ([#6802](https://github.com/kedacore/keda/issues/6802))
- **Datadog Scaler**: Fix bug with datadogNamespace config ([#6828](https://github.com/kedacore/keda/pull/6828))
- **Metrics API**: Support multiple auth methods simultaneously in Metrics API scaler ([#6642](https://github.com/kedacore/keda/issues/6642))
- **Temporal Scaler**: Support custom tlsServerName ([#6820](https://github.com/kedacore/keda/pull/6820))
### Fixes
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **General**: Add missing omitempty json tags in the AuthPodIdentity struct ([#6779](https://github.com/kedacore/keda/issues/6779))
- **General**: Fix prefixes on envFrom elements in a deployment spec aren't being interpreted and Environment variables are not prefixed with the prefix ([#6728](https://github.com/kedacore/keda/issues/6728))
- **General**: Remove klogr dependency and replace with zap ([#5732](https://github.com/kedacore/keda/issues/5732))
- **General**: Sets hpaName in Status when ScaledObject adopts/finds an existing HPA ([#6336](https://github.com/kedacore/keda/issues/6336))
- **Cron Scaler**: Fix cron scaler to return zero metric value by default([#6886](https://github.com/kedacore/keda/issues/6886))
- **RabbitMQ Scaler**: Fix incorrect URL encoding in RabbitMQ vhosts containing %2f ([#6963](https://github.com/kedacore/keda/issues/6963))
### Deprecations
@ -80,15 +95,45 @@ You can find all deprecations in [this overview](https://github.com/kedacore/ked
New deprecation(s):
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **GCP Pub/Sub Scaler**: The 'subscriptionSize' setting is DEPRECATED and will be removed in v2.20 - Use 'mode' and 'value' instead" ([#6866](https://github.com/kedacore/keda/pull/6866))
- **Huawei Cloudeye Scaler**: The 'minMetricValue' setting is DEPRECATED and will be removed in v2.20 - Use 'activationTargetMetricValue' instead" ([#6978](https://github.com/kedacore/keda/pull/6978))
### Breaking Changes
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **General**: Remove Prometheus webhook prommetrics deprecations ([#6698](https://github.com/kedacore/keda/pull/6698))
- **CPU Memory scaler**: The 'type' setting is deprecated and removed, use 'metricType' instead ([#6698](https://github.com/kedacore/keda/pull/6698))
- **IBM MQ scaler**: The 'tls' setting is deprecated and removed, use 'unsafeSsl' instead ([#6698](https://github.com/kedacore/keda/pull/6698))
### Other
- TODO ([#XXX](https://github.com/kedacore/keda/issues/XXX))
- **General**: Fix several typos ([#6909](https://github.com/kedacore/keda/pull/6909))
- **General**: Replace deprecated webhook.Validator with webhook.CustomValidator ([#6660](https://github.com/kedacore/keda/issues/6660))
- **MSSQL Scaler**: Refactor MS SQL e2e test ([#3401](https://github.com/kedacore/keda/issues/3401))
## v2.17.2
### Improvements
- **General**: Internal gRPC connection's certificates are hot reloaded ([#6756](https://github.com/kedacore/keda/pull/6756))
### Fixes
- **Temporal Scaler**: Fix Temporal Scaler TLS version ([#6707](https://github.com/kedacore/keda/pull/6707))
## v2.17.1
### Improvements
- **Selenium Grid**: Update metric name generated without part of empty ([#6772](https://github.com/kedacore/keda/pull/6772))
### Fixes
- **General**: Admission Webhook blocks ScaledObject without metricType with fallback ([#6696](https://github.com/kedacore/keda/issues/6696))
- **General**: ScalerCache gets the lock before operate the scalers to prevent panics ([#6739](https://github.com/kedacore/keda/pull/6739))
- **AWS SQS Queue Scaler**: Fix AWS SQS Queue queueURLFromEnv not working ([#6712](https://github.com/kedacore/keda/issues/6712))
- **Azure Service Bus scaler**: Fix Azure Service Bus scaler add default Operation ([#6730](https://github.com/kedacore/keda/issues/6730))
- **Temporal Scaler**: Fix Temporal Scaler does not work properly with API Key authentication against Temporal Cloud as TLS is not enabled on the client ([#6703](https://github.com/kedacore/keda/issues/6703))
## v2.17.0
@ -130,7 +175,7 @@ New deprecation(s):
- **General**: Make sure the exposed metrics (from KEDA operator) are updated when there is a change to triggers ([#6618](https://github.com/kedacore/keda/pull/6618))
- **General**: Paused ScaledObject count is reported correctly after operator restart ([#6321](https://github.com/kedacore/keda/issues/6321))
- **General**: Reiterate fix (after [#6407](https://github.com/kedacore/keda/pull/6407)) for fallback validation in admission webhook. ([#6538](https://github.com/kedacore/keda/pull/6538))
- **General**: ScaledJobs ready status set to true when recoverred problem ([#6329](https://github.com/kedacore/keda/pull/6329))
- **General**: ScaledJobs ready status set to true when recovered problem ([#6329](https://github.com/kedacore/keda/pull/6329))
- **AWS Scalers**: Add AWS region to the AWS Config Cache key ([#6128](https://github.com/kedacore/keda/issues/6128))
- **External Scaler**: Support server TLS without custom CA ([#6606](https://github.com/kedacore/keda/pull/6606))
- **GCP Storage**: GCP Storage scaler ignores folders ([#6531](https://github.com/kedacore/keda/issues/6531))
@ -156,6 +201,7 @@ New deprecation(s):
### Other
- **General**: Add debug logs tracking validation of ScaledObjects on webhook ([#6498](https://github.com/kedacore/keda/pull/6498))
- **General**: Add time.Duration in TypedConfig ([#6650](https://github.com/kedacore/keda/pull/6650))
- **General**: New eventreason KEDAScalersInfo to display important information ([#6328](https://github.com/kedacore/keda/issues/6328))
- **Apache Kafka Scaler**: Remove unused awsEndpoint in Apache Kafka scaler ([#6627](https://github.com/kedacore/keda/pull/6627))
- **External Scalers**: Allow `float64` values in externalmetrics' `MetricValue` & `TargetSize`. The old fields are still there because of backward compatibility. ([#5159](https://github.com/kedacore/keda/issues/5159))
@ -166,7 +212,7 @@ New deprecation(s):
- **General**: Centralize and improve automaxprocs configuration with proper structured logging ([#5970](https://github.com/kedacore/keda/issues/5970))
- **General**: Paused ScaledObject count is reported correctly after operator restart ([#6321](https://github.com/kedacore/keda/issues/6321))
- **General**: ScaledJobs ready status set to true when recoverred problem ([#6329](https://github.com/kedacore/keda/pull/6329))
- **General**: ScaledJobs ready status set to true when recovered problem ([#6329](https://github.com/kedacore/keda/pull/6329))
- **Selenium Grid Scaler**: Exposes sum of pending and ongoing sessions to KDEA ([#6368](https://github.com/kedacore/keda/pull/6368))
### Other
@ -187,7 +233,7 @@ New deprecation(s):
- **CloudEventSource**: Provide ClusterCloudEventSource around the management of ScaledJobs resources ([#3523](https://github.com/kedacore/keda/issues/3523))
- **CloudEventSource**: Provide ClusterCloudEventSource around the management of TriggerAuthentication/ClusterTriggerAuthentication resources ([#3524](https://github.com/kedacore/keda/issues/3524))
- **Github Action**: Fix panic when env for runnerScopeFromEnv or ownerFromEnv is empty ([#6156](https://github.com/kedacore/keda/issues/6156))
- **RabbitMQ Scaler**: provide separate paremeters for user and password ([#2513](https://github.com/kedacore/keda/issues/2513))
- **RabbitMQ Scaler**: provide separate parameters for user and password ([#2513](https://github.com/kedacore/keda/issues/2513))
### Improvements
@ -201,7 +247,7 @@ New deprecation(s):
- **GitHub Scaler**: Fixed pagination, fetching repository list ([#5738](https://github.com/kedacore/keda/issues/5738))
- **Grafana dashboard**: Fix dashboard to handle wildcard scaledObject variables ([#6214](https://github.com/kedacore/keda/issues/6214))
- **IBMMQ Scaler**: Support multiple queues at the IBMMQ scaler ([#6181](https://github.com/kedacore/keda/issues/6181))
- **Kafka**: Allow disabling FAST negotation when using Kerberos ([#6188](https://github.com/kedacore/keda/issues/6188))
- **Kafka**: Allow disabling FAST negotiation when using Kerberos ([#6188](https://github.com/kedacore/keda/issues/6188))
- **Kafka**: Fix logic to scale to zero on invalid offset even with earliest offsetResetPolicy ([#5689](https://github.com/kedacore/keda/issues/5689))
- **RabbitMQ Scaler**: Add connection name for AMQP ([#5958](https://github.com/kedacore/keda/issues/5958))
- **Selenium Grid Scaler**: Add optional auth parameters `username`, `password`, `authType`, `accessToken` to configure a secure GraphQL endpoint ([#6144](https://github.com/kedacore/keda/issues/6144))
@ -577,7 +623,7 @@ None.
- **General**: Paused ScaledObject continues working after removing the annotation ([#4733](https://github.com/kedacore/keda/issues/4733))
- **General**: Skip resolving secrets if namespace is restricted ([#4519](https://github.com/kedacore/keda/issues/4519))
- **Prometheus**: Authenticated connections to Prometheus work in non-PodIdenty case ([#4695](https://github.com/kedacore/keda/issues/4695))
- **Prometheus**: Authenticated connections to Prometheus work in non-PodIdentity case ([#4695](https://github.com/kedacore/keda/issues/4695))
### Deprecations
@ -633,7 +679,7 @@ None.
- **General**: Allow to remove the finalizer even if the ScaledObject isn't valid ([#4396](https://github.com/kedacore/keda/issues/4396))
- **General**: Check ScaledObjects with multiple triggers with non unique name in the Admission Webhook ([#4664](https://github.com/kedacore/keda/issues/4664))
- **General**: Grafana Dashboard: Fix HPA metrics panel by replacing $namepsace to $exported_namespace due to label conflict ([#4539](https://github.com/kedacore/keda/pull/4539))
- **General**: Grafana Dashboard: Fix HPA metrics panel by replacing $namespace to $exported_namespace due to label conflict ([#4539](https://github.com/kedacore/keda/pull/4539))
- **General**: Grafana Dashboard: Fix HPA metrics panel to use range instead of instant ([#4513](https://github.com/kedacore/keda/pull/4513))
- **General**: ScaledJob: Check if MaxReplicaCount is nil before access to it ([#4568](https://github.com/kedacore/keda/issues/4568))
- **AWS SQS Scaler**: Respect `scaleOnInFlight` value ([#4276](https://github.com/kedacore/keda/issues/4276))
@ -720,7 +766,7 @@ Here is an overview of all new **experimental** features:
- **Azure Queue Scaler**: Fix azure queue length ([#4002](https://github.com/kedacore/keda/issues/4002))
- **Azure Service Bus Scaler**: Improve way clients are created to reduce amount of ARM requests ([#4262](https://github.com/kedacore/keda/issues/4262))
- **Azure Service Bus Scaler**: Use correct auth flows with pod identity ([#4026](https://github.com/kedacore/keda/issues/4026)|[#4123](https://github.com/kedacore/keda/issues/4123))
- **Cassandra Scaler**: Checking whether the port information is entered in the ClusterIPAddres is done correctly. ([#4110](https://github.com/kedacore/keda/issues/4110))
- **Cassandra Scaler**: Checking whether the port information is entered in the ClusterIPAddress is done correctly. ([#4110](https://github.com/kedacore/keda/issues/4110))
- **CPU Memory Scaler**: Store forgotten logger ([#4022](https://github.com/kedacore/keda/issues/4022))
- **Datadog Scaler**: Return correct error when getting a 429 error ([#4187](https://github.com/kedacore/keda/issues/4187))
- **Kafka Scaler**: Return error if the processing of the partition lag fails ([#4098](https://github.com/kedacore/keda/issues/4098))
@ -788,7 +834,7 @@ Here is an overview of all **stable** additions:
- **General**: Introduce new CouchDB Scaler ([#3746](https://github.com/kedacore/keda/issues/3746))
- **General**: Introduce new Etcd Scaler ([#3880](https://github.com/kedacore/keda/issues/3880))
- **General**: Introduce new Loki Scaler ([#3699](https://github.com/kedacore/keda/issues/3699))
- **General**: Introduce rate-limitting parameters to KEDA manager to allow override of client defaults ([#3730](https://github.com/kedacore/keda/issues/3730))
- **General**: Introduce rate-limiting parameters to KEDA manager to allow override of client defaults ([#3730](https://github.com/kedacore/keda/issues/3730))
- **General**: Introduction deprecation & breaking change policy ([#68](https://github.com/kedacore/governance/issues/68))
- **General**: Produce reproducible builds ([#3509](https://github.com/kedacore/keda/issues/3509))
- **General**: Provide off-the-shelf Grafana dashboard for application autoscaling ([#3911](https://github.com/kedacore/keda/issues/3911))
@ -934,7 +980,7 @@ None.
### Fixes
- **General**: Provide patch for CVE-2022-27191 vulnerability ([#3378](https://github.com/kedacore/keda/issues/3378))
- **General**: Refactor adapter startup to ensure proper log initilization. ([#2316](https://github.com/kedacore/keda/issues/2316))
- **General**: Refactor adapter startup to ensure proper log initialization. ([#2316](https://github.com/kedacore/keda/issues/2316))
- **General**: Scaleobject ready condition 'False/Unknown' to 'True' requeue ([#3096](https://github.com/kedacore/keda/issues/3096))
- **General**: Use `go install` in the Makefile for downloading dependencies ([#2916](https://github.com/kedacore/keda/issues/2916))
- **General**: Use metricName from GetMetricsSpec in ScaledJobs instead of `queueLength` ([#3032](https://github.com/kedacore/keda/issues/3032))


@ -95,7 +95,7 @@ Every change should be added to our changelog under `Unreleased` which is locate
Here are some guidelines to follow:
- Always use `General: ` or `<Scaler Name>: ` as a prefix and sort them alphabetically
- General changes, however, should always be at the top
- Entries should always follow the `<Scaler Name / General>: <Description> (#<ID>)` where `<ID>` is preferrably the ID of an issue, otherwise a PR is OK.
- Entries should always follow the `<Scaler Name / General>: <Description> (#<ID>)` where `<ID>` is preferably the ID of an issue, otherwise a PR is OK.
- New scalers should use `General:` and use this template: `**General:** Introduce new XXXXXX Scaler ([#ISSUE](https://github.com/kedacore/keda/issues/ISSUE))`
## Including Documentation Changes


@ -63,7 +63,7 @@ KEDA works in conjunction with Kubernetes Horizontal Pod Autoscaler (HPA). When
The return type of this function is `MetricSpec`, but in KEDA's case we will mostly write External metrics. So the property that should be filled is `ExternalMetricSource`, where the:
- `MetricName`: the name of our metric we are returning in this scaler. The name should be unique, to allow setting multiple (even the same type) Triggers in one ScaledObject, but each function call should return the same name.
- `TargetValue`: is the value of the metric we want to reach at all times at all costs. As long as the current metric doesn't match TargetValue, HPA will increase the number of the pods until it reaches the maximum number of pods allowed to scale to.
- `TargetAverageValue`: the value of the metric for which we require one pod to handle. e.g. if we have a scaler based on the length of a message queue, and we specificy 10 for `TargetAverageValue`, we are saying that each pod will handle 10 messages. So if the length of the queue becomes 30, we expect that we have 3 pods in our cluster. (`TargetAverageValue` and `TargetValue` are mutually exclusive).
- `TargetAverageValue`: the value of the metric for which we require one pod to handle. e.g. if we have a scaler based on the length of a message queue, and we specify 10 for `TargetAverageValue`, we are saying that each pod will handle 10 messages. So if the length of the queue becomes 30, we expect that we have 3 pods in our cluster. (`TargetAverageValue` and `TargetValue` are mutually exclusive).
All scalers receive a parameter named `triggerIndex` as part of `ScalerConfig`. This value is the index of the current scaler in a ScaledObject. All metric names have to start with `sX-` (where `X` is `triggerIndex`). This convention makes the metric name unique in the ScaledObject and brings the option to have more than 1 "similar metric name" defined in a ScaledObject.
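The `sX-` prefix convention and the `TargetAverageValue` arithmetic above can be sketched as follows (simplified stand-in types for illustration, not KEDA's actual `MetricSpec` structs):

```go
package main

import "fmt"

// externalMetric is a hypothetical stand-in for the external metric portion
// of a MetricSpec returned by GetMetricSpecForScaling.
type externalMetric struct {
	MetricName         string
	TargetAverageValue int64
}

// metricName applies the "sX-" convention: X is the trigger's index in the
// ScaledObject, so two triggers of the same type never produce colliding names.
func metricName(triggerIndex int, base string) string {
	return fmt.Sprintf("s%d-%s", triggerIndex, base)
}

// desiredReplicas shows the TargetAverageValue math from the text: each pod
// handles TargetAverageValue units, so a queue length of 30 with a target of
// 10 per pod yields 3 pods (rounding up for any remainder).
func desiredReplicas(current int64, m externalMetric) int64 {
	replicas := current / m.TargetAverageValue
	if current%m.TargetAverageValue != 0 {
		replicas++
	}
	return replicas
}

func main() {
	m := externalMetric{
		MetricName:         metricName(0, "rabbitmq-queueLen"),
		TargetAverageValue: 10,
	}
	fmt.Println(m.MetricName)           // s0-rabbitmq-queueLen
	fmt.Println(desiredReplicas(30, m)) // 3
}
```

The ceiling division mirrors how HPA averages an external metric across pods; the names and types here are assumptions made for the sketch.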


@ -39,4 +39,4 @@ COPY --from=builder /workspace/bin/keda-adapter .
USER 65532:65532
ENTRYPOINT ["/keda-adapter", "--secure-port=6443", "--logtostderr=true", "--v=0"]
ENTRYPOINT ["/keda-adapter", "--secure-port=6443", "--zap-log-level=info", "--zap-encoder=console"]


@ -3,17 +3,8 @@
##################################################
SHELL = /bin/bash
# If E2E_IMAGE_TAG is defined, we are on pr e2e test and we have to use the new tag and append -test to the repository
ifeq '${E2E_IMAGE_TAG}' ''
VERSION ?= main
# SUFFIX here is intentional empty to not append nothing to the repository
SUFFIX =
endif
ifneq '${E2E_IMAGE_TAG}' ''
VERSION = ${E2E_IMAGE_TAG}
SUFFIX = -test
endif
SUFFIX ?=
IMAGE_REGISTRY ?= ghcr.io
IMAGE_REPO ?= kedacore
@ -26,7 +17,7 @@ ARCH ?=amd64
CGO ?=0
TARGET_OS ?=linux
BUILD_PLATFORMS ?= linux/amd64,linux/arm64
BUILD_PLATFORMS ?= linux/amd64,linux/arm64,linux/s390x
OUTPUT_TYPE ?= registry
GIT_VERSION ?= $(shell git describe --always --abbrev=7)
@ -79,6 +70,10 @@ all: build
test: manifests generate fmt vet envtest gotestsum ## Run tests and export the result to junit format.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" $(GOTESTSUM) --format standard-quiet --rerun-fails --junitfile report.xml
.PHONY: test-race
test-race: manifests generate fmt vet envtest gotestsum ## Run tests and export the result to junit format.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" $(GOTESTSUM) --format standard-quiet --rerun-fails --junitfile report-race.xml --packages=./... -- -race
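The new `test-race` target runs the suite under Go's race detector (`-race`). A minimal sketch, unrelated to the KEDA codebase, of the class of bug the detector flags, here kept correct with a mutex:

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from n goroutines. The mutex keeps it
// race-free; removing the Lock/Unlock pair turns c++ into a data race that
// `go test -race` (or `go run -race`) reports.
func count(n int) int {
	var (
		mu sync.Mutex
		c  int
		wg sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			c++ // guarded shared write
			mu.Unlock()
		}()
	}
	wg.Wait()
	return c
}

func main() {
	fmt.Println(count(1000)) // 1000
}
```

Running the plain `test` target would let such a race pass silently whenever the schedule happens to be benign, which is why a separate `-race` run is worth its extra cost.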
.PHONY:
az-login:
@az login --service-principal -u $(TF_AZURE_SP_APP_ID) -p "$(AZURE_SP_KEY)" --tenant $(TF_AZURE_SP_TENANT)
@ -119,6 +114,9 @@ e2e-test-clean-crds: ## Delete all scaled objects and jobs across all namespaces
.PHONY: e2e-test-clean
e2e-test-clean: get-cluster-context ## Delete all namespaces labeled with type=e2e
kubectl delete ns -l type=e2e
# Clean up the strimzi CRDs, helm will not update them on Strimzi install if they already exist
# and we get stranded on old versions when we try to upgrade
kubectl get crd -o name | grep kafka.strimzi.io | xargs -r kubectl delete --ignore-not-found=true --timeout=60s
.PHONY: smoke-test
smoke-test: ## Run e2e tests against Kubernetes cluster configured in ~/.kube/config.


@ -96,7 +96,7 @@ For details, see [Publishing a new version](https://github.com/kedacore/keda-doc
> Note: During hotfix releases, this step isn't required as we don't introduce new features
## 5. Setup continous container scanning with Snyk
## 5. Setup continuous container scanning with Snyk
In order to continuously scan our new container image, they must be imported in our [Snyk project](https://app.snyk.io/org/keda/projects) for all newly introduced tags.


@ -17,6 +17,7 @@ limitations under the License.
package v1alpha1
import (
"context"
"encoding/json"
"fmt"
"slices"
@ -33,28 +34,61 @@ var cloudeventsourcelog = logf.Log.WithName("cloudeventsource-validation-webhook
func (ces *CloudEventSource) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&CloudEventSourceCustomValidator{}).
For(ces).
Complete()
}
func (cces *ClusterCloudEventSource) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&ClusterCloudEventSourceCustomValidator{}).
For(cces).
Complete()
}
// +kubebuilder:webhook:path=/validate-eventing-keda-sh-v1alpha1-cloudeventsource,mutating=false,failurePolicy=ignore,sideEffects=None,groups=eventing.keda.sh,resources=cloudeventsources,verbs=create;update,versions=v1alpha1,name=vcloudeventsource.kb.io,admissionReviewVersions=v1
var _ webhook.Validator = &CloudEventSource{}
// CloudEventSourceCustomValidator is a custom validator for CloudEventSource objects
type CloudEventSourceCustomValidator struct{}
func (cescv CloudEventSourceCustomValidator) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ces := obj.(*CloudEventSource)
return ces.ValidateCreate(request.DryRun)
}
func (cescv CloudEventSourceCustomValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ces := newObj.(*CloudEventSource)
old := oldObj.(*CloudEventSource)
return ces.ValidateUpdate(old, request.DryRun)
}
func (cescv CloudEventSourceCustomValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ces := obj.(*CloudEventSource)
return ces.ValidateDelete(request.DryRun)
}
var _ webhook.CustomValidator = &CloudEventSourceCustomValidator{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (ces *CloudEventSource) ValidateCreate() (admission.Warnings, error) {
func (ces *CloudEventSource) ValidateCreate(_ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(ces, "", " ")
cloudeventsourcelog.Info(fmt.Sprintf("validating cloudeventsource creation for %s", string(val)))
return validateSpec(&ces.Spec)
}
func (ces *CloudEventSource) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
func (ces *CloudEventSource) ValidateUpdate(old runtime.Object, _ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(ces, "", " ")
cloudeventsourcelog.V(1).Info(fmt.Sprintf("validating cloudeventsource update for %s", string(val)))
@@ -66,22 +100,53 @@ func (ces *CloudEventSource) ValidateUpdate(old runtime.Object) (admission.Warni
return validateSpec(&ces.Spec)
}
func (ces *CloudEventSource) ValidateDelete() (admission.Warnings, error) {
func (ces *CloudEventSource) ValidateDelete(_ *bool) (admission.Warnings, error) {
return nil, nil
}
// +kubebuilder:webhook:path=/validate-eventing-keda-sh-v1alpha1-clustercloudeventsource,mutating=false,failurePolicy=ignore,sideEffects=None,groups=eventing.keda.sh,resources=clustercloudeventsources,verbs=create;update,versions=v1alpha1,name=vclustercloudeventsource.kb.io,admissionReviewVersions=v1
var _ webhook.Validator = &ClusterCloudEventSource{}
// ClusterCloudEventSourceCustomValidator is a custom validator for ClusterCloudEventSource objects
type ClusterCloudEventSourceCustomValidator struct{}
func (ccescv ClusterCloudEventSourceCustomValidator) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cces := obj.(*ClusterCloudEventSource)
return cces.ValidateCreate(request.DryRun)
}
func (ccescv ClusterCloudEventSourceCustomValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cces := newObj.(*ClusterCloudEventSource)
old := oldObj.(*ClusterCloudEventSource)
return cces.ValidateUpdate(old, request.DryRun)
}
func (ccescv ClusterCloudEventSourceCustomValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cces := obj.(*ClusterCloudEventSource)
return cces.ValidateDelete(request.DryRun)
}
var _ webhook.CustomValidator = &ClusterCloudEventSourceCustomValidator{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (cces *ClusterCloudEventSource) ValidateCreate() (admission.Warnings, error) {
func (cces *ClusterCloudEventSource) ValidateCreate(_ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(cces, "", " ")
cloudeventsourcelog.Info(fmt.Sprintf("validating clustercloudeventsource creation for %s", string(val)))
return validateSpec(&cces.Spec)
}
func (cces *ClusterCloudEventSource) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
func (cces *ClusterCloudEventSource) ValidateUpdate(old runtime.Object, _ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(cces, "", " ")
cloudeventsourcelog.V(1).Info(fmt.Sprintf("validating clustercloudeventsource update for %s", string(val)))
@@ -93,7 +158,7 @@ func (cces *ClusterCloudEventSource) ValidateUpdate(old runtime.Object) (admissi
return validateSpec(&cces.Spec)
}
func (cces *ClusterCloudEventSource) ValidateDelete() (admission.Warnings, error) {
func (cces *ClusterCloudEventSource) ValidateDelete(_ *bool) (admission.Warnings, error) {
return nil, nil
}

View File

@@ -48,6 +48,8 @@ type ScaledJob struct {
Status ScaledJobStatus `json:"status,omitempty"`
}
const ScaledJobExcludedLabelsAnnotation = "scaledjob.keda.sh/job-excluded-labels"
// ScaledJobSpec defines the desired state of ScaledJob
type ScaledJobSpec struct {
JobTargetRef *batchv1.JobSpec `json:"jobTargetRef"`

View File

@@ -17,6 +17,7 @@ limitations under the License.
package v1alpha1
import (
"context"
"encoding/json"
"fmt"
@@ -32,22 +33,54 @@ var scaledjoblog = logf.Log.WithName("scaledjob-validation-webhook")
func (s *ScaledJob) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&ScaledJobCustomValidator{}).
For(s).
Complete()
}
// +kubebuilder:webhook:path=/validate-keda-sh-v1alpha1-scaledjob,mutating=false,failurePolicy=ignore,sideEffects=None,groups=keda.sh,resources=scaledjobs,verbs=create;update,versions=v1alpha1,name=vscaledjob.kb.io,admissionReviewVersions=v1
var _ webhook.Validator = &ScaledJob{}
// ScaledJobCustomValidator is a custom validator for ScaledJob objects
type ScaledJobCustomValidator struct{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (s *ScaledJob) ValidateCreate() (admission.Warnings, error) {
val, _ := json.MarshalIndent(s, "", " ")
scaledjoblog.Info(fmt.Sprintf("validating scaledjob creation for %s", string(val)))
return nil, verifyTriggers(s, "create", false)
func (sjcv ScaledJobCustomValidator) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
sj := obj.(*ScaledJob)
return sj.ValidateCreate(request.DryRun)
}
func (s *ScaledJob) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
func (sjcv ScaledJobCustomValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
sj := newObj.(*ScaledJob)
old := oldObj.(*ScaledJob)
return sj.ValidateUpdate(old, request.DryRun)
}
func (sjcv ScaledJobCustomValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
sj := obj.(*ScaledJob)
return sj.ValidateDelete(request.DryRun)
}
var _ webhook.CustomValidator = &ScaledJobCustomValidator{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (s *ScaledJob) ValidateCreate(dryRun *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(s, "", " ")
scaledjoblog.Info(fmt.Sprintf("validating scaledjob creation for %s", string(val)))
return nil, verifyTriggers(s, "create", *dryRun)
}
func (s *ScaledJob) ValidateUpdate(old runtime.Object, dryRun *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(s, "", " ")
scaledjoblog.V(1).Info(fmt.Sprintf("validating scaledjob update for %s", string(val)))
@@ -56,10 +89,10 @@ func (s *ScaledJob) ValidateUpdate(old runtime.Object) (admission.Warnings, erro
scaledjoblog.V(1).Info("finalizer removal, skipping validation")
return nil, nil
}
return nil, verifyTriggers(s, "update", false)
return nil, verifyTriggers(s, "update", *dryRun)
}
func (s *ScaledJob) ValidateDelete() (admission.Warnings, error) {
func (s *ScaledJob) ValidateDelete(_ *bool) (admission.Warnings, error) {
return nil, nil
}

View File

@@ -53,6 +53,7 @@ type ScaledObject struct {
const ScaledObjectOwnerAnnotation = "scaledobject.keda.sh/name"
const ScaledObjectTransferHpaOwnershipAnnotation = "scaledobject.keda.sh/transfer-hpa-ownership"
const ScaledObjectExcludedLabelsAnnotation = "scaledobject.keda.sh/hpa-excluded-labels"
const ValidationsHpaOwnershipAnnotation = "validations.keda.sh/hpa-ownership"
const PausedReplicasAnnotation = "autoscaling.keda.sh/paused-replicas"
const PausedAnnotation = "autoscaling.keda.sh/paused"
@@ -305,8 +306,13 @@ func CheckFallbackValid(scaledObject *ScaledObject) error {
if trigger.Type == cpuString || trigger.Type == memoryString {
continue
}
// If at least one trigger is of the type `AverageValue`, then having fallback is valid.
if trigger.MetricType == autoscalingv2.AverageValueMetricType {
effectiveMetricType := trigger.MetricType
if effectiveMetricType == "" {
effectiveMetricType = autoscalingv2.AverageValueMetricType
}
if effectiveMetricType == autoscalingv2.AverageValueMetricType {
fallbackValid = true
break
}
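The hunk above changes the fallback check to treat an unset metric type as AverageValue, since that is the default KEDA applies to external triggers. A small sketch of the defaulting; the type names mimic `k8s.io/api/autoscaling/v2` but are defined locally here for illustration:

```go
package main

import "fmt"

// MetricTargetType mimics k8s.io/api/autoscaling/v2.MetricTargetType.
type MetricTargetType string

const (
	AverageValueMetricType MetricTargetType = "AverageValue"
	ValueMetricType        MetricTargetType = "Value"
)

// effectiveMetricType applies the same defaulting as the fallback check:
// an empty metric type on a trigger counts as AverageValue.
func effectiveMetricType(t MetricTargetType) MetricTargetType {
	if t == "" {
		return AverageValueMetricType
	}
	return t
}

func main() {
	fmt.Println(effectiveMetricType(""))              // AverageValue
	fmt.Println(effectiveMetricType(ValueMetricType)) // Value
}
```

Without this defaulting, a ScaledObject whose triggers omit `metricType` would wrongly fail fallback validation even though it scales on AverageValue semantics.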

View File

@@ -0,0 +1,683 @@
/*
Copyright 2023 The KEDA Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"strings"
"testing"
autoscalingv2 "k8s.io/api/autoscaling/v2"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestCheckFallbackValid(t *testing.T) {
tests := []struct {
name string
scaledObject *ScaledObject
expectedError bool
errorContains string
}{
{
name: "No fallback configured",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: nil,
Triggers: []ScaleTriggers{
{
Type: "couchdb",
},
},
},
},
expectedError: false,
},
{
name: "Explicit AverageValue metricType - valid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: false,
},
{
name: "Implicit AverageValue metricType (empty string) - valid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
MetricType: "", // Empty string should default to AverageValue
},
},
},
},
expectedError: false,
},
{
name: "Value metricType - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
MetricType: autoscalingv2.ValueMetricType,
},
},
},
},
expectedError: true,
errorContains: "type for the fallback to be enabled",
},
{
name: "Multiple triggers with one valid AverageValue - valid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "prometheus",
MetricType: autoscalingv2.ValueMetricType,
},
{
Type: "couchdb",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: false,
},
{
name: "CPU trigger - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "cpu",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: true,
errorContains: "type for the fallback to be enabled",
},
{
name: "Memory trigger - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "memory",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: true,
errorContains: "type for the fallback to be enabled",
},
{
name: "Multiple triggers with one CPU and one valid - valid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "cpu",
MetricType: autoscalingv2.UtilizationMetricType,
},
{
Type: "couchdb",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: false,
},
{
name: "Negative FailureThreshold - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: -1,
Replicas: 1,
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: true,
errorContains: "must both be greater than or equal to 0",
},
{
name: "Negative Replicas - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: -1,
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
MetricType: autoscalingv2.AverageValueMetricType,
},
},
},
},
expectedError: true,
errorContains: "must both be greater than or equal to 0",
},
{
name: "Using ScalingModifiers with AverageValue MetricType - valid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Advanced: &AdvancedConfig{
ScalingModifiers: ScalingModifiers{
MetricType: autoscalingv2.AverageValueMetricType,
Formula: "x * 2",
},
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
},
},
},
},
expectedError: false,
},
{
name: "Using ScalingModifiers with Value MetricType - invalid",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Fallback: &Fallback{
FailureThreshold: 3,
Replicas: 1,
},
Advanced: &AdvancedConfig{
ScalingModifiers: ScalingModifiers{
MetricType: autoscalingv2.ValueMetricType,
Formula: "x * 2",
},
},
Triggers: []ScaleTriggers{
{
Type: "couchdb",
},
},
},
},
expectedError: true,
errorContains: "ScaledObject.Spec.Advanced.ScalingModifiers.MetricType must be AverageValue",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
err := CheckFallbackValid(test.scaledObject)
if test.expectedError && err == nil {
t.Error("Expected error but got nil")
}
if !test.expectedError && err != nil {
t.Errorf("Expected no error but got: %v", err)
}
if test.expectedError && err != nil && test.errorContains != "" {
if !contains(err.Error(), test.errorContains) {
t.Errorf("Error message does not contain expected text.\nExpected to contain: %s\nActual: %s", test.errorContains, err.Error())
}
}
})
}
}
func TestHasPausedReplicaAnnotation(t *testing.T) {
tests := []struct {
name string
annotations map[string]string
expectResult bool
}{
{
name: "No annotations",
annotations: nil,
expectResult: false,
},
{
name: "Has PausedReplicasAnnotation",
annotations: map[string]string{PausedReplicasAnnotation: "5"},
expectResult: true,
},
{
name: "Has other annotations but not PausedReplicasAnnotation",
annotations: map[string]string{"some-other-annotation": "value"},
expectResult: false,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
so := &ScaledObject{
ObjectMeta: metav1.ObjectMeta{
Annotations: test.annotations,
},
}
result := so.HasPausedReplicaAnnotation()
if result != test.expectResult {
t.Errorf("Expected HasPausedReplicaAnnotation to return %v, got %v", test.expectResult, result)
}
})
}
}
func TestHasPausedAnnotation(t *testing.T) {
tests := []struct {
name string
annotations map[string]string
expectResult bool
}{
{
name: "No annotations",
annotations: nil,
expectResult: false,
},
{
name: "Has PausedAnnotation only",
annotations: map[string]string{PausedAnnotation: "true"},
expectResult: true,
},
{
name: "Has PausedReplicasAnnotation only",
annotations: map[string]string{PausedReplicasAnnotation: "5"},
expectResult: true,
},
{
name: "Has both annotations",
annotations: map[string]string{PausedAnnotation: "true", PausedReplicasAnnotation: "5"},
expectResult: true,
},
{
name: "Has other annotations but not paused ones",
annotations: map[string]string{"some-other-annotation": "value"},
expectResult: false,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
so := &ScaledObject{
ObjectMeta: metav1.ObjectMeta{
Annotations: test.annotations,
},
}
result := so.HasPausedAnnotation()
if result != test.expectResult {
t.Errorf("Expected HasPausedAnnotation to return %v, got %v", test.expectResult, result)
}
})
}
}
func TestNeedToBePausedByAnnotation(t *testing.T) {
pausedReplicaCount := int32(5)
tests := []struct {
name string
annotations map[string]string
pausedReplicaCount *int32
expectResult bool
}{
{
name: "No annotations",
annotations: nil,
pausedReplicaCount: nil,
expectResult: false,
},
{
name: "PausedAnnotation with true value",
annotations: map[string]string{PausedAnnotation: "true"},
pausedReplicaCount: nil,
expectResult: true,
},
{
name: "PausedAnnotation with false value",
annotations: map[string]string{PausedAnnotation: "false"},
pausedReplicaCount: nil,
expectResult: false,
},
{
name: "PausedAnnotation with invalid value",
annotations: map[string]string{PausedAnnotation: "invalid"},
pausedReplicaCount: nil,
expectResult: true, // Non-boolean values should default to true
},
{
name: "PausedReplicasAnnotation with value and status set",
annotations: map[string]string{PausedReplicasAnnotation: "5"},
pausedReplicaCount: &pausedReplicaCount,
expectResult: true,
},
{
name: "PausedReplicasAnnotation with value but no status set",
annotations: map[string]string{PausedReplicasAnnotation: "5"},
pausedReplicaCount: nil,
expectResult: false,
},
{
name: "Both annotations set",
annotations: map[string]string{PausedAnnotation: "true", PausedReplicasAnnotation: "5"},
pausedReplicaCount: &pausedReplicaCount,
expectResult: true, // PausedReplicasAnnotation has precedence
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
so := &ScaledObject{
ObjectMeta: metav1.ObjectMeta{
Annotations: test.annotations,
},
Status: ScaledObjectStatus{
PausedReplicaCount: test.pausedReplicaCount,
},
}
result := so.NeedToBePausedByAnnotation()
if result != test.expectResult {
t.Errorf("Expected NeedToBePausedByAnnotation to return %v, got %v", test.expectResult, result)
}
})
}
}
func TestIsUsingModifiers(t *testing.T) {
tests := []struct {
name string
scaledObject *ScaledObject
expectResult bool
}{
{
name: "No Advanced config",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Advanced: nil,
},
},
expectResult: false,
},
{
name: "Empty ScalingModifiers",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Advanced: &AdvancedConfig{
ScalingModifiers: ScalingModifiers{},
},
},
},
expectResult: false,
},
{
name: "Has ScalingModifiers formula",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Advanced: &AdvancedConfig{
ScalingModifiers: ScalingModifiers{
Formula: "x * 2",
},
},
},
},
expectResult: true,
},
{
name: "Has ScalingModifiers target",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
Advanced: &AdvancedConfig{
ScalingModifiers: ScalingModifiers{
Target: "100",
},
},
},
},
expectResult: true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
result := test.scaledObject.IsUsingModifiers()
if result != test.expectResult {
t.Errorf("Expected IsUsingModifiers to return %v, got %v", test.expectResult, result)
}
})
}
}
func TestCheckReplicaCountBoundsAreValid(t *testing.T) {
min1 := int32(1)
min2 := int32(2)
max5 := int32(5)
idle0 := int32(0)
idle1 := int32(1)
idle2 := int32(2)
tests := []struct {
name string
scaledObject *ScaledObject
expectedError bool
errorContains string
}{
{
name: "Valid: min 1, max 5, no idle",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: &min1,
MaxReplicaCount: &max5,
IdleReplicaCount: nil,
},
},
expectedError: false,
},
{
name: "Valid: min 1, max 5, idle 0",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: &min1,
MaxReplicaCount: &max5,
IdleReplicaCount: &idle0,
},
},
expectedError: false,
},
{
name: "Invalid: min 2 > max 1",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: &min2,
MaxReplicaCount: &min1,
},
},
expectedError: true,
errorContains: "MinReplicaCount=2 must be less than MaxReplicaCount=1",
},
{
name: "Invalid: idle 1 == min 1",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: &min1,
MaxReplicaCount: &max5,
IdleReplicaCount: &idle1,
},
},
expectedError: true,
errorContains: "IdleReplicaCount=1 must be less than MinReplicaCount=1",
},
{
name: "Invalid: idle 2 > min 1",
scaledObject: &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: &min1,
MaxReplicaCount: &max5,
IdleReplicaCount: &idle2,
},
},
expectedError: true,
errorContains: "IdleReplicaCount=2 must be less than MinReplicaCount=1",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
err := CheckReplicaCountBoundsAreValid(test.scaledObject)
if test.expectedError && err == nil {
t.Error("Expected error but got nil")
}
if !test.expectedError && err != nil {
t.Errorf("Expected no error but got: %v", err)
}
if test.expectedError && err != nil && test.errorContains != "" {
if !strings.Contains(err.Error(), test.errorContains) {
t.Errorf("Error message does not contain expected text.\nExpected to contain: %s\nActual: %s",
test.errorContains, err.Error())
}
}
})
}
}
func TestGetHPAReplicas(t *testing.T) {
min0 := int32(0)
min5 := int32(5)
max10 := int32(10)
tests := []struct {
name string
minReplicaCount *int32
maxReplicaCount *int32
expectedMin int32
expectedMax int32
}{
{
name: "Default min and max",
minReplicaCount: nil,
maxReplicaCount: nil,
expectedMin: 1, // default minimum
expectedMax: 100, // default maximum
},
{
name: "Custom min, default max",
minReplicaCount: &min5,
maxReplicaCount: nil,
expectedMin: 5,
expectedMax: 100,
},
{
name: "Default min, custom max",
minReplicaCount: nil,
maxReplicaCount: &max10,
expectedMin: 1,
expectedMax: 10,
},
{
name: "Custom min and max",
minReplicaCount: &min5,
maxReplicaCount: &max10,
expectedMin: 5,
expectedMax: 10,
},
{
name: "Zero min, default max",
minReplicaCount: &min0,
maxReplicaCount: nil,
expectedMin: 1, // should use default for 0 value
expectedMax: 100,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
so := &ScaledObject{
Spec: ScaledObjectSpec{
MinReplicaCount: test.minReplicaCount,
MaxReplicaCount: test.maxReplicaCount,
},
}
minReplicas := so.GetHPAMinReplicas()
if *minReplicas != test.expectedMin {
t.Errorf("Expected GetHPAMinReplicas to return %d, got %d", test.expectedMin, *minReplicas)
}
maxReplicas := so.GetHPAMaxReplicas()
if maxReplicas != test.expectedMax {
t.Errorf("Expected GetHPAMaxReplicas to return %d, got %d", test.expectedMax, maxReplicas)
}
})
}
}
// Helper function to check if a string contains a substring
func contains(s, substr string) bool {
return strings.Contains(s, substr)
}

View File

@@ -53,6 +53,18 @@ var memoryString = "memory"
var cpuString = "cpu"
func (so *ScaledObject) SetupWebhookWithManager(mgr ctrl.Manager, cacheMissFallback bool) error {
err := setupKubernetesClients(mgr, cacheMissFallback)
if err != nil {
return fmt.Errorf("failed to setup kubernetes clients: %w", err)
}
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&ScaledObjectCustomValidator{}).
For(so).
Complete()
}
func setupKubernetesClients(mgr ctrl.Manager, cacheMissFallback bool) error {
kc = mgr.GetClient()
restMapper = mgr.GetRESTMapper()
cacheMissToDirectClient = cacheMissFallback
@@ -70,10 +82,8 @@ func (so *ScaledObject) SetupWebhookWithManager(mgr ctrl.Manager, cacheMissFallb
return fmt.Errorf("failed to initialize direct client: %w", err)
}
}
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&ScaledObjectCustomValidator{}).
For(so).
Complete()
return nil
}
// +kubebuilder:webhook:path=/validate-keda-sh-v1alpha1-scaledobject,mutating=false,failurePolicy=ignore,sideEffects=None,groups=keda.sh,resources=scaledobjects,verbs=create;update,versions=v1alpha1,name=vscaledobject.kb.io,admissionReviewVersions=v1

View File

@@ -142,24 +142,24 @@ type AuthPodIdentity struct {
Provider PodIdentityProvider `json:"provider"`
// +optional
IdentityID *string `json:"identityId"`
IdentityID *string `json:"identityId,omitempty"`
// +optional
// Set identityTenantId to override the default Azure tenant id. If this is set, then the IdentityID must also be set
IdentityTenantID *string `json:"identityTenantId"`
IdentityTenantID *string `json:"identityTenantId,omitempty"`
// +optional
// Set identityAuthorityHost to override the default Azure authority host. If this is set, then the IdentityTenantID must also be set
IdentityAuthorityHost *string `json:"identityAuthorityHost"`
IdentityAuthorityHost *string `json:"identityAuthorityHost,omitempty"`
// +kubebuilder:validation:Optional
// RoleArn sets the AWS RoleArn to be used. Mutually exclusive with IdentityOwner
RoleArn *string `json:"roleArn"`
RoleArn *string `json:"roleArn,omitempty"`
// +kubebuilder:validation:Enum=keda;workload
// +optional
// IdentityOwner configures which identity has to be used during auto discovery, keda or the scaled workload. Mutually exclusive with roleArn
IdentityOwner *string `json:"identityOwner"`
IdentityOwner *string `json:"identityOwner,omitempty"`
}
func (a *AuthPodIdentity) GetIdentityID() string {

View File

@@ -17,6 +17,7 @@ limitations under the License.
package v1alpha1
import (
"context"
"encoding/json"
"fmt"
@@ -37,28 +38,61 @@ var triggerauthenticationlog = logf.Log.WithName("triggerauthentication-validati
func (ta *TriggerAuthentication) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&TriggerAuthenticationCustomValidator{}).
For(ta).
Complete()
}
func (cta *ClusterTriggerAuthentication) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
WithValidator(&ClusterTriggerAuthenticationCustomValidator{}).
For(cta).
Complete()
}
// +kubebuilder:webhook:path=/validate-keda-sh-v1alpha1-triggerauthentication,mutating=false,failurePolicy=ignore,sideEffects=None,groups=keda.sh,resources=triggerauthentications,verbs=create;update,versions=v1alpha1,name=vstriggerauthentication.kb.io,admissionReviewVersions=v1
var _ webhook.Validator = &TriggerAuthentication{}
// TriggerAuthenticationCustomValidator is a custom validator for TriggerAuthentication objects
type TriggerAuthenticationCustomValidator struct{}
func (tacv TriggerAuthenticationCustomValidator) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ta := obj.(*TriggerAuthentication)
return ta.ValidateCreate(request.DryRun)
}
func (tacv TriggerAuthenticationCustomValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ta := newObj.(*TriggerAuthentication)
old := oldObj.(*TriggerAuthentication)
return ta.ValidateUpdate(old, request.DryRun)
}
func (tacv TriggerAuthenticationCustomValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
ta := obj.(*TriggerAuthentication)
return ta.ValidateDelete(request.DryRun)
}
var _ webhook.CustomValidator = &TriggerAuthenticationCustomValidator{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (ta *TriggerAuthentication) ValidateCreate() (admission.Warnings, error) {
func (ta *TriggerAuthentication) ValidateCreate(_ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(ta, "", " ")
triggerauthenticationlog.Info(fmt.Sprintf("validating triggerauthentication creation for %s", string(val)))
return validateSpec(&ta.Spec)
}
func (ta *TriggerAuthentication) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
func (ta *TriggerAuthentication) ValidateUpdate(old runtime.Object, _ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(ta, "", " ")
triggerauthenticationlog.V(1).Info(fmt.Sprintf("validating triggerauthentication update for %s", string(val)))
@@ -70,22 +104,53 @@ func (ta *TriggerAuthentication) ValidateUpdate(old runtime.Object) (admission.W
return validateSpec(&ta.Spec)
}
func (ta *TriggerAuthentication) ValidateDelete() (admission.Warnings, error) {
func (ta *TriggerAuthentication) ValidateDelete(_ *bool) (admission.Warnings, error) {
return nil, nil
}
// +kubebuilder:webhook:path=/validate-keda-sh-v1alpha1-clustertriggerauthentication,mutating=false,failurePolicy=ignore,sideEffects=None,groups=keda.sh,resources=clustertriggerauthentications,verbs=create;update,versions=v1alpha1,name=vsclustertriggerauthentication.kb.io,admissionReviewVersions=v1
var _ webhook.Validator = &ClusterTriggerAuthentication{}
// ClusterTriggerAuthenticationCustomValidator is a custom validator for ClusterTriggerAuthentication objects
type ClusterTriggerAuthenticationCustomValidator struct{}
func (ctacv ClusterTriggerAuthenticationCustomValidator) ValidateCreate(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cta := obj.(*ClusterTriggerAuthentication)
return cta.ValidateCreate(request.DryRun)
}
func (ctacv ClusterTriggerAuthenticationCustomValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cta := newObj.(*ClusterTriggerAuthentication)
old := oldObj.(*ClusterTriggerAuthentication)
return cta.ValidateUpdate(old, request.DryRun)
}
func (ctacv ClusterTriggerAuthenticationCustomValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (warnings admission.Warnings, err error) {
request, err := admission.RequestFromContext(ctx)
if err != nil {
return nil, err
}
cta := obj.(*ClusterTriggerAuthentication)
return cta.ValidateDelete(request.DryRun)
}
var _ webhook.CustomValidator = &ClusterTriggerAuthenticationCustomValidator{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (cta *ClusterTriggerAuthentication) ValidateCreate() (admission.Warnings, error) {
func (cta *ClusterTriggerAuthentication) ValidateCreate(_ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(cta, "", " ")
triggerauthenticationlog.Info(fmt.Sprintf("validating clustertriggerauthentication creation for %s", string(val)))
return validateSpec(&cta.Spec)
}
func (cta *ClusterTriggerAuthentication) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
func (cta *ClusterTriggerAuthentication) ValidateUpdate(old runtime.Object, _ *bool) (admission.Warnings, error) {
val, _ := json.MarshalIndent(cta, "", " ")
triggerauthenticationlog.V(1).Info(fmt.Sprintf("validating clustertriggerauthentication update for %s", string(val)))
@@ -98,7 +163,7 @@ func (cta *ClusterTriggerAuthentication) ValidateUpdate(old runtime.Object) (adm
return validateSpec(&cta.Spec)
}
func (cta *ClusterTriggerAuthentication) ValidateDelete() (admission.Warnings, error) {
func (cta *ClusterTriggerAuthentication) ValidateDelete(_ *bool) (admission.Warnings, error) {
return nil, nil
}

View File

@@ -26,15 +26,15 @@ import (
grpcprom "github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"go.uber.org/zap/zapcore"
appsv1 "k8s.io/api/apps/v1"
apimetrics "k8s.io/apiserver/pkg/endpoints/metrics"
"k8s.io/client-go/kubernetes/scheme"
kubemetrics "k8s.io/component-base/metrics"
"k8s.io/component-base/metrics/legacyregistry"
"k8s.io/klog/v2"
"k8s.io/klog/v2/klogr"
ctrl "sigs.k8s.io/controller-runtime"
ctrlcache "sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
ctrlmetrics "sigs.k8s.io/controller-runtime/pkg/metrics"
"sigs.k8s.io/controller-runtime/pkg/metrics/server"
basecmd "sigs.k8s.io/custom-metrics-apiserver/pkg/cmd"
@ -54,10 +54,7 @@ type Adapter struct {
Message string
}
// https://github.com/kedacore/keda/issues/5732
//
//nolint:staticcheck // SA1019: klogr.New is deprecated.
var logger = klogr.New().WithName("keda_metrics_adapter")
var setupLog = ctrl.Log.WithName("keda_metrics_adapter")
var (
adapterClientRequestQPS float32
@ -67,21 +64,24 @@ var (
metricsServiceAddr string
profilingAddr string
metricsServiceGRPCAuthority string
logToSTDerr bool
verbosityLevel int
stdErrThreshold string
)
func (a *Adapter) makeProvider(ctx context.Context) (provider.ExternalMetricsProvider, error) {
scheme := scheme.Scheme
if err := appsv1.SchemeBuilder.AddToScheme(scheme); err != nil {
logger.Error(err, "failed to add apps/v1 scheme to runtime scheme")
setupLog.Error(err, "failed to add apps/v1 scheme to runtime scheme")
return nil, fmt.Errorf("failed to add apps/v1 scheme to runtime scheme (%s)", err)
}
if err := kedav1alpha1.SchemeBuilder.AddToScheme(scheme); err != nil {
logger.Error(err, "failed to add keda scheme to runtime scheme")
setupLog.Error(err, "failed to add keda scheme to runtime scheme")
return nil, fmt.Errorf("failed to add keda scheme to runtime scheme (%s)", err)
}
namespaces, err := kedautil.GetWatchNamespaces()
if err != nil {
logger.Error(err, "failed to get watch namespace")
setupLog.Error(err, "failed to get watch namespace")
return nil, fmt.Errorf("failed to get watch namespace (%s)", err)
}
@ -104,23 +104,23 @@ func (a *Adapter) makeProvider(ctx context.Context) (provider.ExternalMetricsPro
PprofBindAddress: profilingAddr,
})
if err != nil {
logger.Error(err, "failed to setup manager")
setupLog.Error(err, "failed to setup manager")
return nil, err
}
logger.Info("Connecting Metrics Service gRPC client to the server", "address", metricsServiceAddr)
grpcClient, err := metricsservice.NewGrpcClient(metricsServiceAddr, a.SecureServing.ServerCert.CertDirectory, metricsServiceGRPCAuthority, clientMetrics)
setupLog.Info("Connecting Metrics Service gRPC client to the server", "address", metricsServiceAddr)
grpcClient, err := metricsservice.NewGrpcClient(ctx, metricsServiceAddr, a.SecureServing.ServerCert.CertDirectory, metricsServiceGRPCAuthority, clientMetrics)
if err != nil {
logger.Error(err, "error connecting Metrics Service gRPC client to the server", "address", metricsServiceAddr)
setupLog.Error(err, "error connecting Metrics Service gRPC client to the server", "address", metricsServiceAddr)
return nil, err
}
go func() {
if err := mgr.Start(ctx); err != nil {
logger.Error(err, "controller-runtime encountered an error")
setupLog.Error(err, "controller-runtime encountered an error")
os.Exit(1)
}
}()
return kedaprovider.NewProvider(ctx, logger, mgr.GetClient(), *grpcClient), nil
return kedaprovider.NewProvider(ctx, setupLog, mgr.GetClient(), *grpcClient), nil
}
// getMetricHandler returns a http handler that exposes metrics from controller-runtime and apiserver
@ -181,7 +181,7 @@ func RunMetricsServer(ctx context.Context) {
}
go func() {
logger.Info("starting /metrics server endpoint")
setupLog.Info("starting /metrics server endpoint")
// nosemgrep: use-tls
err := server.ListenAndServe()
if err != http.ErrServerClosed {
@ -192,7 +192,7 @@ func RunMetricsServer(ctx context.Context) {
go func() {
<-ctx.Done()
if err := server.Shutdown(ctx); err != nil {
logger.Error(err, "http server shutdown error")
setupLog.Error(err, "http server shutdown error")
}
}()
}
@ -207,15 +207,15 @@ func generateDefaultMetricsServiceAddr() string {
func printWelcomeMsg(cmd *Adapter) error {
clientset, err := cmd.DiscoveryClient()
if err != nil {
logger.Error(err, "not able to get Kubernetes version")
setupLog.Error(err, "not able to get Kubernetes version")
return err
}
version, err := clientset.ServerVersion()
if err != nil {
logger.Error(err, "not able to get Kubernetes version")
setupLog.Error(err, "not able to get Kubernetes version")
return err
}
kedautil.PrintWelcome(logger, kedautil.NewK8sVersion(version), "metrics server")
kedautil.PrintWelcome(setupLog, kedautil.NewK8sVersion(version), "metrics server")
return nil
}
@ -225,18 +225,14 @@ func main() {
var err error
defer func() {
if err != nil {
logger.Error(err, "unable to run external metrics adapter")
setupLog.Error(err, "unable to run external metrics adapter")
}
}()
defer klog.Flush()
klog.InitFlags(nil)
cmd := &Adapter{}
cmd.Name = "keda-adapter"
cmd.Flags().StringVar(&cmd.Message, "msg", "starting adapter...", "startup message")
cmd.Flags().AddGoFlagSet(flag.CommandLine) // make sure we get the klog flags
cmd.Flags().IntVar(&metricsAPIServerPort, "port", 8080, "Set the port for the metrics API server")
cmd.Flags().StringVar(&metricsServiceAddr, "metrics-service-address", generateDefaultMetricsServiceAddr(), "The address of the GRPC Metrics Service Server.")
cmd.Flags().StringVar(&metricsServiceGRPCAuthority, "metrics-service-grpc-authority", "", "Host Authority override for the Metrics Service if the Host Authority is not the same as the address used for the GRPC Metrics Service Server.")
@ -245,31 +241,55 @@ func main() {
cmd.Flags().IntVar(&adapterClientRequestBurst, "kube-api-burst", 30, "Set the burst for throttling requests sent to the apiserver")
cmd.Flags().BoolVar(&disableCompression, "disable-compression", true, "Disable response compression for k8s restAPI in client-go. ")
// legacy klogr flags handled for backwards compatibility. Default set to -1 so it doesn't override values set via zap options
cmd.Flags().IntVar(&verbosityLevel, "v", -1, "Logging level for Metrics Server. (DEPRECATED)")
cmd.Flags().BoolVar(&logToSTDerr, "logtostderr", false, "Logs are written to standard error instead of to files. (DEPRECATED)")
// legacy klogr flags are still registered to prevent breakage on upgrade, but they are treated as no-ops. Since logToSTDerr
// was set to true in prior versions of KEDA, this flag had no actual effect. See
// https://pkg.go.dev/k8s.io/klog/v2
cmd.Flags().StringVar(&stdErrThreshold, "stderrthreshold", "ERROR", "Logging stderrthreshold for Metrics Server. (DEPRECATED)")
// Make sure we get the zap flags
opts := zap.Options{}
opts.BindFlags(flag.CommandLine)
cmd.Flags().AddGoFlagSet(flag.CommandLine)
if err := cmd.Flags().Parse(os.Args); err != nil {
return
}
ctrl.SetLogger(logger)
zapOpts := []zap.Opts{zap.UseFlagOptions(&opts)}
if verbosityLevel > 0 {
// A zap log level should be multiplied by -1 to get the logr verbosity and vice-versa.
zapOpts = append(zapOpts, zap.Level(zapcore.Level(verbosityLevel*-1)))
}
if logToSTDerr {
zapOpts = append(zapOpts, zap.ConsoleEncoder())
}
ctrl.SetLogger(zap.New(zapOpts...))
err = printWelcomeMsg(cmd)
if err != nil {
return
}
err = kedautil.ConfigureMaxProcs(logger)
err = kedautil.ConfigureMaxProcs(setupLog)
if err != nil {
logger.Error(err, "failed to set max procs")
setupLog.Error(err, "failed to set max procs")
return
}
kedaProvider, err := cmd.makeProvider(ctx)
if err != nil {
logger.Error(err, "making provider")
setupLog.Error(err, "making provider")
return
}
cmd.WithExternalMetrics(kedaProvider)
logger.Info(cmd.Message)
setupLog.Info(cmd.Message)
RunMetricsServer(ctx)
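The flag migration above keeps the deprecated klog flags registered while routing real configuration through zap, and maps a positive legacy `-v` value to a negated zap level. A minimal stdlib sketch of the same idea (the mapping helper name is hypothetical; only the flag names and the `-1` sentinel come from the diff):

```go
package main

import (
	"flag"
	"fmt"
)

// zapLevelFromKlogV maps a legacy klog -v value to a zap level: zap levels
// are the negation of logr verbosity, and values <= 0 mean "flag not set".
func zapLevelFromKlogV(v int) (int, bool) {
	if v <= 0 {
		return 0, false
	}
	return -v, true
}

func main() {
	fs := flag.NewFlagSet("keda-adapter", flag.ContinueOnError)
	// Deprecated klog flags stay registered so manifests that still pass
	// them keep parsing after an upgrade; two of them are plain no-ops.
	verbosity := fs.Int("v", -1, "Logging level for Metrics Server. (DEPRECATED)")
	_ = fs.Bool("logtostderr", false, "(DEPRECATED, no-op)")
	_ = fs.String("stderrthreshold", "ERROR", "(DEPRECATED, no-op)")
	if err := fs.Parse([]string{"--v=2", "--logtostderr=true"}); err != nil {
		panic(err)
	}
	if lvl, ok := zapLevelFromKlogV(*verbosity); ok {
		fmt.Println("zap level:", lvl) // prints "zap level: -2"
	}
}
```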

View File

@ -57,9 +57,7 @@ spec:
args:
- /usr/local/bin/keda-adapter
- --secure-port=6443
- --logtostderr=true
- --stderrthreshold=ERROR
- --v=0
- --zap-log-level=error
- --client-ca-file=/certs/ca.crt
- --tls-cert-file=/certs/tls.crt
- --tls-private-key-file=/certs/tls.key

View File

@ -38,6 +38,19 @@ import (
version "github.com/kedacore/keda/v2/version"
)
// storeHpaNameInStatus updates the ScaledObject status subresource with the hpaName.
func (r *ScaledObjectReconciler) storeHpaNameInStatus(ctx context.Context, logger logr.Logger, scaledObject *kedav1alpha1.ScaledObject, hpaName string) error {
status := scaledObject.Status.DeepCopy()
status.HpaName = hpaName
err := kedastatus.UpdateScaledObjectStatus(ctx, r.Client, logger, scaledObject, status)
if err != nil {
logger.Error(err, "Failed to update scaledObject status with used hpaName")
return err
}
return nil
}
// createAndDeployNewHPA creates and deploys an HPA in the cluster for the specified ScaledObject
func (r *ScaledObjectReconciler) createAndDeployNewHPA(ctx context.Context, logger logr.Logger, scaledObject *kedav1alpha1.ScaledObject, gvkr *kedav1alpha1.GroupVersionKindResource) error {
hpaName := getHPAName(scaledObject)
@ -54,17 +67,7 @@ func (r *ScaledObjectReconciler) createAndDeployNewHPA(ctx context.Context, logg
return err
}
// store hpaName in the ScaledObject
status := scaledObject.Status.DeepCopy()
status.HpaName = hpaName
err = kedastatus.UpdateScaledObjectStatus(ctx, r.Client, logger, scaledObject, status)
if err != nil {
logger.Error(err, "Failed to update scaledObject status with used hpaName")
return err
}
return nil
return r.storeHpaNameInStatus(ctx, logger, scaledObject, hpaName)
}
// newHPAForScaledObject returns HPA as it is specified in ScaledObject
@ -95,7 +98,20 @@ func (r *ScaledObjectReconciler) newHPAForScaledObject(ctx context.Context, logg
"app.kubernetes.io/part-of": scaledObject.Name,
"app.kubernetes.io/managed-by": "keda-operator",
}
excludedLabels := map[string]struct{}{}
if labels, ok := scaledObject.ObjectMeta.Annotations[kedav1alpha1.ScaledObjectExcludedLabelsAnnotation]; ok {
for _, excludedLabel := range strings.Split(labels, ",") {
excludedLabels[excludedLabel] = struct{}{}
}
}
for key, value := range scaledObject.ObjectMeta.Labels {
if _, ok := excludedLabels[key]; ok {
continue
}
labels[key] = value
}
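The excluded-labels loop above can be sketched as a standalone helper (the helper name is hypothetical; the comma-separated annotation format follows the diff):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// filterLabels copies src labels into dst, skipping any key named in the
// comma-separated exclude list, mirroring the annotation handling above.
func filterLabels(dst, src map[string]string, exclude string) {
	excluded := map[string]struct{}{}
	for _, k := range strings.Split(exclude, ",") {
		excluded[k] = struct{}{}
	}
	for k, v := range src {
		if _, skip := excluded[k]; skip {
			continue
		}
		dst[k] = v
	}
}

func main() {
	dst := map[string]string{"app.kubernetes.io/managed-by": "keda-operator"}
	src := map[string]string{"team": "payments", "argocd.argoproj.io/instance": "x"}
	filterLabels(dst, src, "argocd.argoproj.io/instance")
	keys := make([]string, 0, len(dst))
	for k := range dst {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Println(keys) // prints "[app.kubernetes.io/managed-by team]"
}
```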

View File

@ -193,9 +193,10 @@ func (r *ScaledObjectReconciler) Reconcile(ctx context.Context, req ctrl.Request
msg, err := r.reconcileScaledObject(ctx, reqLogger, scaledObject, &conditions)
if err != nil {
reqLogger.Error(err, msg)
conditions.SetReadyCondition(metav1.ConditionFalse, "ScaledObjectCheckFailed", msg)
fullErrMsg := fmt.Sprintf("%s: %s", msg, err.Error())
conditions.SetReadyCondition(metav1.ConditionFalse, "ScaledObjectCheckFailed", fullErrMsg)
conditions.SetActiveCondition(metav1.ConditionUnknown, "UnknownState", "ScaledObject check failed")
r.EventEmitter.Emit(scaledObject, req.NamespacedName.Namespace, corev1.EventTypeWarning, eventingv1alpha1.ScaledObjectFailedType, eventreason.ScaledObjectCheckFailed, msg)
r.EventEmitter.Emit(scaledObject, req.NamespacedName.Namespace, corev1.EventTypeWarning, eventingv1alpha1.ScaledObjectFailedType, eventreason.ScaledObjectCheckFailed, fullErrMsg)
} else {
wasReady := conditions.GetReadyCondition()
if wasReady.IsFalse() || wasReady.IsUnknown() {
@ -478,6 +479,14 @@ func (r *ScaledObjectReconciler) ensureHPAForScaledObjectExists(ctx context.Cont
return false, err
}
// If the HPA name does not match the one in ScaledObject status, we need to update the status
if scaledObject.Status.HpaName != hpaName {
err = r.storeHpaNameInStatus(ctx, logger, scaledObject, hpaName)
if err != nil {
return false, err
}
}
return false, nil
}
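The refactor extracts the status write into `storeHpaNameInStatus` so the ensure path can reuse it, and only writes when the stored name differs from the actual HPA name. A toy sketch of that guard (types and names here are illustrative, not KEDA's):

```go
package main

import "fmt"

type status struct{ HpaName string }

// syncHpaName writes the HPA name into status only when it differs,
// avoiding a no-op status update on every reconcile, as in the diff above.
func syncHpaName(st *status, hpaName string) bool {
	if st.HpaName == hpaName {
		return false
	}
	st.HpaName = hpaName
	return true
}

func main() {
	st := &status{}
	fmt.Println(syncHpaName(st, "keda-hpa-demo")) // first call updates: true
	fmt.Println(syncHpaName(st, "keda-hpa-demo")) // second is a no-op: false
}
```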

View File

@ -349,6 +349,62 @@ var _ = Describe("ScaledObjectController", func() {
Expect(errors.IsNotFound(err)).To(Equal(true))
})
It("sets the hpaName in status if not set and HPA already exists", func() {
// Create the scaling target.
deploymentName := "hpa-name-update"
soName := "so-" + deploymentName
err := k8sClient.Create(context.Background(), generateDeployment(deploymentName))
Expect(err).ToNot(HaveOccurred())
// Create the ScaledObject without specifying name.
so := &kedav1alpha1.ScaledObject{
ObjectMeta: metav1.ObjectMeta{Name: soName, Namespace: "default"},
Spec: kedav1alpha1.ScaledObjectSpec{
ScaleTargetRef: &kedav1alpha1.ScaleTarget{
Name: deploymentName,
},
Advanced: &kedav1alpha1.AdvancedConfig{
HorizontalPodAutoscalerConfig: &kedav1alpha1.HorizontalPodAutoscalerConfig{},
},
Triggers: []kedav1alpha1.ScaleTriggers{
{
Type: "cron",
Metadata: map[string]string{
"timezone": "UTC",
"start": "0 * * * *",
"end": "1 * * * *",
"desiredReplicas": "1",
},
},
},
},
}
err = k8sClient.Create(context.Background(), so)
Expect(err).ToNot(HaveOccurred())
// Get and confirm the HPA.
hpa := &autoscalingv2.HorizontalPodAutoscaler{}
Eventually(func() error {
return k8sClient.Get(context.Background(), types.NamespacedName{Name: fmt.Sprintf("keda-hpa-%s", soName), Namespace: "default"}, hpa)
}).ShouldNot(HaveOccurred())
Expect(hpa.Name).To(Equal(fmt.Sprintf("keda-hpa-%s", soName)))
// Remove the HPA name from the ScaledObject.
Eventually(func() error {
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: soName, Namespace: "default"}, so)
Expect(err).ToNot(HaveOccurred())
so.Status.HpaName = ""
return k8sClient.Status().Update(context.Background(), so)
}).ShouldNot(HaveOccurred())
// Wait until the hpaName is updated in the scaled object.
Eventually(func() string {
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: soName, Namespace: "default"}, so)
Expect(err).ToNot(HaveOccurred())
return so.Status.HpaName
}).WithTimeout(60 * time.Second).WithPolling(2 * time.Second).Should(Equal(fmt.Sprintf("keda-hpa-%s", soName)))
})
//https://github.com/kedacore/keda/issues/2407
It("cache is correctly recreated if SO is deleted and created", func() {
// Create the scaling target.

go.mod
View File

@ -42,7 +42,6 @@ require (
github.com/beanstalkd/go-beanstalk v0.2.0
github.com/bradleyfalzon/ghinstallation/v2 v2.14.0
github.com/cloudevents/sdk-go/v2 v2.16.0
github.com/denisenkom/go-mssqldb v0.12.3
github.com/dysnix/predictkube-libs v0.0.4-0.20230109175007-5a82fccd31c7
github.com/dysnix/predictkube-proto v0.0.0-20241017230806-4c74c627f2bb
github.com/elastic/go-elasticsearch/v7 v7.17.10
@ -77,7 +76,7 @@ require (
github.com/prometheus/client_golang v1.21.1
github.com/prometheus/client_model v0.6.1
github.com/prometheus/common v0.63.0
github.com/prometheus/prometheus v0.0.0-00010101000000-000000000000
github.com/prometheus/prometheus v0.54.0
github.com/rabbitmq/amqp091-go v1.10.0
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9
github.com/redis/go-redis/v9 v9.7.3
@ -213,7 +212,7 @@ require (
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/go-errors/errors v1.5.1 // indirect
@ -287,6 +286,7 @@ require (
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/microsoft/go-mssqldb v1.8.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@ -342,7 +342,7 @@ require (
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.temporal.io/api v1.44.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
go.uber.org/zap v1.27.0
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/net v0.38.0 // indirect

go.sum
View File

@ -894,8 +894,6 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/denisenkom/go-mssqldb v0.12.3 h1:pBSGx9Tq67pBOTLmxNuirNTeB8Vjmf886Kx+8Y+8shw=
github.com/denisenkom/go-mssqldb v0.12.3/go.mod h1:k0mtMFOnU+AihqFxPMiF05rtiDrorD1Vrm1KEz5hxDo=
github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE=
github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
@ -970,8 +968,8 @@ github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHk
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
@ -1395,6 +1393,8 @@ github.com/microsoft/ApplicationInsights-Go v0.4.4 h1:G4+H9WNs6ygSCe6sUyxRc2U81T
github.com/microsoft/ApplicationInsights-Go v0.4.4/go.mod h1:fKRUseBqkw6bDiXTs3ESTiU/4YTIHsQS4W3fP2ieF4U=
github.com/microsoft/azure-devops-go-api/azuredevops v1.0.0-b5 h1:YH424zrwLTlyHSH/GzLMJeu5zhYVZSx5RQxGKm1h96s=
github.com/microsoft/azure-devops-go-api/azuredevops v1.0.0-b5/go.mod h1:PoGiBqKSQK1vIfQ+yVaFcGjDySHvym6FM1cNYnwzbrY=
github.com/microsoft/go-mssqldb v1.8.0 h1:7cyZ/AT7ycDsEoWPIXibd+aVKFtteUNhDGf3aobP+tw=
github.com/microsoft/go-mssqldb v1.8.0/go.mod h1:6znkekS3T2vp0waiMhen4GPU1BiAsrP+iXHcE7a7rFo=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=

View File

@ -26,15 +26,6 @@ const (
)
var (
scaledObjectValidatingTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: DefaultPromMetricsNamespace,
Subsystem: "webhook",
Name: "scaled_object_validation_total",
Help: "DEPRECATED - will be removed in 2.16 - Use `scaled_object_validations_total` instead.",
},
[]string{"namespace", "action"},
)
scaledObjectValidationsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: DefaultPromMetricsNamespace,
@ -44,15 +35,6 @@ var (
},
[]string{"namespace", "action"},
)
scaledObjectValidatingErrors = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: DefaultPromMetricsNamespace,
Subsystem: "webhook",
Name: "scaled_object_validation_errors",
Help: "DEPRECATED - will be removed in 2.16 - Use `scaled_object_validation_errors_total` instead.",
},
[]string{"namespace", "action", "reason"},
)
scaledObjectValidationErrorsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: DefaultPromMetricsNamespace,
@ -65,22 +47,18 @@ var (
)
func init() {
metrics.Registry.MustRegister(scaledObjectValidatingTotal)
metrics.Registry.MustRegister(scaledObjectValidationsTotal)
metrics.Registry.MustRegister(scaledObjectValidatingErrors)
metrics.Registry.MustRegister(scaledObjectValidationErrorsTotal)
}
// RecordScaledObjectValidatingTotal counts the number of ScaledObject validations
func RecordScaledObjectValidatingTotal(namespace, action string) {
labels := prometheus.Labels{"namespace": namespace, "action": action}
scaledObjectValidatingTotal.With(labels).Inc()
scaledObjectValidationsTotal.With(labels).Inc()
}
// RecordScaledObjectValidatingErrors counts the number of ScaledObject validating errors
func RecordScaledObjectValidatingErrors(namespace, action, reason string) {
labels := prometheus.Labels{"namespace": namespace, "action": action, "reason": reason}
scaledObjectValidatingErrors.With(labels).Inc()
scaledObjectValidationErrorsTotal.With(labels).Inc()
}

View File

@ -37,7 +37,7 @@ type GrpcClient struct {
connection *grpc.ClientConn
}
func NewGrpcClient(url, certDir, authority string, clientMetrics *grpcprom.ClientMetrics) (*GrpcClient, error) {
func NewGrpcClient(ctx context.Context, url, certDir, authority string, clientMetrics *grpcprom.ClientMetrics) (*GrpcClient, error) {
defaultConfig := `{
"methodConfig": [{
"timeout": "3s",
@ -50,7 +50,7 @@ func NewGrpcClient(url, certDir, authority string, clientMetrics *grpcprom.Clien
}
}]}`
creds, err := utils.LoadGrpcTLSCredentials(certDir, false)
creds, err := utils.LoadGrpcTLSCredentials(ctx, certDir, false)
if err != nil {
return nil, err
}
@ -96,7 +96,7 @@ func (c *GrpcClient) GetMetrics(ctx context.Context, scaledObjectName, scaledObj
}
// WaitForConnectionReady waits for gRPC connection to be ready
// returns true if the connection was successful, false if we hit a timeut from context
// returns true if the connection was successful, false if we hit a timeout from context
func (c *GrpcClient) WaitForConnectionReady(ctx context.Context, logger logr.Logger) bool {
currentState := c.connection.GetState()
if currentState != connectivity.Ready {

View File

@ -88,7 +88,7 @@ func (s *GrpcServer) startServer() error {
func (s *GrpcServer) Start(ctx context.Context) error {
<-s.certsReady
if s.server == nil {
creds, err := utils.LoadGrpcTLSCredentials(s.certDir, true)
creds, err := utils.LoadGrpcTLSCredentials(ctx, s.certDir, true)
if err != nil {
return err
}

View File

@ -17,19 +17,30 @@ limitations under the License.
package utils
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"os"
"path"
"strings"
"sync"
"github.com/fsnotify/fsnotify"
"google.golang.org/grpc/credentials"
logf "sigs.k8s.io/controller-runtime/pkg/log"
)
var log = logf.Log.WithName("grpc_server_certificates")
// LoadGrpcTLSCredentials reads the certificate from the given path and returns TLS transport credentials
func LoadGrpcTLSCredentials(certDir string, server bool) (credentials.TransportCredentials, error) {
func LoadGrpcTLSCredentials(ctx context.Context, certDir string, server bool) (credentials.TransportCredentials, error) {
caPath := path.Join(certDir, "ca.crt")
certPath := path.Join(certDir, "tls.crt")
keyPath := path.Join(certDir, "tls.key")
// Load certificate of the CA who signed client's certificate
pemClientCA, err := os.ReadFile(path.Join(certDir, "ca.crt"))
pemClientCA, err := os.ReadFile(caPath)
if err != nil {
return nil, err
}
@ -43,16 +54,90 @@ func LoadGrpcTLSCredentials(certDir string, server bool) (credentials.TransportC
return nil, fmt.Errorf("failed to add client CA's certificate")
}
// Load certificate and private key
cert, err := tls.LoadX509KeyPair(path.Join(certDir, "tls.crt"), path.Join(certDir, "tls.key"))
// Load initial certificate and private key
mTLSCertificate, err := tls.LoadX509KeyPair(certPath, keyPath)
if err != nil {
return nil, err
}
// Start the watcher for cert updates
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, err
}
err = watcher.Add(certDir)
if err != nil {
return nil, err
}
certMutex := sync.RWMutex{}
go func() {
log.V(1).Info("starting mTLS certificates monitoring")
for {
select {
case event, ok := <-watcher.Events:
if !ok { // Channel was closed (i.e. Watcher.Close() was called).
log.Error(err, "watcher stopped")
return
}
// We are only interested in Create events on the ..data dir,
// as Kubernetes first creates a temp folder with the new
// cert and then renames the whole folder.
// The resulting unix.IN_MOVED_TO is treated as fsnotify.Create
if !event.Has(fsnotify.Create) ||
!strings.HasSuffix(event.Name, "..data") {
continue
}
log.V(1).Info("detected change on certificates, reloading")
pemClientCA, err := os.ReadFile(caPath)
if err != nil {
log.Error(err, "error reading grpc ca certificate")
continue
}
if !certPool.AppendCertsFromPEM(pemClientCA) {
log.Error(err, "failed to add client CA's certificate")
continue
}
log.V(1).Info("grpc ca certificate has been updated")
// Reload the certificate and private key
cert, err := tls.LoadX509KeyPair(certPath, keyPath)
if err != nil {
log.Error(err, "error reading grpc certificate")
continue
}
certMutex.Lock()
mTLSCertificate = cert
certMutex.Unlock()
log.V(1).Info("grpc mTLS certificate has been updated")
case err, ok := <-watcher.Errors:
if !ok { // Channel was closed (i.e. Watcher.Close() was called).
log.Error(err, "watcher stopped")
return
}
log.Error(err, "error reading grpc certificate changes")
case <-ctx.Done():
log.V(1).Info("stopping mTLS certificates monitoring")
return
}
}
}()
// Create the credentials and return it
config := &tls.Config{
MinVersion: tls.VersionTLS13,
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS13,
GetCertificate: func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
certMutex.RLock()
defer certMutex.RUnlock()
return &mTLSCertificate, nil
},
GetClientCertificate: func(_ *tls.CertificateRequestInfo) (*tls.Certificate, error) {
certMutex.RLock()
defer certMutex.RUnlock()
return &mTLSCertificate, nil
},
}
if server {
config.ClientAuth = tls.RequireAndVerifyClientCert

View File

@ -274,7 +274,7 @@ func (s *apacheKafkaScaler) getTopicPartitions(ctx context.Context) (map[string]
for _, topic := range metadata.Topics {
partitions := make([]int, 0)
for _, partition := range topic.Partitions {
// if no partitions limitatitions are specified, all partitions are considered
// if no partitions limitations are specified, all partitions are considered
if (len(s.metadata.PartitionLimitation) == 0) ||
(len(s.metadata.PartitionLimitation) > 0 && kedautil.Contains(s.metadata.PartitionLimitation, partition.ID)) {
partitions = append(partitions, partition.ID)

View File

@ -190,7 +190,7 @@ func (s *arangoDBScaler) getQueryResult(ctx context.Context) (float64, error) {
var result dbResult
if _, err = cursor.ReadDocument(ctx, &result); err != nil {
return -1, fmt.Errorf("query result is not in the specified format, pleast check the query, %w", err)
return -1, fmt.Errorf("query result is not in the specified format, please check the query, %w", err)
}
return result.Value, nil

View File

@ -160,7 +160,7 @@ func (a *sharedConfigCache) retrievePodIdentityCredentials(ctx context.Context,
a.logger.V(1).Info(fmt.Sprintf("using assume web identity role to retrieve token for arnRole %s", roleArn))
return aws.NewCredentialsCache(webIdentityCredentialProvider)
}
a.logger.V(1).Error(err, fmt.Sprintf("error retreiving arnRole %s via WebIdentity", roleArn))
a.logger.V(1).Error(err, fmt.Sprintf("error retrieving arnRole %s via WebIdentity", roleArn))
}
// Fallback to Assume Role
@ -173,7 +173,7 @@ func (a *sharedConfigCache) retrievePodIdentityCredentials(ctx context.Context,
// retrieveStaticCredentials returns an *aws.CredentialsCache for given
// AuthorizationMetadata (using static credentials). This is used for static
// authenticatyion via AwsAccessKeyID & AwsAccessKeySecret
// authentication via AwsAccessKeyID & AwsAccessKeySecret
func (*sharedConfigCache) retrieveStaticCredentials(awsAuthorization AuthorizationMetadata) *aws.CredentialsCache {
staticCredentialsProvider := aws.NewCredentialsCache(credentials.NewStaticCredentialsProvider(awsAuthorization.AwsAccessKeyID, awsAuthorization.AwsSecretAccessKey, awsAuthorization.AwsSessionToken))
return staticCredentialsProvider

View File

@ -70,7 +70,7 @@ func (rt *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
return transport.RoundTrip(req)
}
// parseAwsAMPMetadata parses the data to get the AWS sepcific auth info and metadata
// parseAwsAMPMetadata parses the data to get the AWS specific auth info and metadata
func parseAwsAMPMetadata(config *scalersconfig.ScalerConfig, awsRegion string) (*AuthorizationMetadata, error) {
auth, err := GetAwsAuthorization(config.TriggerUniqueKey, awsRegion, config.PodIdentity, config.TriggerMetadata, config.AuthParams, config.ResolvedEnv)
if err != nil {

View File

@ -175,7 +175,7 @@ func (s *awsDynamoDBStreamsScaler) getDynamoDBStreamShardCount(ctx context.Conte
}
for {
if lastShardID != nil {
// The upper limit of shard num to retrun is 100.
// The upper limit of shard num to return is 100.
// ExclusiveStartShardId is the shard ID of the first item that the operation will evaluate.
input = dynamodbstreams.DescribeStreamInput{
StreamArn: s.streamArn,

View File

@ -29,7 +29,7 @@ type awsSqsQueueScaler struct {
type awsSqsQueueMetadata struct {
TargetQueueLength int64 `keda:"name=queueLength, order=triggerMetadata, default=5"`
ActivationTargetQueueLength int64 `keda:"name=activationQueueLength, order=triggerMetadata, default=0"`
QueueURL string `keda:"name=queueURL;queueURLFromEnv, order=triggerMetadata;resolvedEnv"`
QueueURL string `keda:"name=queueURL, order=triggerMetadata;resolvedEnv"`
queueName string
AwsRegion string `keda:"name=awsRegion, order=triggerMetadata;authParams"`
AwsEndpoint string `keda:"name=awsEndpoint, order=triggerMetadata, optional"`

View File

@ -304,7 +304,7 @@ var testAWSSQSMetadata = []parseAWSSQSMetadataTestData{
map[string]string{
"QUEUE_URL": "",
},
false,
true,
"empty QUEUE_URL env value"},
}
@ -533,3 +533,77 @@ func TestProcessQueueLengthFromSqsQueueAttributesOutput(t *testing.T) {
})
}
}
func TestQueueURLFromEnvResolution(t *testing.T) {
testCases := []struct {
name string
metadata map[string]string
resolvedEnv map[string]string
expectedURL string
expectError bool
}{
{
name: "direct queueURL",
metadata: map[string]string{
"queueURL": testAWSSQSProperQueueURL,
"awsRegion": "eu-west-1",
},
resolvedEnv: map[string]string{},
expectedURL: testAWSSQSProperQueueURL,
expectError: false,
},
{
name: "queueURL from environment variable",
metadata: map[string]string{
"queueURLFromEnv": "QUEUE_URL",
"awsRegion": "eu-west-1",
},
resolvedEnv: map[string]string{
"QUEUE_URL": testAWSSQSProperQueueURL,
},
expectedURL: testAWSSQSProperQueueURL,
expectError: false,
},
{
name: "missing environment variable",
metadata: map[string]string{
"queueURLFromEnv": "MISSING_ENV_VAR",
"awsRegion": "eu-west-1",
},
resolvedEnv: map[string]string{
"QUEUE_URL": testAWSSQSProperQueueURL,
},
expectedURL: "",
expectError: true,
},
{
name: "empty environment variable value",
metadata: map[string]string{
"queueURLFromEnv": "EMPTY_ENV_VAR",
"awsRegion": "eu-west-1",
},
resolvedEnv: map[string]string{
"EMPTY_ENV_VAR": "",
},
expectedURL: "",
expectError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
meta, err := parseAwsSqsQueueMetadata(&scalersconfig.ScalerConfig{
TriggerMetadata: tc.metadata,
ResolvedEnv: tc.resolvedEnv,
AuthParams: testAWSSQSAuthentication,
})
if tc.expectError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tc.expectedURL, meta.QueueURL)
}
})
}
}
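The table-driven test above exercises `queueURL` resolution through an env var, including the new error for a missing or empty value. A sketch mirroring that behaviour (the helper name is hypothetical; the real logic lives in the scaler's typed metadata parsing):

```go
package main

import "fmt"

// resolveFromEnv mirrors what the test above exercises: the metadata value
// under "<key>FromEnv" names an env var, and the resolved value must exist
// and be non-empty; otherwise the direct metadata value is used.
func resolveFromEnv(metadata, resolvedEnv map[string]string, key string) (string, error) {
	envName, ok := metadata[key+"FromEnv"]
	if !ok {
		return metadata[key], nil
	}
	val, ok := resolvedEnv[envName]
	if !ok {
		return "", fmt.Errorf("env var %q not found", envName)
	}
	if val == "" {
		return "", fmt.Errorf("env var %q is empty", envName)
	}
	return val, nil
}

func main() {
	md := map[string]string{"queueURLFromEnv": "QUEUE_URL"}
	env := map[string]string{"QUEUE_URL": "https://sqs.eu-west-1.amazonaws.com/1/q"}
	url, err := resolveFromEnv(md, env, "queueURL")
	fmt.Println(url, err)
}
```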

View File

@ -79,7 +79,7 @@ func getDataExplorerAuthConfig(metadata *DataExplorerMetadata) (*kusto.Connectio
return nil, fmt.Errorf("missing credentials. please ensure that TenantID is provided")
}
kcsb.WithAadAppKey(metadata.ClientID, metadata.ClientSecret, metadata.TenantID)
// This should be here because internaly the SDK resets the configuration
// This should be here because internally the SDK resets the configuration
// after calling `WithAadAppKey`
clientOptions := &policy.ClientOptions{
Cloud: cloud.Configuration{

View File

@ -14,15 +14,15 @@ import (
// EventHubInfo to keep event hub connection and resources
type EventHubInfo struct {
EventHubConnection string
EventHubConsumerGroup string
StorageConnection string
StorageAccountName string
EventHubConnection string `keda:"name=connection, order=authParams;resolvedEnv, optional"`
EventHubConsumerGroup string `keda:"name=consumerGroup, order=triggerMetadata, default=$Default"`
StorageConnection string `keda:"name=storageConnection, order=authParams;resolvedEnv, optional"`
StorageAccountName string `keda:"name=storageAccountName, order=triggerMetadata, optional"`
BlobStorageEndpoint string
BlobContainer string
Namespace string
EventHubName string
CheckpointStrategy string
BlobContainer string `keda:"name=blobContainer, order=triggerMetadata, optional"`
Namespace string `keda:"name=eventHubNamespace, order=triggerMetadata;resolvedEnv, optional"`
EventHubName string `keda:"name=eventHubName, order=triggerMetadata;resolvedEnv, optional"`
CheckpointStrategy string `keda:"name=checkpointStrategy, order=triggerMetadata, optional"`
ServiceBusEndpointSuffix string
PodIdentity kedav1alpha1.AuthPodIdentity
}
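The new `keda:` struct tags on `EventHubInfo` declare the parameter name, the lookup order, and defaults declaratively. A simplified sketch of how such a tag can drive reflection-based parsing (this is an illustration of the pattern, not KEDA's actual `TypedConfig` implementation, and `demoInfo`/`typedConfig` are hypothetical names):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// demoInfo reuses the tag shape from EventHubInfo above.
type demoInfo struct {
	ConsumerGroup string `keda:"name=consumerGroup, order=triggerMetadata, default=$Default"`
	BlobContainer string `keda:"name=blobContainer, order=triggerMetadata, optional"`
}

// typedConfig fills string fields from triggerMetadata according to the
// name=, default= and optional markers in each field's keda tag.
func typedConfig(triggerMetadata map[string]string, out interface{}) error {
	v := reflect.ValueOf(out).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("keda")
		if tag == "" {
			continue
		}
		var name, def string
		optional := false
		for _, part := range strings.Split(tag, ",") {
			part = strings.TrimSpace(part)
			switch {
			case strings.HasPrefix(part, "name="):
				name = strings.TrimPrefix(part, "name=")
			case strings.HasPrefix(part, "default="):
				def = strings.TrimPrefix(part, "default=")
			case part == "optional":
				optional = true
			}
		}
		val, ok := triggerMetadata[name]
		if !ok {
			if def != "" {
				val = def
			} else if !optional {
				return fmt.Errorf("missing required parameter %q", name)
			}
		}
		v.Field(i).SetString(val)
	}
	return nil
}

func main() {
	var info demoInfo
	if err := typedConfig(map[string]string{"blobContainer": "checkpoints"}, &info); err != nil {
		panic(err)
	}
	fmt.Println(info.ConsumerGroup, info.BlobContainer) // default kicks in for consumerGroup
}
```

This declarative style is what lets the hand-rolled `config.TriggerMetadata[...]` lookups further down this diff be deleted.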

View File

@@ -43,10 +43,10 @@ func TestCheckpointFromBlobStorageAzureFunction(t *testing.T) {
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, checkpoint, nil)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,
@@ -81,10 +81,10 @@ func TestCheckpointFromBlobStorageDefault(t *testing.T) {
BlobContainer: containerName,
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, checkpoint, nil)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,
@@ -120,10 +120,10 @@ func TestCheckpointFromBlobStorageDefaultDeprecatedPythonCheckpoint(t *testing.T
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, checkpoint, nil)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,
@@ -162,10 +162,10 @@ func TestCheckpointFromBlobStorageWithBlobMetadata(t *testing.T) {
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, "", metadata)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,
@@ -201,10 +201,10 @@ func TestCheckpointFromBlobStorageGoSdk(t *testing.T) {
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, checkpoint, nil)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,
@@ -243,10 +243,10 @@ func TestCheckpointFromBlobStorageDapr(t *testing.T) {
}
client, err := GetStorageBlobClient(logr.Discard(), eventHubInfo.PodIdentity, eventHubInfo.StorageConnection, eventHubInfo.StorageAccountName, eventHubInfo.BlobStorageEndpoint, 3*time.Second)
assert.NoError(t, err, "error creting the blob client")
assert.NoError(t, err, "error creating the blob client")
err = createNewCheckpointInStorage(ctx, client, containerName, urlPath, checkpoint, nil)
assert.NoError(t, err, "error creating checkoiunt")
assert.NoError(t, err, "error creating checkpoint")
expectedCheckpoint := Checkpoint{
PartitionID: partitionID,

View File

@@ -20,7 +20,6 @@ import (
"context"
"fmt"
"math"
"strconv"
"strings"
"github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
@@ -38,10 +37,7 @@ import (
)
const (
defaultEventHubMessageThreshold = 64
eventHubMetricType = "External"
thresholdMetricName = "unprocessedEventThreshold"
activationThresholdMetricName = "activationUnprocessedEventThreshold"
defaultEventHubConsumerGroup = "$Default"
defaultBlobContainer = ""
defaultCheckpointStrategy = ""
@@ -57,10 +53,10 @@ type azureEventHubScaler struct {
}
type eventHubMetadata struct {
eventHubInfo azure.EventHubInfo
threshold int64
activationThreshold int64
stalePartitionInfoThreshold int64
Threshold int64 `keda:"name=unprocessedEventThreshold, order=triggerMetadata, default=64"`
ActivationThreshold int64 `keda:"name=activationUnprocessedEventThreshold, order=triggerMetadata, default=0"`
StalePartitionInfoThreshold int64 `keda:"name=stalePartitionInfoThreshold, order=triggerMetadata, default=10000"`
EventHubInfo azure.EventHubInfo `keda:"optional"`
triggerIndex int
}
@@ -78,12 +74,12 @@ func NewAzureEventHubScaler(config *scalersconfig.ScalerConfig) (Scaler, error)
return nil, fmt.Errorf("unable to get eventhub metadata: %w", err)
}
eventHubClient, err := azure.GetEventHubClient(parsedMetadata.eventHubInfo, logger)
eventHubClient, err := azure.GetEventHubClient(parsedMetadata.EventHubInfo, logger)
if err != nil {
return nil, fmt.Errorf("unable to get eventhub client: %w", err)
}
blobStorageClient, err := azure.GetStorageBlobClient(logger, config.PodIdentity, parsedMetadata.eventHubInfo.StorageConnection, parsedMetadata.eventHubInfo.StorageAccountName, parsedMetadata.eventHubInfo.BlobStorageEndpoint, config.GlobalHTTPTimeout)
blobStorageClient, err := azure.GetStorageBlobClient(logger, config.PodIdentity, parsedMetadata.EventHubInfo.StorageConnection, parsedMetadata.EventHubInfo.StorageAccountName, parsedMetadata.EventHubInfo.BlobStorageEndpoint, config.GlobalHTTPTimeout)
if err != nil {
return nil, fmt.Errorf("unable to get eventhub client: %w", err)
}
@@ -100,7 +96,11 @@ func NewAzureEventHubScaler(config *scalersconfig.ScalerConfig) (Scaler, error)
// parseAzureEventHubMetadata parses metadata
func parseAzureEventHubMetadata(logger logr.Logger, config *scalersconfig.ScalerConfig) (*eventHubMetadata, error) {
meta := eventHubMetadata{
eventHubInfo: azure.EventHubInfo{},
EventHubInfo: azure.EventHubInfo{},
}
if err := config.TypedConfig(&meta); err != nil {
return nil, fmt.Errorf("error parsing azure eventhub metadata: %w", err)
}
err := parseCommonAzureEventHubMetadata(config, &meta)
@@ -117,48 +117,6 @@ func parseAzureEventHubMetadata(logger logr.Logger, config *scalersconfig.Scaler
}
func parseCommonAzureEventHubMetadata(config *scalersconfig.ScalerConfig, meta *eventHubMetadata) error {
meta.threshold = defaultEventHubMessageThreshold
if val, ok := config.TriggerMetadata[thresholdMetricName]; ok {
threshold, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return fmt.Errorf("error parsing azure eventhub metadata %s: %w", thresholdMetricName, err)
}
meta.threshold = threshold
}
meta.activationThreshold = 0
if val, ok := config.TriggerMetadata[activationThresholdMetricName]; ok {
activationThreshold, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return fmt.Errorf("error parsing azure eventhub metadata %s: %w", activationThresholdMetricName, err)
}
meta.activationThreshold = activationThreshold
}
if config.AuthParams["storageConnection"] != "" {
meta.eventHubInfo.StorageConnection = config.AuthParams["storageConnection"]
} else if config.TriggerMetadata["storageConnectionFromEnv"] != "" {
meta.eventHubInfo.StorageConnection = config.ResolvedEnv[config.TriggerMetadata["storageConnectionFromEnv"]]
}
meta.eventHubInfo.EventHubConsumerGroup = defaultEventHubConsumerGroup
if val, ok := config.TriggerMetadata["consumerGroup"]; ok {
meta.eventHubInfo.EventHubConsumerGroup = val
}
meta.eventHubInfo.CheckpointStrategy = defaultCheckpointStrategy
if val, ok := config.TriggerMetadata["checkpointStrategy"]; ok {
meta.eventHubInfo.CheckpointStrategy = val
}
meta.eventHubInfo.BlobContainer = defaultBlobContainer
if val, ok := config.TriggerMetadata["blobContainer"]; ok {
meta.eventHubInfo.BlobContainer = val
}
serviceBusEndpointSuffixProvider := func(env az.Environment) (string, error) {
return env.ServiceBusEndpointSuffix, nil
}
@@ -166,16 +124,7 @@ func parseCommonAzureEventHubMetadata(config *scalersconfig.ScalerConfig, meta *
if err != nil {
return err
}
meta.eventHubInfo.ServiceBusEndpointSuffix = serviceBusEndpointSuffix
meta.stalePartitionInfoThreshold = defaultStalePartitionInfoThreshold
if val, ok := config.TriggerMetadata["stalePartitionInfoThreshold"]; ok {
stalePartitionInfoThreshold, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return fmt.Errorf("error parsing azure eventhub metadata stalePartitionInfoThreshold: %w", err)
}
meta.stalePartitionInfoThreshold = stalePartitionInfoThreshold
}
meta.EventHubInfo.ServiceBusEndpointSuffix = serviceBusEndpointSuffix
meta.triggerIndex = config.TriggerIndex
@@ -183,32 +132,21 @@ func parseCommonAzureEventHubMetadata(config *scalersconfig.ScalerConfig, meta *
}
func parseAzureEventHubAuthenticationMetadata(logger logr.Logger, config *scalersconfig.ScalerConfig, meta *eventHubMetadata) error {
meta.eventHubInfo.PodIdentity = config.PodIdentity
meta.EventHubInfo.PodIdentity = config.PodIdentity
switch config.PodIdentity.Provider {
case "", v1alpha1.PodIdentityProviderNone:
if len(meta.eventHubInfo.StorageConnection) == 0 {
if len(meta.EventHubInfo.StorageConnection) == 0 {
return fmt.Errorf("no storage connection string given")
}
connection := ""
if config.AuthParams["connection"] != "" {
connection = config.AuthParams["connection"]
} else if config.TriggerMetadata["connectionFromEnv"] != "" {
connection = config.ResolvedEnv[config.TriggerMetadata["connectionFromEnv"]]
}
connection := meta.EventHubInfo.EventHubConnection
if len(connection) == 0 {
return fmt.Errorf("no event hub connection string given")
}
if !strings.Contains(connection, "EntityPath") {
eventHubName := ""
if config.TriggerMetadata["eventHubName"] != "" {
eventHubName = config.TriggerMetadata["eventHubName"]
} else if config.TriggerMetadata["eventHubNameFromEnv"] != "" {
eventHubName = config.ResolvedEnv[config.TriggerMetadata["eventHubNameFromEnv"]]
}
eventHubName := meta.EventHubInfo.EventHubName
if eventHubName == "" {
return fmt.Errorf("connection string does not contain event hub name, and parameter eventHubName not provided")
@@ -217,16 +155,13 @@ func parseAzureEventHubAuthenticationMetadata(logger logr.Logger, config *scaler
connection = fmt.Sprintf("%s;EntityPath=%s", connection, eventHubName)
}
meta.eventHubInfo.EventHubConnection = connection
meta.EventHubInfo.EventHubConnection = connection
case v1alpha1.PodIdentityProviderAzureWorkload:
meta.eventHubInfo.StorageAccountName = ""
if val, ok := config.TriggerMetadata["storageAccountName"]; ok {
meta.eventHubInfo.StorageAccountName = val
} else {
if meta.EventHubInfo.StorageAccountName == "" {
logger.Info("no 'storageAccountName' provided to enable identity based authentication to Blob Storage. Attempting to use connection string instead")
}
if len(meta.eventHubInfo.StorageAccountName) != 0 {
if len(meta.EventHubInfo.StorageAccountName) != 0 {
storageEndpointSuffixProvider := func(env az.Environment) (string, error) {
return env.StorageEndpointSuffix, nil
}
@@ -234,30 +169,18 @@ func parseAzureEventHubAuthenticationMetadata(logger logr.Logger, config *scaler
if err != nil {
return err
}
meta.eventHubInfo.BlobStorageEndpoint = "blob." + storageEndpointSuffix
meta.EventHubInfo.BlobStorageEndpoint = "blob." + storageEndpointSuffix
}
if len(meta.eventHubInfo.StorageConnection) == 0 && len(meta.eventHubInfo.StorageAccountName) == 0 {
if len(meta.EventHubInfo.StorageConnection) == 0 && len(meta.EventHubInfo.StorageAccountName) == 0 {
return fmt.Errorf("no storage connection string or storage account name for pod identity based authentication given")
}
if config.TriggerMetadata["eventHubNamespace"] != "" {
meta.eventHubInfo.Namespace = config.TriggerMetadata["eventHubNamespace"]
} else if config.TriggerMetadata["eventHubNamespaceFromEnv"] != "" {
meta.eventHubInfo.Namespace = config.ResolvedEnv[config.TriggerMetadata["eventHubNamespaceFromEnv"]]
}
if len(meta.eventHubInfo.Namespace) == 0 {
if len(meta.EventHubInfo.Namespace) == 0 {
return fmt.Errorf("no event hub namespace string given")
}
if config.TriggerMetadata["eventHubName"] != "" {
meta.eventHubInfo.EventHubName = config.TriggerMetadata["eventHubName"]
} else if config.TriggerMetadata["eventHubNameFromEnv"] != "" {
meta.eventHubInfo.EventHubName = config.ResolvedEnv[config.TriggerMetadata["eventHubNameFromEnv"]]
}
if len(meta.eventHubInfo.EventHubName) == 0 {
if len(meta.EventHubInfo.EventHubName) == 0 {
return fmt.Errorf("no event hub name string given")
}
}
@@ -272,17 +195,17 @@ func (s *azureEventHubScaler) GetUnprocessedEventCountInPartition(ctx context.Co
return 0, azure.Checkpoint{}, nil
}
checkpoint, err = azure.GetCheckpointFromBlobStorage(ctx, s.blobStorageClient, s.metadata.eventHubInfo, partitionInfo.PartitionID)
checkpoint, err = azure.GetCheckpointFromBlobStorage(ctx, s.blobStorageClient, s.metadata.EventHubInfo, partitionInfo.PartitionID)
if err != nil {
// if blob not found return the total partition event count
if bloberror.HasCode(err, bloberror.BlobNotFound, bloberror.ContainerNotFound) {
s.logger.V(1).Error(err, fmt.Sprintf("Blob container : %s not found to use checkpoint strategy, getting unprocessed event count without checkpoint", s.metadata.eventHubInfo.BlobContainer))
s.logger.V(1).Error(err, fmt.Sprintf("Blob container : %s not found to use checkpoint strategy, getting unprocessed event count without checkpoint", s.metadata.EventHubInfo.BlobContainer))
return GetUnprocessedEventCountWithoutCheckpoint(partitionInfo), azure.Checkpoint{}, nil
}
return -1, azure.Checkpoint{}, fmt.Errorf("unable to get checkpoint from storage: %w", err)
}
unprocessedEventCountInPartition := calculateUnprocessedEvents(partitionInfo, checkpoint, s.metadata.stalePartitionInfoThreshold)
unprocessedEventCountInPartition := calculateUnprocessedEvents(partitionInfo, checkpoint, s.metadata.StalePartitionInfoThreshold)
return unprocessedEventCountInPartition, checkpoint, nil
}
@@ -329,9 +252,9 @@ func GetUnprocessedEventCountWithoutCheckpoint(partitionInfo azeventhubs.Partiti
func (s *azureEventHubScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("azure-eventhub-%s", s.metadata.eventHubInfo.EventHubConsumerGroup))),
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("azure-eventhub-%s", s.metadata.EventHubInfo.EventHubConsumerGroup))),
},
Target: GetMetricTarget(s.metricType, s.metadata.threshold),
Target: GetMetricTarget(s.metricType, s.metadata.Threshold),
}
metricSpec := v2.MetricSpec{External: externalMetric, Type: eventHubMetricType}
return []v2.MetricSpec{metricSpec}
@@ -395,11 +318,11 @@ func (s *azureEventHubScaler) GetMetricsAndActivity(ctx context.Context, metricN
}
// don't scale out beyond the number of partitions
lagRelatedToPartitionCount := getTotalLagRelatedToPartitionAmount(totalUnprocessedEventCount, int64(len(partitionIDs)), s.metadata.threshold)
lagRelatedToPartitionCount := getTotalLagRelatedToPartitionAmount(totalUnprocessedEventCount, int64(len(partitionIDs)), s.metadata.Threshold)
s.logger.V(1).Info(fmt.Sprintf("Unprocessed events in event hub total: %d, scaling for a lag of %d related to %d partitions", totalUnprocessedEventCount, lagRelatedToPartitionCount, len(partitionIDs)))
metric := GenerateMetricInMili(metricName, float64(lagRelatedToPartitionCount))
return []external_metrics.ExternalMetricValue{metric}, totalUnprocessedEventCount > s.metadata.activationThreshold, nil
return []external_metrics.ExternalMetricValue{metric}, totalUnprocessedEventCount > s.metadata.ActivationThreshold, nil
}
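The "don't scale out beyond the number of partitions" comment above caps the reported lag before it becomes a metric. A hedged sketch of that clamp (the function name and exact rounding here are assumptions, not KEDA's `getTotalLagRelatedToPartitionAmount` verbatim):

```go
package main

import "fmt"

// lagCappedByPartitions clamps total lag so that lag/threshold never
// implies more replicas than there are partitions to consume from.
func lagCappedByPartitions(totalLag, partitionCount, threshold int64) int64 {
	if limit := partitionCount * threshold; totalLag > limit {
		return limit
	}
	return totalLag
}

func main() {
	// 4 partitions with a threshold of 100 can justify at most 4 replicas,
	// so lag beyond 400 is clamped.
	fmt.Println(lagCappedByPartitions(1000, 4, 100))
	fmt.Println(lagCappedByPartitions(250, 4, 100))
}
```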

View File

@@ -261,7 +261,7 @@ var eventHubMetricIdentifiers = []eventHubMetricIdentifier{
var testEventHubScaler = azureEventHubScaler{
metadata: &eventHubMetadata{
eventHubInfo: azure.EventHubInfo{
EventHubInfo: azure.EventHubInfo{
EventHubConnection: "none",
StorageConnection: "none",
},
@@ -326,10 +326,10 @@ func TestGetUnprocessedEventCountInPartition(t *testing.T) {
}
// Can actually test that numbers return
testEventHubScaler.metadata.eventHubInfo.EventHubConnection = eventHubConnectionString
testEventHubScaler.metadata.EventHubInfo.EventHubConnection = eventHubConnectionString
testEventHubScaler.eventHubClient = eventHubProducer
testEventHubScaler.blobStorageClient = blobClient
testEventHubScaler.metadata.eventHubInfo.EventHubConsumerGroup = "$Default"
testEventHubScaler.metadata.EventHubInfo.EventHubConsumerGroup = "$Default"
// Send 1 message to event hub first
t.Log("Sending message to event hub")
@@ -411,10 +411,10 @@ func TestGetUnprocessedEventCountIfNoCheckpointExists(t *testing.T) {
}
// Can actually test that numbers return
testEventHubScaler.metadata.eventHubInfo.EventHubConnection = eventHubConnectionString
testEventHubScaler.metadata.EventHubInfo.EventHubConnection = eventHubConnectionString
testEventHubScaler.eventHubClient = client
testEventHubScaler.blobStorageClient = blobClient
testEventHubScaler.metadata.eventHubInfo.EventHubConsumerGroup = "$Default"
testEventHubScaler.metadata.EventHubInfo.EventHubConsumerGroup = "$Default"
// Send 1 message to event hub first
t.Log("Sending message to event hub")

View File

@@ -19,7 +19,6 @@ package scalers
import (
"context"
"fmt"
"strconv"
"strings"
"time"
@@ -48,18 +47,67 @@ type azureLogAnalyticsScaler struct {
}
type azureLogAnalyticsMetadata struct {
tenantID string
clientID string
clientSecret string
workspaceID string
podIdentity kedav1alpha1.AuthPodIdentity
query string
threshold float64
activationThreshold float64
triggerIndex int
cloud azcloud.Configuration
unsafeSsl bool
timeout time.Duration // custom HTTP client timeout
TenantID string `keda:"name=tenantId, order=authParams;triggerMetadata;resolvedEnv, optional"`
ClientID string `keda:"name=clientId, order=authParams;triggerMetadata;resolvedEnv, optional"`
ClientSecret string `keda:"name=clientSecret, order=authParams;triggerMetadata;resolvedEnv, optional"`
WorkspaceID string `keda:"name=workspaceId, order=authParams;triggerMetadata;resolvedEnv"`
PodIdentity kedav1alpha1.AuthPodIdentity
Query string `keda:"name=query, order=triggerMetadata"`
Threshold float64 `keda:"name=threshold, order=triggerMetadata"`
ActivationThreshold float64 `keda:"name=activationThreshold, order=triggerMetadata, default=0"`
LogAnalyticsResourceURL string `keda:"name=logAnalyticsResourceURL, order=triggerMetadata, optional"`
TriggerIndex int
CloudName string `keda:"name=cloud, order=triggerMetadata, default=azurePublicCloud"`
Cloud azcloud.Configuration
UnsafeSsl bool `keda:"name=unsafeSsl, order=triggerMetadata, default=false"`
Timeout time.Duration `keda:"name=timeout, order=triggerMetadata, optional"`
}
func (m *azureLogAnalyticsMetadata) Validate() error {
missingParameter := ""
switch m.PodIdentity.Provider {
case "", kedav1alpha1.PodIdentityProviderNone:
if m.TenantID == "" {
missingParameter = "tenantId"
}
if m.ClientID == "" {
missingParameter = "clientId"
}
if m.ClientSecret == "" {
missingParameter = "clientSecret"
}
case kedav1alpha1.PodIdentityProviderAzureWorkload:
break
default:
return fmt.Errorf("error parsing metadata. Details: Log Analytics Scaler doesn't support pod identity %s", m.PodIdentity.Provider)
}
m.Cloud = azcloud.AzurePublic
if strings.EqualFold(m.CloudName, azure.PrivateCloud) {
if m.LogAnalyticsResourceURL != "" {
m.Cloud.Services[azquery.ServiceNameLogs] = azcloud.ServiceConfiguration{
Endpoint: fmt.Sprintf("%s/v1", m.LogAnalyticsResourceURL),
Audience: m.LogAnalyticsResourceURL,
}
} else {
return fmt.Errorf("logAnalyticsResourceURL must be provided for %s cloud type", azure.PrivateCloud)
}
} else if resource, ok := azure.AzureClouds[strings.ToUpper(m.CloudName)]; ok {
m.Cloud = resource
} else {
return fmt.Errorf("there is no cloud environment matching the name %s", m.CloudName)
}
if m.Timeout > 0 {
m.Timeout *= time.Millisecond
}
if missingParameter != "" {
return fmt.Errorf("error parsing metadata. Details: %s was not found in metadata. Check your ScaledObject configuration", missingParameter)
}
return nil
}
// NewAzureLogAnalyticsScaler creates a new Azure Log Analytics Scaler
@@ -96,7 +144,7 @@ func CreateAzureLogsClient(config *scalersconfig.ScalerConfig, meta *azureLogAna
var err error
switch config.PodIdentity.Provider {
case "", kedav1alpha1.PodIdentityProviderNone:
creds, err = azidentity.NewClientSecretCredential(meta.tenantID, meta.clientID, meta.clientSecret, nil)
creds, err = azidentity.NewClientSecretCredential(meta.TenantID, meta.ClientID, meta.ClientSecret, nil)
case kedav1alpha1.PodIdentityProviderAzureWorkload:
creds, err = azure.NewChainedCredential(logger, config.PodIdentity)
default:
@@ -107,8 +155,8 @@ func CreateAzureLogsClient(config *scalersconfig.ScalerConfig, meta *azureLogAna
}
client, err := azquery.NewLogsClient(creds, &azquery.LogsClientOptions{
ClientOptions: policy.ClientOptions{
Transport: kedautil.CreateHTTPClient(meta.timeout, meta.unsafeSsl),
Cloud: meta.cloud,
Transport: kedautil.CreateHTTPClient(meta.Timeout, meta.UnsafeSsl),
Cloud: meta.Cloud,
},
})
if err != nil {
@@ -118,124 +166,15 @@ func CreateAzureLogsClient(config *scalersconfig.ScalerConfig, meta *azureLogAna
}
func parseAzureLogAnalyticsMetadata(config *scalersconfig.ScalerConfig) (*azureLogAnalyticsMetadata, error) {
meta := azureLogAnalyticsMetadata{}
switch config.PodIdentity.Provider {
case "", kedav1alpha1.PodIdentityProviderNone:
// Getting tenantId
tenantID, err := getParameterFromConfig(config, "tenantId", true)
if err != nil {
return nil, err
}
meta.tenantID = tenantID
// Getting clientId
clientID, err := getParameterFromConfig(config, "clientId", true)
if err != nil {
return nil, err
}
meta.clientID = clientID
// Getting clientSecret
clientSecret, err := getParameterFromConfig(config, "clientSecret", true)
if err != nil {
return nil, err
}
meta.clientSecret = clientSecret
meta.podIdentity = config.PodIdentity
case kedav1alpha1.PodIdentityProviderAzureWorkload:
meta.podIdentity = config.PodIdentity
default:
return nil, fmt.Errorf("error parsing metadata. Details: Log Analytics Scaler doesn't support pod identity %s", config.PodIdentity.Provider)
meta := &azureLogAnalyticsMetadata{}
meta.TriggerIndex = config.TriggerIndex
meta.Timeout = config.GlobalHTTPTimeout
meta.PodIdentity = config.PodIdentity
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing azure loganalytics metadata: %w", err)
}
// Getting workspaceId
workspaceID, err := getParameterFromConfig(config, "workspaceId", true)
if err != nil {
return nil, err
}
meta.workspaceID = workspaceID
// Getting query, observe that we dont check AuthParams for query
query, err := getParameterFromConfig(config, "query", false)
if err != nil {
return nil, err
}
meta.query = query
// Getting threshold, observe that we don't check AuthParams for threshold
val, err := getParameterFromConfig(config, "threshold", false)
if err != nil {
if config.AsMetricSource {
val = "0"
} else {
return nil, err
}
}
threshold, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("error parsing metadata. Details: can't parse threshold. Inner Error: %w", err)
}
meta.threshold = threshold
// Getting activationThreshold
meta.activationThreshold = 0
val, err = getParameterFromConfig(config, "activationThreshold", false)
if err == nil {
activationThreshold, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("error parsing metadata. Details: can't parse threshold. Inner Error: %w", err)
}
meta.activationThreshold = activationThreshold
}
meta.triggerIndex = config.TriggerIndex
meta.cloud = azcloud.AzurePublic
if cloud, ok := config.TriggerMetadata["cloud"]; ok {
if strings.EqualFold(cloud, azure.PrivateCloud) {
if resource, ok := config.TriggerMetadata["logAnalyticsResourceURL"]; ok && resource != "" {
meta.cloud.Services[azquery.ServiceNameLogs] = azcloud.ServiceConfiguration{
Endpoint: fmt.Sprintf("%s/v1", resource),
Audience: resource,
}
} else {
return nil, fmt.Errorf("logAnalyticsResourceURL must be provided for %s cloud type", azure.PrivateCloud)
}
} else if resource, ok := azure.AzureClouds[strings.ToUpper(cloud)]; ok {
meta.cloud = resource
} else {
return nil, fmt.Errorf("there is no cloud environment matching the name %s", cloud)
}
}
// Getting unsafeSsl, observe that we don't check AuthParams for unsafeSsl
meta.unsafeSsl = false
unsafeSslVal, err := getParameterFromConfig(config, "unsafeSsl", false)
if err == nil {
unsafeSsl, err := strconv.ParseBool(unsafeSslVal)
if err != nil {
return nil, fmt.Errorf("error parsing metadata. Details: can't parse unsafeSsl. Inner Error: %w", err)
}
meta.unsafeSsl = unsafeSsl
}
// Resolve HTTP client timeout
meta.timeout = config.GlobalHTTPTimeout
timeoutVal, err := getParameterFromConfig(config, "timeout", false)
if err == nil {
timeout, err := strconv.Atoi(timeoutVal)
if err != nil {
return nil, fmt.Errorf("unable to parse timeout: %w", err)
}
if timeout <= 0 {
return nil, fmt.Errorf("timeout must be greater than 0: %w", err)
}
meta.timeout = time.Duration(timeout) * time.Millisecond
}
return &meta, nil
return meta, nil
}
// getParameterFromConfig gets the parameter from the configs, if checkAuthParams is true
@@ -254,9 +193,9 @@ func getParameterFromConfig(config *scalersconfig.ScalerConfig, parameter string
func (s *azureLogAnalyticsScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("%s-%s", "azure-log-analytics", s.metadata.workspaceID))),
Name: GenerateMetricNameWithIndex(s.metadata.TriggerIndex, kedautil.NormalizeString(fmt.Sprintf("%s-%s", "azure-log-analytics", s.metadata.WorkspaceID))),
},
Target: GetMetricTargetMili(s.metricType, s.metadata.threshold),
Target: GetMetricTargetMili(s.metricType, s.metadata.Threshold),
}
metricSpec := v2.MetricSpec{External: externalMetric, Type: externalMetricType}
return []v2.MetricSpec{metricSpec}
@@ -272,7 +211,7 @@ func (s *azureLogAnalyticsScaler) GetMetricsAndActivity(ctx context.Context, met
metric := GenerateMetricInMili(metricName, val)
return []external_metrics.ExternalMetricValue{metric}, val > s.metadata.activationThreshold, nil
return []external_metrics.ExternalMetricValue{metric}, val > s.metadata.ActivationThreshold, nil
}
func (s *azureLogAnalyticsScaler) Close(context.Context) error {
@@ -280,8 +219,8 @@ func (s *azureLogAnalyticsScaler) Close(context.Context) error {
}
func (s *azureLogAnalyticsScaler) getMetricData(ctx context.Context) (float64, error) {
response, err := s.client.QueryWorkspace(ctx, s.metadata.workspaceID, azquery.Body{
Query: &s.metadata.query,
response, err := s.client.QueryWorkspace(ctx, s.metadata.WorkspaceID, azquery.Body{
Query: &s.metadata.Query,
}, nil)
if err != nil {
return -1, err

View File

@@ -100,8 +100,6 @@ var testLogAnalyticsMetadata = []parseLogAnalyticsMetadataTestData{
{map[string]string{"tenantIdFromEnv": "d248da64-0e1e-4f79-b8c6-72ab7aa055eb", "clientIdFromEnv": "41826dd4-9e0a-4357-a5bd-a88ad771ea7d", "clientSecretFromEnv": "U6DtAX5r6RPZxd~l12Ri3X8J9urt5Q-xs", "workspaceIdFromEnv": "074dd9f8-c368-4220-9400-acb6e80fc325", "query": query, "threshold": "1900000000", "cloud": "azureGermanCloud"}, true},
// Valid HTTP timeout
{map[string]string{"tenantId": "d248da64-0e1e-4f79-b8c6-72ab7aa055eb", "clientId": "41826dd4-9e0a-4357-a5bd-a88ad771ea7d", "clientSecret": "U6DtAX5r6RPZxd~l12Ri3X8J9urt5Q-xs", "workspaceId": "074dd9f8-c368-4220-9400-acb6e80fc325", "query": query, "threshold": "1900000000", "timeout": "1000"}, false},
// Invalid - 0 - HTTP timeout
{map[string]string{"tenantId": "d248da64-0e1e-4f79-b8c6-72ab7aa055eb", "clientId": "41826dd4-9e0a-4357-a5bd-a88ad771ea7d", "clientSecret": "U6DtAX5r6RPZxd~l12Ri3X8J9urt5Q-xs", "workspaceId": "074dd9f8-c368-4220-9400-acb6e80fc325", "query": query, "threshold": "1900000000", "timeout": "0"}, true},
// Invalid - negative - HTTP timeout
{map[string]string{"tenantId": "d248da64-0e1e-4f79-b8c6-72ab7aa055eb", "clientId": "41826dd4-9e0a-4357-a5bd-a88ad771ea7d", "clientSecret": "U6DtAX5r6RPZxd~l12Ri3X8J9urt5Q-xs", "workspaceId": "074dd9f8-c368-4220-9400-acb6e80fc325", "query": query, "threshold": "1900000000", "timeout": "-1"}, true},
// Invalid - not a number - HTTP timeout
@@ -233,8 +231,8 @@ func TestLogAnalyticsParseMetadataUnsafeSsl(t *testing.T) {
t.Error("Expected error but got success")
}
if meta != nil {
if meta.unsafeSsl != testData.unsafeSsl {
t.Errorf("Expected unsafeSsl to be %v but got %v", testData.unsafeSsl, meta.unsafeSsl)
if meta.UnsafeSsl != testData.unsafeSsl {
t.Errorf("Expected unsafeSsl to be %v but got %v", testData.unsafeSsl, meta.UnsafeSsl)
}
}
}

View File

@@ -67,7 +67,7 @@ type azureServiceBusMetadata struct {
FullyQualifiedNamespace string
UseRegex bool `keda:"name=useRegex, order=triggerMetadata, optional"`
EntityNameRegex *regexp.Regexp
Operation string `keda:"name=operation, order=triggerMetadata, enum=sum;max;avg, optional"`
Operation string `keda:"name=operation, order=triggerMetadata, enum=sum;max;avg, default=sum"`
triggerIndex int
timeout time.Duration
}

View File

@@ -108,6 +108,7 @@ var parseServiceBusMetadataDataset = []parseServiceBusMetadataTestData{
{map[string]string{"queueName": queueName, "connectionFromEnv": connectionSetting, "useRegex": "true", "operation": avgOperation}, false, queue, defaultSuffix, map[string]string{}, ""},
{map[string]string{"queueName": queueName, "connectionFromEnv": connectionSetting, "useRegex": "true", "operation": sumOperation}, false, queue, defaultSuffix, map[string]string{}, ""},
{map[string]string{"queueName": queueName, "connectionFromEnv": connectionSetting, "useRegex": "true", "operation": maxOperation}, false, queue, defaultSuffix, map[string]string{}, ""},
{map[string]string{"queueName": queueName, "connectionFromEnv": connectionSetting, "useRegex": "true"}, false, queue, defaultSuffix, map[string]string{}, ""},
{map[string]string{"queueName": queueName, "connectionFromEnv": connectionSetting, "useRegex": "true", "operation": "random"}, true, queue, defaultSuffix, map[string]string{}, ""},
// queue with invalid regex string
{map[string]string{"queueName": "*", "connectionFromEnv": connectionSetting, "useRegex": "true", "operation": "avg"}, true, queue, defaultSuffix, map[string]string{}, ""},

View File

@@ -34,12 +34,12 @@ type BeanstalkdScaler struct {
}
type BeanstalkdMetadata struct {
Server string `keda:"name=server, order=triggerMetadata"`
Tube string `keda:"name=tube, order=triggerMetadata"`
Value float64 `keda:"name=value, order=triggerMetadata"`
ActivationValue float64 `keda:"name=activationValue, order=triggerMetadata, optional"`
IncludeDelayed bool `keda:"name=includeDelayed, order=triggerMetadata, optional"`
Timeout uint `keda:"name=timeout, order=triggerMetadata, default=30"`
Server string `keda:"name=server, order=triggerMetadata"`
Tube string `keda:"name=tube, order=triggerMetadata"`
Value float64 `keda:"name=value, order=triggerMetadata"`
ActivationValue float64 `keda:"name=activationValue, order=triggerMetadata, optional"`
IncludeDelayed bool `keda:"name=includeDelayed, order=triggerMetadata, optional"`
Timeout time.Duration `keda:"name=timeout, order=triggerMetadata, default=30"`
TriggerIndex int
}
@@ -70,9 +70,7 @@ func NewBeanstalkdScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
}
s.metadata = meta
timeout := time.Duration(s.metadata.Timeout) * time.Second
conn, err := beanstalk.DialTimeout(beanstalkdNetworkProtocol, s.metadata.Server, timeout)
conn, err := beanstalk.DialTimeout(beanstalkdNetworkProtocol, s.metadata.Server, s.metadata.Timeout)
if err != nil {
return nil, fmt.Errorf("error connecting to beanstalkd: %w", err)
}

@@ -21,7 +21,7 @@ type cpuMemoryScaler struct {
}
type cpuMemoryMetadata struct {
Type string `keda:"name=type, order=triggerMetadata, enum=Utilization;AverageValue, optional, deprecatedAnnounce=The 'type' setting is DEPRECATED and will be removed in v2.18 - Use 'metricType' instead."`
Type string `keda:"name=type, order=triggerMetadata, enum=Utilization;AverageValue, optional, deprecated=The 'type' setting is DEPRECATED and is removed in v2.18 - Use 'metricType' instead."`
Value string `keda:"name=value, order=triggerMetadata"`
ContainerName string `keda:"name=containerName, order=triggerMetadata, optional"`
AverageValue *resource.Quantity
@@ -58,18 +58,6 @@ func parseResourceMetadata(config *scalersconfig.ScalerConfig) (cpuMemoryMetadat
meta.MetricType = config.MetricType
}
// This is deprecated and can be removed later
if meta.Type != "" {
switch meta.Type {
case "AverageValue":
meta.MetricType = v2.AverageValueMetricType
case "Utilization":
meta.MetricType = v2.UtilizationMetricType
default:
return meta, fmt.Errorf("unknown metric type: %s, allowed values are 'Utilization' or 'AverageValue'", meta.Type)
}
}
switch meta.MetricType {
case v2.AverageValueMetricType:
averageValueQuantity := resource.MustParse(meta.Value)

@@ -18,27 +18,20 @@ type parseCPUMemoryMetadataTestData struct {
}
var validCPUMemoryMetadata = map[string]string{
"type": "Utilization",
"value": "50",
}
var validContainerCPUMemoryMetadata = map[string]string{
"type": "Utilization",
"value": "50",
"containerName": "foo",
}
var testCPUMemoryMetadata = []parseCPUMemoryMetadataTestData{
{"", map[string]string{}, true},
{"", validCPUMemoryMetadata, false},
{"", validContainerCPUMemoryMetadata, false},
{"", map[string]string{"type": "Utilization", "value": "50"}, false},
{v2.UtilizationMetricType, map[string]string{"value": "50"}, false},
{"", map[string]string{"type": "AverageValue", "value": "50"}, false},
{v2.AverageValueMetricType, map[string]string{"value": "50"}, false},
{"", map[string]string{"type": "Value", "value": "50"}, true},
{v2.ValueMetricType, map[string]string{"value": "50"}, true},
{"", map[string]string{"type": "AverageValue"}, true},
{"", map[string]string{"type": "xxx", "value": "50"}, true},
{"", map[string]string{"value": ""}, true},
{"", map[string]string{}, true},
}
func TestCPUMemoryParseMetadata(t *testing.T) {
@@ -58,9 +51,9 @@ func TestCPUMemoryParseMetadata(t *testing.T) {
}
func TestGetMetricSpecForScaling(t *testing.T) {
// Using trigger.metadata.type field for type
config := &scalersconfig.ScalerConfig{
TriggerMetadata: validCPUMemoryMetadata,
MetricType: v2.UtilizationMetricType,
}
scaler, _ := NewCPUMemoryScaler(v1.ResourceCPU, config)
metricSpec := scaler.GetMetricSpecForScaling(context.Background())
@@ -68,24 +61,12 @@ func TestGetMetricSpecForScaling(t *testing.T) {
assert.Equal(t, metricSpec[0].Type, v2.ResourceMetricSourceType)
assert.Equal(t, metricSpec[0].Resource.Name, v1.ResourceCPU)
assert.Equal(t, metricSpec[0].Resource.Target.Type, v2.UtilizationMetricType)
// Using trigger.metricType field for type
config = &scalersconfig.ScalerConfig{
TriggerMetadata: map[string]string{"value": "50"},
MetricType: v2.UtilizationMetricType,
}
scaler, _ = NewCPUMemoryScaler(v1.ResourceCPU, config)
metricSpec = scaler.GetMetricSpecForScaling(context.Background())
assert.Equal(t, metricSpec[0].Type, v2.ResourceMetricSourceType)
assert.Equal(t, metricSpec[0].Resource.Name, v1.ResourceCPU)
assert.Equal(t, metricSpec[0].Resource.Target.Type, v2.UtilizationMetricType)
}
func TestGetContainerMetricSpecForScaling(t *testing.T) {
// Using trigger.metadata.type field for type
config := &scalersconfig.ScalerConfig{
TriggerMetadata: validContainerCPUMemoryMetadata,
MetricType: v2.UtilizationMetricType,
}
scaler, _ := NewCPUMemoryScaler(v1.ResourceCPU, config)
metricSpec := scaler.GetMetricSpecForScaling(context.Background())
@@ -94,17 +75,4 @@ func TestGetContainerMetricSpecForScaling(t *testing.T) {
assert.Equal(t, metricSpec[0].ContainerResource.Name, v1.ResourceCPU)
assert.Equal(t, metricSpec[0].ContainerResource.Target.Type, v2.UtilizationMetricType)
assert.Equal(t, metricSpec[0].ContainerResource.Container, validContainerCPUMemoryMetadata["containerName"])
// Using trigger.metricType field for type
config = &scalersconfig.ScalerConfig{
TriggerMetadata: map[string]string{"value": "50", "containerName": "bar"},
MetricType: v2.UtilizationMetricType,
}
scaler, _ = NewCPUMemoryScaler(v1.ResourceCPU, config)
metricSpec = scaler.GetMetricSpecForScaling(context.Background())
assert.Equal(t, metricSpec[0].Type, v2.ContainerResourceMetricSourceType)
assert.Equal(t, metricSpec[0].ContainerResource.Name, v1.ResourceCPU)
assert.Equal(t, metricSpec[0].ContainerResource.Target.Type, v2.UtilizationMetricType)
assert.Equal(t, metricSpec[0].ContainerResource.Container, "bar")
}

@@ -141,7 +141,7 @@ func (s *cronScaler) GetMetricsAndActivity(_ context.Context, metricName string)
isWithinInterval = currentTime.After(nextStartTime) || currentTime.Before(nextEndTime)
}
metricValue := float64(1)
metricValue := float64(0)
if isWithinInterval {
metricValue = float64(s.metadata.DesiredReplicas)
}
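The one-line cron change above means the metric now reports 0 outside the scheduled window instead of 1, so the HPA is no longer held at a minimum of one replica by the metric value itself. A minimal sketch of the resulting logic (the function name is illustrative, not KEDA's):

```go
package main

import "fmt"

// cronMetricValue mirrors the change: inside the window the metric
// equals desiredReplicas; outside it, it is now 0 (previously 1),
// allowing the workload to scale fully down between windows.
func cronMetricValue(withinInterval bool, desiredReplicas int64) float64 {
	if withinInterval {
		return float64(desiredReplicas)
	}
	return 0
}

func main() {
	fmt.Println(cronMetricValue(true, 10), cronMetricValue(false, 10)) // 10 0
}
```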

@@ -101,7 +101,7 @@ func TestGetMetrics(t *testing.T) {
if currentDay == "Thursday" {
assert.Equal(t, metrics[0].Value.Value(), int64(10))
} else {
assert.Equal(t, metrics[0].Value.Value(), int64(1))
assert.Equal(t, metrics[0].Value.Value(), int64(0))
}
}
@@ -112,7 +112,7 @@ func TestGetMetricsRange(t *testing.T) {
if currentHour%2 == 0 {
assert.Equal(t, metrics[0].Value.Value(), int64(10))
} else {
assert.Equal(t, metrics[0].Value.Value(), int64(1))
assert.Equal(t, metrics[0].Value.Value(), int64(0))
}
}

@@ -2,7 +2,6 @@ package scalers
import (
"context"
"errors"
"fmt"
"io"
"net/http"
@@ -30,48 +29,49 @@ type datadogScaler struct {
httpClient *http.Client
logger logr.Logger
useClusterAgentProxy bool
metricType v2.MetricTargetType
}
// TODO: Need to check whether we can deprecate vType and how should we proceed with it
type datadogMetadata struct {
// AuthParams Cluster Agent Proxy
datadogNamespace string
datadogMetricsService string
datadogMetricsServicePort int
unsafeSsl bool
DatadogNamespace string `keda:"name=datadogNamespace, order=authParams, optional"`
DatadogMetricsService string `keda:"name=datadogMetricsService, order=authParams, optional"`
DatadogMetricsServicePort int `keda:"name=datadogMetricsServicePort, order=authParams, default=8443"`
UnsafeSsl bool `keda:"name=unsafeSsl, order=authParams, default=false"`
// bearer auth Cluster Agent Proxy
enableBearerAuth bool
bearerToken string
AuthMode string `keda:"name=authMode, order=authParams, optional"`
EnableBearerAuth bool
BearerToken string `keda:"name=token, order=authParams, optional"`
// TriggerMetadata Cluster Agent Proxy
datadogMetricServiceURL string
datadogMetricName string
datadogMetricNamespace string
activationTargetValue float64
DatadogMetricServiceURL string
DatadogMetricName string `keda:"name=datadogMetricName, order=triggerMetadata, optional"`
DatadogMetricNamespace string `keda:"name=datadogMetricNamespace, order=triggerMetadata, optional"`
ActivationTargetValue float64 `keda:"name=activationTargetValue, order=triggerMetadata, default=0"`
// AuthParams Datadog API
apiKey string
appKey string
datadogSite string
APIKey string `keda:"name=apiKey, order=authParams, optional"`
AppKey string `keda:"name=appKey, order=authParams, optional"`
DatadogSite string `keda:"name=datadogSite, order=authParams, default=datadoghq.com"`
// TriggerMetadata Datadog API
query string
queryAggegrator string
activationQueryValue float64
age int
timeWindowOffset int
lastAvailablePointOffset int
Query string `keda:"name=query, order=triggerMetadata, optional"`
QueryAggegrator string `keda:"name=queryAggregator, order=triggerMetadata, optional, enum=average;max"`
ActivationQueryValue float64 `keda:"name=activationQueryValue, order=triggerMetadata, default=0"`
Age int `keda:"name=age, order=triggerMetadata, default=90"`
TimeWindowOffset int `keda:"name=timeWindowOffset, order=triggerMetadata, default=0"`
LastAvailablePointOffset int `keda:"name=lastAvailablePointOffset,order=triggerMetadata, default=0"`
// TriggerMetadata Common
hpaMetricName string
fillValue float64
targetValue float64
useFiller bool
HpaMetricName string `keda:"name=hpaMetricName, order=triggerMetadata, optional"`
FillValue float64 `keda:"name=metricUnavailableValue, order=triggerMetadata, default=0"`
UseFiller bool
TargetValue float64 `keda:"name=targetValue;queryValue, order=triggerMetadata, default=-1"`
vType v2.MetricTargetType
}
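The rewritten `datadogMetadata` struct above moves all scalar parsing into declarative `keda:"name=..., order=..., optional|default=..."` tags. A toy decoder illustrating how such a tag can drive reflection-based parsing — a sketch under the assumption that the real parser resolves `name`, `optional`, and `default` roughly this way (it also handles `order`, enums, and non-string types, which are omitted here):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// decode fills exported string fields of dst from src, driven by a
// `keda` tag. Toy sketch of the typed-config idea, not KEDA's code.
func decode(dst interface{}, src map[string]string) error {
	v := reflect.ValueOf(dst).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("keda")
		if tag == "" || v.Field(i).Kind() != reflect.String {
			continue
		}
		var name, def string
		optional := false
		for _, part := range strings.Split(tag, ",") {
			part = strings.TrimSpace(part)
			switch {
			case strings.HasPrefix(part, "name="):
				name = strings.TrimPrefix(part, "name=")
			case strings.HasPrefix(part, "default="):
				def = strings.TrimPrefix(part, "default=")
			case part == "optional":
				optional = true
			}
		}
		if val, ok := src[name]; ok {
			v.Field(i).SetString(val)
		} else if def != "" {
			v.Field(i).SetString(def)
		} else if !optional {
			return fmt.Errorf("missing required parameter %q", name)
		}
	}
	return nil
}

type meta struct {
	Site  string `keda:"name=datadogSite, order=authParams, default=datadoghq.com"`
	Query string `keda:"name=query, order=triggerMetadata, optional"`
}

func main() {
	m := &meta{}
	_ = decode(m, map[string]string{"query": "avg:system.cpu{*}"})
	fmt.Println(m.Site, m.Query) // datadoghq.com avg:system.cpu{*}
}
```

Declaring defaults and requiredness in one place is what lets the diff below delete the long hand-written `if val, ok := config.TriggerMetadata[...]` chains.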
const maxString = "max"
const avgString = "average"
var filter *regexp.Regexp
@@ -82,11 +82,14 @@ func init() {
// NewDatadogScaler creates a new Datadog scaler
func NewDatadogScaler(ctx context.Context, config *scalersconfig.ScalerConfig) (Scaler, error) {
metricType, err := GetMetricTargetType(config)
if err != nil {
return nil, fmt.Errorf("error getting scaler metric type: %w", err)
}
logger := InitializeLogger(config, "datadog_scaler")
var useClusterAgentProxy bool
var meta *datadogMetadata
var err error
var apiClient *datadog.APIClient
var httpClient *http.Client
@@ -102,7 +105,7 @@ func NewDatadogScaler(ctx context.Context, config *scalersconfig.ScalerConfig) (
if err != nil {
return nil, fmt.Errorf("error parsing Datadog metadata: %w", err)
}
httpClient = kedautil.CreateHTTPClient(config.GlobalHTTPTimeout, meta.unsafeSsl)
httpClient = kedautil.CreateHTTPClient(config.GlobalHTTPTimeout, meta.UnsafeSsl)
} else {
meta, err = parseDatadogAPIMetadata(config, logger)
if err != nil {
@@ -115,6 +118,7 @@ func NewDatadogScaler(ctx context.Context, config *scalersconfig.ScalerConfig) (
}
return &datadogScaler{
metricType: metricType,
metadata: meta,
apiClient: apiClient,
httpClient: httpClient,
@@ -144,113 +148,27 @@ func buildMetricURL(datadogClusterAgentURL, datadogMetricNamespace, datadogMetri
}
func parseDatadogAPIMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*datadogMetadata, error) {
meta := datadogMetadata{}
if val, ok := config.TriggerMetadata["age"]; ok {
age, err := strconv.Atoi(val)
if err != nil {
return nil, fmt.Errorf("age parsing error %w", err)
}
meta.age = age
if age < 0 {
return nil, fmt.Errorf("age should not be smaller than 0 seconds")
}
if age < 60 {
logger.Info("selecting a window smaller than 60 seconds can cause Datadog not finding a metric value for the query")
}
} else {
meta.age = 90 // Default window 90 seconds
meta := &datadogMetadata{}
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing Datadog metadata: %w", err)
}
if val, ok := config.TriggerMetadata["timeWindowOffset"]; ok {
timeWindowOffset, err := strconv.Atoi(val)
if err != nil {
return nil, fmt.Errorf("timeWindowOffset parsing error %w", err)
}
if timeWindowOffset < 0 {
return nil, fmt.Errorf("timeWindowOffset should not be smaller than 0 seconds")
}
meta.timeWindowOffset = timeWindowOffset
} else {
meta.timeWindowOffset = 0 // Default delay 0 seconds
if meta.Age < 60 {
logger.Info("selecting a window smaller than 60 seconds can cause Datadog not finding a metric value for the query")
}
if val, ok := config.TriggerMetadata["lastAvailablePointOffset"]; ok {
lastAvailablePointOffset, err := strconv.Atoi(val)
if err != nil {
return nil, fmt.Errorf("lastAvailablePointOffset parsing error %w", err)
}
if lastAvailablePointOffset < 0 {
return nil, fmt.Errorf("lastAvailablePointOffset should not be smaller than 0")
}
meta.lastAvailablePointOffset = lastAvailablePointOffset
} else {
meta.lastAvailablePointOffset = 0 // Default use the last point
if meta.AppKey == "" {
return nil, fmt.Errorf("error parsing Datadog metadata: missing AppKey")
}
if val, ok := config.TriggerMetadata["query"]; ok {
_, err := parseDatadogQuery(val)
if err != nil {
return nil, fmt.Errorf("error in query: %w", err)
}
meta.query = val
} else {
return nil, fmt.Errorf("no query given")
if meta.APIKey == "" {
return nil, fmt.Errorf("error parsing Datadog metadata: missing APIKey")
}
if val, ok := config.TriggerMetadata["targetValue"]; ok {
targetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("targetValue parsing error %w", err)
}
meta.targetValue = targetValue
} else if val, ok := config.TriggerMetadata["queryValue"]; ok {
targetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("queryValue parsing error %w", err)
}
meta.targetValue = targetValue
} else {
if meta.TargetValue == -1 {
if config.AsMetricSource {
meta.targetValue = 0
meta.TargetValue = 0
} else {
return nil, fmt.Errorf("no targetValue or queryValue given")
}
}
if val, ok := config.TriggerMetadata["queryAggregator"]; ok && val != "" {
queryAggregator := strings.ToLower(val)
switch queryAggregator {
case avgString, maxString:
meta.queryAggegrator = queryAggregator
default:
return nil, fmt.Errorf("queryAggregator value %s has to be one of '%s, %s'", queryAggregator, avgString, maxString)
}
} else {
meta.queryAggegrator = ""
}
meta.activationQueryValue = 0
if val, ok := config.TriggerMetadata["activationQueryValue"]; ok {
activationQueryValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("queryValue parsing error %w", err)
}
meta.activationQueryValue = activationQueryValue
}
if val, ok := config.TriggerMetadata["metricUnavailableValue"]; ok {
fillValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("metricUnavailableValue parsing error %w", err)
}
meta.fillValue = fillValue
meta.useFiller = true
}
if val, ok := config.TriggerMetadata["type"]; ok {
logger.V(0).Info("trigger.metadata.type is deprecated in favor of trigger.metricType")
if config.MetricType != "" {
@@ -272,115 +190,45 @@ func parseDatadogAPIMetadata(config *scalersconfig.ScalerConfig, logger logr.Log
}
meta.vType = metricType
}
if meta.Query == "" {
return nil, fmt.Errorf("error parsing Datadog metadata: missing Query")
}
if val, ok := config.AuthParams["apiKey"]; ok {
meta.apiKey = val
if meta.Query != "" {
meta.HpaMetricName = meta.Query[0:strings.Index(meta.Query, "{")]
meta.HpaMetricName = GenerateMetricNameWithIndex(config.TriggerIndex, kedautil.NormalizeString(fmt.Sprintf("datadog-%s", meta.HpaMetricName)))
} else {
return nil, fmt.Errorf("no api key given")
meta.HpaMetricName = "datadogmetric@" + meta.DatadogMetricNamespace + ":" + meta.DatadogMetricName
}
if val, ok := config.AuthParams["appKey"]; ok {
meta.appKey = val
} else {
return nil, fmt.Errorf("no app key given")
}
siteVal := "datadoghq.com"
if val, ok := config.AuthParams["datadogSite"]; ok && val != "" {
siteVal = val
}
meta.datadogSite = siteVal
hpaMetricName := meta.query[0:strings.Index(meta.query, "{")]
meta.hpaMetricName = GenerateMetricNameWithIndex(config.TriggerIndex, kedautil.NormalizeString(fmt.Sprintf("datadog-%s", hpaMetricName)))
return &meta, nil
return meta, nil
}
func parseDatadogClusterAgentMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*datadogMetadata, error) {
meta := datadogMetadata{}
if val, ok := config.AuthParams["datadogNamespace"]; ok {
meta.datadogNamespace = val
} else {
return nil, fmt.Errorf("no datadogNamespace key given")
meta := &datadogMetadata{}
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing Datadog metadata: %w", err)
}
if meta.DatadogMetricsService == "" {
return nil, fmt.Errorf("datadog metrics service is required")
}
if val, ok := config.AuthParams["datadogMetricsService"]; ok {
meta.datadogMetricsService = val
} else {
return nil, fmt.Errorf("no datadogMetricsService key given")
if meta.DatadogMetricName == "" {
return nil, fmt.Errorf("datadog metric name is required")
}
if val, ok := config.AuthParams["datadogMetricsServicePort"]; ok {
port, err := strconv.Atoi(val)
if err != nil {
return nil, fmt.Errorf("datadogMetricServicePort parsing error %w", err)
}
meta.datadogMetricsServicePort = port
} else {
meta.datadogMetricsServicePort = 8443
if meta.DatadogNamespace == "" {
return nil, fmt.Errorf("datadog namespace is required")
}
meta.datadogMetricServiceURL = buildClusterAgentURL(meta.datadogMetricsService, meta.datadogNamespace, meta.datadogMetricsServicePort)
meta.unsafeSsl = false
if val, ok := config.AuthParams["unsafeSsl"]; ok {
unsafeSsl, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("error parsing unsafeSsl: %w", err)
}
meta.unsafeSsl = unsafeSsl
if meta.DatadogMetricNamespace == "" {
return nil, fmt.Errorf("datadog metric namespace is required")
}
if val, ok := config.TriggerMetadata["datadogMetricName"]; ok {
meta.datadogMetricName = val
} else {
return nil, fmt.Errorf("no datadogMetricName key given")
}
if val, ok := config.TriggerMetadata["datadogMetricNamespace"]; ok {
meta.datadogMetricNamespace = val
} else {
return nil, fmt.Errorf("no datadogMetricNamespace key given")
}
meta.hpaMetricName = "datadogmetric@" + meta.datadogMetricNamespace + ":" + meta.datadogMetricName
if val, ok := config.TriggerMetadata["targetValue"]; ok {
targetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("targetValue parsing error %w", err)
}
meta.targetValue = targetValue
} else {
if meta.TargetValue == -1 {
if config.AsMetricSource {
meta.targetValue = 0
meta.TargetValue = 0
} else {
return nil, fmt.Errorf("no targetValue given")
return nil, fmt.Errorf("no targetValue or queryValue given")
}
}
meta.activationTargetValue = 0
if val, ok := config.TriggerMetadata["activationTargetValue"]; ok {
activationTargetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("activationTargetValue parsing error %w", err)
}
meta.activationTargetValue = activationTargetValue
}
if val, ok := config.TriggerMetadata["metricUnavailableValue"]; ok {
fillValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("metricUnavailableValue parsing error %w", err)
}
meta.fillValue = fillValue
meta.useFiller = true
}
if val, ok := config.TriggerMetadata["type"]; ok {
logger.V(0).Info("trigger.metadata.type is deprecated in favor of trigger.metricType")
if config.MetricType != "" {
@@ -402,27 +250,11 @@ func parseDatadogClusterAgentMetadata(config *scalersconfig.ScalerConfig, logger
}
meta.vType = metricType
}
meta.HpaMetricName = "datadogmetric@" + meta.DatadogMetricNamespace + ":" + meta.DatadogMetricName
authMode, ok := config.AuthParams["authMode"]
// no authMode specified
if !ok {
return &meta, nil
}
meta.DatadogMetricServiceURL = buildClusterAgentURL(meta.DatadogMetricsService, meta.DatadogNamespace, meta.DatadogMetricsServicePort)
authType := authentication.Type(strings.TrimSpace(authMode))
switch authType {
case authentication.BearerAuthType:
if len(config.AuthParams["token"]) == 0 {
return nil, errors.New("no token provided")
}
meta.bearerToken = config.AuthParams["token"]
meta.enableBearerAuth = true
default:
return nil, fmt.Errorf("err incorrect value for authMode is given: %s", authMode)
}
return &meta, nil
return meta, nil
}
// newDatadogAPIConnection tests a connection to the Datadog API
@@ -432,10 +264,10 @@ func newDatadogAPIConnection(ctx context.Context, meta *datadogMetadata, config
datadog.ContextAPIKeys,
map[string]datadog.APIKey{
"apiKeyAuth": {
Key: meta.apiKey,
Key: meta.APIKey,
},
"appKeyAuth": {
Key: meta.appKey,
Key: meta.AppKey,
},
},
)
@@ -443,7 +275,7 @@ func newDatadogAPIConnection(ctx context.Context, meta *datadogMetadata, config
ctx = context.WithValue(ctx,
datadog.ContextServerVariables,
map[string]string{
"site": meta.datadogSite,
"site": meta.DatadogSite,
})
configuration := datadog.NewConfiguration()
@@ -473,10 +305,10 @@ func (s *datadogScaler) getQueryResult(ctx context.Context) (float64, error) {
datadog.ContextAPIKeys,
map[string]datadog.APIKey{
"apiKeyAuth": {
Key: s.metadata.apiKey,
Key: s.metadata.APIKey,
},
"appKeyAuth": {
Key: s.metadata.appKey,
Key: s.metadata.AppKey,
},
},
)
@@ -484,12 +316,12 @@ func (s *datadogScaler) getQueryResult(ctx context.Context) (float64, error) {
ctx = context.WithValue(ctx,
datadog.ContextServerVariables,
map[string]string{
"site": s.metadata.datadogSite,
"site": s.metadata.DatadogSite,
})
timeWindowTo := time.Now().Unix() - int64(s.metadata.timeWindowOffset)
timeWindowFrom := timeWindowTo - int64(s.metadata.age)
resp, r, err := s.apiClient.MetricsApi.QueryMetrics(ctx, timeWindowFrom, timeWindowTo, s.metadata.query) //nolint:bodyclose
timeWindowTo := time.Now().Unix() - int64(s.metadata.TimeWindowOffset)
timeWindowFrom := timeWindowTo - int64(s.metadata.Age)
resp, r, err := s.apiClient.MetricsApi.QueryMetrics(ctx, timeWindowFrom, timeWindowTo, s.metadata.Query) //nolint:bodyclose
if r != nil {
if r.StatusCode == 429 {
@@ -522,14 +354,14 @@ func (s *datadogScaler) getQueryResult(ctx context.Context) (float64, error) {
series := resp.GetSeries()
if len(series) == 0 {
if !s.metadata.useFiller {
if !s.metadata.UseFiller {
return 0, fmt.Errorf("no Datadog metrics returned for the given time window")
}
return s.metadata.fillValue, nil
return s.metadata.FillValue, nil
}
// Require queryAggregator be set explicitly for multi-query
if len(series) > 1 && s.metadata.queryAggegrator == "" {
if len(series) > 1 && s.metadata.QueryAggegrator == "" {
return 0, fmt.Errorf("query returned more than 1 series; modify the query to return only 1 series or add a queryAggregator")
}
@@ -545,22 +377,22 @@ func (s *datadogScaler) getQueryResult(ctx context.Context) (float64, error) {
break
}
}
if index < s.metadata.lastAvailablePointOffset {
if index < s.metadata.LastAvailablePointOffset {
return 0, fmt.Errorf("index is smaller than the lastAvailablePointOffset")
}
index -= s.metadata.lastAvailablePointOffset
index -= s.metadata.LastAvailablePointOffset
if len(points) == 0 || len(points[index]) < 2 || points[index][1] == nil {
if !s.metadata.useFiller {
if !s.metadata.UseFiller {
return 0, fmt.Errorf("no Datadog metrics returned for the given time window")
}
return s.metadata.fillValue, nil
return s.metadata.FillValue, nil
}
// Return the last point from the series
results[i] = *points[index][1]
}
switch s.metadata.queryAggegrator {
switch s.metadata.QueryAggegrator {
case avgString:
return AvgFloatFromSlice(results), nil
default:
@@ -608,12 +440,12 @@ func (s *datadogScaler) getDatadogClusterAgentHTTPRequest(ctx context.Context, u
var err error
switch {
case s.metadata.enableBearerAuth:
case s.metadata.EnableBearerAuth:
req, err = http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, err
}
req.Header.Add("Authorization", fmt.Sprintf("Bearer %s", s.metadata.bearerToken))
req.Header.Add("Authorization", fmt.Sprintf("Bearer %s", s.metadata.BearerToken))
if err != nil {
return nil, err
}
@@ -633,9 +465,9 @@ func (s *datadogScaler) getDatadogClusterAgentHTTPRequest(ctx context.Context, u
func (s *datadogScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: s.metadata.hpaMetricName,
Name: s.metadata.HpaMetricName,
},
Target: GetMetricTargetMili(s.metadata.vType, s.metadata.targetValue),
Target: GetMetricTargetMili(s.metricType, s.metadata.TargetValue),
}
metricSpec := v2.MetricSpec{
External: externalMetric, Type: externalMetricType,
@@ -650,7 +482,7 @@ func (s *datadogScaler) GetMetricsAndActivity(ctx context.Context, metricName st
var err error
if s.useClusterAgentProxy {
url := buildMetricURL(s.metadata.datadogMetricServiceURL, s.metadata.datadogMetricNamespace, s.metadata.hpaMetricName)
url := buildMetricURL(s.metadata.DatadogMetricServiceURL, s.metadata.DatadogMetricNamespace, s.metadata.HpaMetricName)
req, err := s.getDatadogClusterAgentHTTPRequest(ctx, url)
if (err != nil) || (req == nil) {
@@ -663,7 +495,7 @@ func (s *datadogScaler) GetMetricsAndActivity(ctx context.Context, metricName st
}
metric = GenerateMetricInMili(metricName, num)
return []external_metrics.ExternalMetricValue{metric}, num > s.metadata.activationTargetValue, nil
return []external_metrics.ExternalMetricValue{metric}, num > s.metadata.ActivationTargetValue, nil
}
num, err = s.getQueryResult(ctx)
if err != nil {
@@ -672,7 +504,7 @@ func (s *datadogScaler) GetMetricsAndActivity(ctx context.Context, metricName st
}
metric = GenerateMetricInMili(metricName, num)
return []external_metrics.ExternalMetricValue{metric}, num > s.metadata.activationQueryValue, nil
return []external_metrics.ExternalMetricValue{metric}, num > s.metadata.ActivationQueryValue, nil
}
// AvgFloatFromSlice finds the average value in a slice of floats
@@ -683,3 +515,37 @@ func AvgFloatFromSlice(results []float64) float64 {
}
return total / float64(len(results))
}
func (s *datadogMetadata) Validate() error {
if s.Age < 0 {
return fmt.Errorf("age should not be smaller than 0 seconds")
}
if s.TimeWindowOffset < 0 {
return fmt.Errorf("timeWindowOffset should not be smaller than 0 seconds")
}
if s.LastAvailablePointOffset < 0 {
return fmt.Errorf("lastAvailablePointOffset should not be smaller than 0")
}
if s.Query != "" {
if _, err := parseDatadogQuery(s.Query); err != nil {
return fmt.Errorf("error in query: %w", err)
}
}
if s.FillValue == 0 {
s.UseFiller = false
}
if s.AuthMode != "" {
authType := authentication.Type(strings.TrimSpace(s.AuthMode))
switch authType {
case authentication.BearerAuthType:
if s.BearerToken == "" {
return fmt.Errorf("BearerToken is required")
}
s.EnableBearerAuth = true
default:
return fmt.Errorf("err incorrect value for authMode is given: %s", s.AuthMode)
}
}
return nil
}
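The new `Validate` method above collects the cross-field checks that the per-field `keda` tags cannot express; the typed-config parser is assumed to invoke it after decoding. A stripped-down, self-contained sketch of the same pattern (`metadata` here is a stand-in for `datadogMetadata`, with only a few of its fields):

```go
package main

import (
	"errors"
	"fmt"
)

// metadata is a stand-in for datadogMetadata; Validate runs the
// cross-field rules after decoding and may also derive fields,
// as the real method does with EnableBearerAuth.
type metadata struct {
	Age              int
	AuthMode         string
	Token            string
	EnableBearerAuth bool
}

func (m *metadata) Validate() error {
	if m.Age < 0 {
		return errors.New("age should not be smaller than 0 seconds")
	}
	if m.AuthMode == "bearer" {
		if m.Token == "" {
			return errors.New("BearerToken is required")
		}
		m.EnableBearerAuth = true
	}
	return nil
}

func main() {
	m := &metadata{Age: 90, AuthMode: "bearer", Token: "secret"}
	fmt.Println(m.Validate(), m.EnableBearerAuth) // <nil> true
}
```

Keeping validation on the metadata type means both `parseDatadogAPIMetadata` and `parseDatadogClusterAgentMetadata` share the same rules instead of duplicating them, which is what the large deletions in this file accomplish.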

@@ -2,7 +2,9 @@ package scalers
import (
"context"
"fmt"
"slices"
"strings"
"testing"
"github.com/go-logr/logr"
@@ -107,6 +109,7 @@ var testDatadogClusterAgentMetadata = []datadogAuthMetadataTestData{
// Default Datadog service name and port
{"", map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "nginx-hits", "datadogMetricNamespace": "default", "targetValue": "2", "type": "global"}, map[string]string{"token": "token", "datadogNamespace": "datadog", "datadogMetricsService": "datadog-cluster-agent-metrics-api", "unsafeSsl": "true", "authMode": "bearer"}, false},
// TODO: Fix this failed test case
// both metadata type and trigger type
{v2.AverageValueMetricType, map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "nginx-hits", "datadogMetricNamespace": "default", "targetValue": "2", "type": "global"}, map[string]string{"token": "token", "datadogNamespace": "datadog", "datadogMetricsService": "datadog-cluster-agent-metrics-api", "unsafeSsl": "true", "authMode": "bearer"}, true},
// missing DatadogMetric name
@@ -119,6 +122,10 @@ var testDatadogClusterAgentMetadata = []datadogAuthMetadataTestData{
{"", map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "nginx-hits", "datadogMetricNamespace": "default", "targetValue": "notanint", "type": "global"}, map[string]string{"token": "token", "datadogNamespace": "datadog", "datadogMetricsService": "datadog-cluster-agent-metrics-api", "datadogMetricsServicePort": "8080", "unsafeSsl": "true", "authMode": "bearer"}, true},
// wrong type
{"", map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "nginx-hits", "datadogMetricNamespace": "default", "targetValue": "2", "type": "notatype"}, map[string]string{"token": "token", "datadogNamespace": "datadog", "datadogMetricsService": "datadog-cluster-agent-metrics-api", "datadogMetricsServicePort": "8080", "unsafeSsl": "true", "authMode": "bearer"}, true},
// Test case with different datadogNamespace and datadogMetricNamespace to ensure the correct namespace is used in URL
{"", map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "test-metric", "datadogMetricNamespace": "application-metrics", "targetValue": "10"}, map[string]string{"token": "test-token", "datadogNamespace": "datadog-system", "datadogMetricsService": "datadog-cluster-agent-metrics-api", "datadogMetricsServicePort": "8443", "authMode": "bearer"}, false},
// Test case with custom service name and port to verify URL building
{"", map[string]string{"useClusterAgentProxy": "true", "datadogMetricName": "custom-metric", "datadogMetricNamespace": "prod-metrics", "targetValue": "5"}, map[string]string{"token": "test-token", "datadogNamespace": "monitoring", "datadogMetricsService": "custom-datadog-service", "datadogMetricsServicePort": "9443", "authMode": "bearer"}, false},
}
var testDatadogAPIMetadata = []datadogAuthMetadataTestData{
@@ -167,27 +174,53 @@ }
}
func TestDatadogScalerAPIAuthParams(t *testing.T) {
for _, testData := range testDatadogAPIMetadata {
for idx, testData := range testDatadogAPIMetadata {
_, err := parseDatadogAPIMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: testData.authParams, MetricType: testData.metricType}, logr.Discard())
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
t.Errorf("Expected success but got error: %s for test case %d", err, idx)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
t.Errorf("Expected error but got success for test case %d", idx)
}
}
}
func TestDatadogScalerClusterAgentAuthParams(t *testing.T) {
for _, testData := range testDatadogClusterAgentMetadata {
_, err := parseDatadogClusterAgentMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: testData.authParams, MetricType: testData.metricType}, logr.Discard())
for idx, testData := range testDatadogClusterAgentMetadata {
meta, err := parseDatadogClusterAgentMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: testData.authParams, MetricType: testData.metricType}, logr.Discard())
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
t.Errorf("Expected success but got error: %s for test case %d", err, idx)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
t.Errorf("Expected error but got success for test case %d", idx)
}
// Additional validation for URL building when we have valid metadata
// This validates that datadogNamespace is used correctly in URL building (issue #6769)
if !testData.isError && meta != nil {
datadogNamespace := testData.authParams["datadogNamespace"]
datadogMetricNamespace := testData.metadata["datadogMetricNamespace"]
if datadogNamespace != "" && datadogMetricNamespace != "" {
// Verify that the URL contains the service namespace (datadogNamespace), not the metric namespace
if !strings.Contains(meta.DatadogMetricServiceURL, datadogNamespace) {
t.Errorf("Test case %d: DatadogMetricServiceURL should contain datadogNamespace '%s', but got %s", idx, datadogNamespace, meta.DatadogMetricServiceURL)
}
// When namespaces are different, ensure metric namespace is NOT used in the service URL
if datadogNamespace != datadogMetricNamespace {
datadogMetricsService := testData.authParams["datadogMetricsService"]
datadogMetricsServicePort := testData.authParams["datadogMetricsServicePort"]
incorrectURL := fmt.Sprintf("https://%s.%s:%s/apis/external.metrics.k8s.io/v1beta1",
datadogMetricsService, datadogMetricNamespace, datadogMetricsServicePort)
if meta.DatadogMetricServiceURL == incorrectURL {
t.Errorf("Test case %d: Bug detected - DatadogMetricServiceURL incorrectly uses datadogMetricNamespace instead of datadogNamespace. Got %s", idx, meta.DatadogMetricServiceURL)
}
}
}
}
}
}
@ -198,11 +231,12 @@ var datadogMetricIdentifiers = []datadogMetricIdentifier{
{&testDatadogClusterAgentMetadata[1], clusterAgentType, 0, "datadogmetric@default:nginx-hits"},
}
// TODO: Check whether this test case needs rewriting, since vType has long been deprecated
func TestDatadogGetMetricSpecForScaling(t *testing.T) {
var err error
var meta *datadogMetadata
for _, testData := range datadogMetricIdentifiers {
for idx, testData := range datadogMetricIdentifiers {
if testData.typeOfScaler == apiType {
meta, err = parseDatadogAPIMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: testData.metadataTestData.authParams, TriggerIndex: testData.triggerIndex, MetricType: testData.metadataTestData.metricType}, logr.Discard())
} else {
@ -221,7 +255,7 @@ func TestDatadogGetMetricSpecForScaling(t *testing.T) {
metricSpec := mockDatadogScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name
if metricName != testData.name {
t.Error("Wrong External metric source name:", metricName)
t.Errorf("Wrong External metric source name:%s for test case %d", metricName, idx)
}
}
}


@ -2,8 +2,9 @@ package scalers
import (
"context"
"errors"
"fmt"
"strconv"
"io"
"strings"
"sync"
"time"
@ -34,14 +35,17 @@ type externalPushScaler struct {
}
type externalScalerMetadata struct {
scalerAddress string
originalMetadata map[string]string
triggerIndex int
caCert string
tlsClientCert string
tlsClientKey string
enableTLS bool
unsafeSsl bool
ScalerAddress string `keda:"name=scalerAddress, order=triggerMetadata"`
EnableTLS bool `keda:"name=enableTLS, order=triggerMetadata, optional"`
UnsafeSsl bool `keda:"name=unsafeSsl, order=triggerMetadata, optional"`
// auth
CaCert string `keda:"name=caCert, order=authParams, optional"`
TLSClientCert string `keda:"name=tlsClientCert, order=authParams, optional"`
TLSClientKey string `keda:"name=tlsClientKey, order=authParams, optional"`
}
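The `keda:"…"` struct tags above drive KEDA's reflection-based `TypedConfig` parser. As a rough sketch of the tag convention only (a simplified stand-in, not KEDA's actual implementation — it handles just `name=`, `optional`, and string/bool fields):

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

// parseTagged fills struct fields from a metadata map using a simplified
// reading of the `keda:"name=..., order=..., optional"` tag convention.
func parseTagged(dst interface{}, meta map[string]string) error {
	v := reflect.ValueOf(dst).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("keda")
		if tag == "" {
			continue
		}
		var name string
		optional := false
		for _, part := range strings.Split(tag, ",") {
			part = strings.TrimSpace(part)
			switch {
			case strings.HasPrefix(part, "name="):
				name = strings.TrimPrefix(part, "name=")
			case part == "optional":
				optional = true
			}
		}
		raw, ok := meta[name]
		if !ok || raw == "" {
			if optional {
				continue // optional fields may be absent
			}
			return fmt.Errorf("missing required field %q", name)
		}
		switch v.Field(i).Kind() {
		case reflect.String:
			v.Field(i).SetString(raw)
		case reflect.Bool:
			b, err := strconv.ParseBool(raw)
			if err != nil {
				return fmt.Errorf("invalid bool for %q: %w", name, err)
			}
			v.Field(i).SetBool(b)
		}
	}
	return nil
}

type demoMeta struct {
	ScalerAddress string `keda:"name=scalerAddress, order=triggerMetadata"`
	EnableTLS     bool   `keda:"name=enableTLS, order=triggerMetadata, optional"`
}

func main() {
	var m demoMeta
	err := parseTagged(&m, map[string]string{"scalerAddress": "svc:9090", "enableTLS": "true"})
	fmt.Println(err, m.ScalerAddress, m.EnableTLS)
}
```

Declarative tags replace the hand-rolled map lookups and `strconv.ParseBool` calls deleted in the hunk below.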
type connectionGroup struct {
@ -105,46 +109,13 @@ func NewExternalPushScaler(config *scalersconfig.ScalerConfig) (PushScaler, erro
}
func parseExternalScalerMetadata(config *scalersconfig.ScalerConfig) (externalScalerMetadata, error) {
meta := externalScalerMetadata{
originalMetadata: config.TriggerMetadata,
}
// Check if scalerAddress is present
if val, ok := config.TriggerMetadata["scalerAddress"]; ok && val != "" {
meta.scalerAddress = val
} else {
return meta, fmt.Errorf("scaler Address is a required field")
meta := externalScalerMetadata{}
meta.triggerIndex = config.TriggerIndex
if err := config.TypedConfig(&meta); err != nil {
return meta, fmt.Errorf("error parsing external scaler metadata: %w", err)
}
meta.originalMetadata = make(map[string]string)
if val, ok := config.AuthParams["caCert"]; ok {
meta.caCert = val
}
if val, ok := config.AuthParams["tlsClientCert"]; ok {
meta.tlsClientCert = val
}
if val, ok := config.AuthParams["tlsClientKey"]; ok {
meta.tlsClientKey = val
}
meta.unsafeSsl = false
if val, ok := config.TriggerMetadata["unsafeSsl"]; ok && val != "" {
boolVal, err := strconv.ParseBool(val)
if err != nil {
return meta, fmt.Errorf("failed to parse insecureSkipVerify value. Must be either true or false")
}
meta.unsafeSsl = boolVal
}
if val, ok := config.TriggerMetadata["enableTLS"]; ok && val != "" {
boolVal, err := strconv.ParseBool(val)
if err != nil {
return meta, fmt.Errorf("failed to parse enableTLS value. Must be either true or false")
}
meta.enableTLS = boolVal
}
// Add elements to metadata
for key, value := range config.TriggerMetadata {
// Check if key is in resolved environment and resolve
if strings.HasSuffix(key, "FromEnv") {
@ -155,7 +126,7 @@ func parseExternalScalerMetadata(config *scalersconfig.ScalerConfig) (externalSc
meta.originalMetadata[key] = value
}
}
meta.triggerIndex = config.TriggerIndex
return meta, nil
}
@ -249,27 +220,35 @@ func (s *externalScaler) GetMetricsAndActivity(ctx context.Context, metricName s
// handleIsActiveStream is the only writer to the active channel and will close it on return.
func (s *externalPushScaler) Run(ctx context.Context, active chan<- bool) {
defer close(active)
// retry on error from runWithLog() starting by 2 sec backing off * 2 with a max of 2 minutes
retryDuration := time.Second * 2
// It's possible for the connection to get terminated anytime, we need to run this in a retry loop
runWithLog := func() {
grpcClient, err := getClientForConnectionPool(s.metadata)
if err != nil {
s.logger.Error(err, "error running internalRun")
s.logger.Error(err, "unable to get connection from the pool")
return
}
if err := handleIsActiveStream(ctx, &s.scaledObjectRef, grpcClient, active); err != nil {
s.logger.Error(err, "error running internalRun")
if !errors.Is(err, io.EOF) { // If io.EOF is returned, the stream has terminated with an OK status
s.logger.Error(err, "error running internalRun")
return
}
// if the connection is properly closed, we reset the timer
retryDuration = time.Second * 2
return
}
}
// retry on error from runWithLog(), starting at 2 sec and backing off *2 with a max of 2 minutes
retryDuration := time.Second * 2
// the caller of this function needs to ensure that they call Stop() on the resulting
// timer, to release background resources.
retryBackoff := func() *time.Timer {
tmr := time.NewTimer(retryDuration)
s.logger.V(1).Info("external push retry backoff", "duration", retryDuration)
retryDuration *= 2
if retryDuration > time.Minute*1 {
if retryDuration > time.Minute {
retryDuration = time.Minute * 1
}
return tmr
@ -317,26 +296,26 @@ func getClientForConnectionPool(metadata externalScalerMetadata) (pb.ExternalSca
defer connectionPoolMutex.Unlock()
buildGRPCConnection := func(metadata externalScalerMetadata) (*grpc.ClientConn, error) {
tlsConfig, err := util.NewTLSConfig(metadata.tlsClientCert, metadata.tlsClientKey, metadata.caCert, metadata.unsafeSsl)
tlsConfig, err := util.NewTLSConfig(metadata.TLSClientCert, metadata.TLSClientKey, metadata.CaCert, metadata.UnsafeSsl)
if err != nil {
return nil, err
}
if metadata.enableTLS || len(tlsConfig.Certificates) > 0 || metadata.caCert != "" {
if metadata.EnableTLS || len(tlsConfig.Certificates) > 0 || metadata.CaCert != "" {
// nosemgrep: go.grpc.ssrf.grpc-tainted-url-host.grpc-tainted-url-host
return grpc.NewClient(metadata.scalerAddress,
return grpc.NewClient(metadata.ScalerAddress,
grpc.WithDefaultServiceConfig(grpcConfig),
grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)))
}
return grpc.NewClient(metadata.scalerAddress,
return grpc.NewClient(metadata.ScalerAddress,
grpc.WithDefaultServiceConfig(grpcConfig),
grpc.WithTransportCredentials(insecure.NewCredentials()))
}
// create a unique key per-metadata. If scaledObjects share the same connection properties
// in the metadata, they will share the same grpc.ClientConn
key, err := hashstructure.Hash(metadata.scalerAddress, nil)
key, err := hashstructure.Hash(metadata.ScalerAddress, nil)
if err != nil {
return nil, err
}


@ -80,11 +80,11 @@ func TestExternalScalerParseMetadata(t *testing.T) {
t.Error("Expected success but got error", err)
}
if testData.metadata["unsafeSsl"] == "true" && !metadata.unsafeSsl {
t.Error("Expected unsafeSsl to be true but got", metadata.unsafeSsl)
if testData.metadata["unsafeSsl"] == "true" && !metadata.UnsafeSsl {
t.Error("Expected unsafeSsl to be true but got", metadata.UnsafeSsl)
}
if testData.metadata["enableTLS"] == "true" && !metadata.enableTLS {
t.Error("Expected enableTLS to be true but got", metadata.enableTLS)
if testData.metadata["enableTLS"] == "true" && !metadata.EnableTLS {
t.Error("Expected enableTLS to be true but got", metadata.EnableTLS)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
@ -133,14 +133,16 @@ func TestExternalPushScaler_Run(t *testing.T) {
defer cancel()
for {
<-time.After(time.Second * 1)
if resultCount == serverCount*iterationCount {
t.Logf("resultCount == %d", resultCount)
currentCount := atomic.LoadInt64(&resultCount)
if currentCount == serverCount*iterationCount {
t.Logf("resultCount == %d", currentCount)
return
}
retries++
if retries > 10 {
t.Fatalf("Expected resultCount to be %d after %d retries, but got %d", serverCount*iterationCount, retries, resultCount)
currentCount = atomic.LoadInt64(&resultCount)
t.Fatalf("Expected resultCount to be %d after %d retries, but got %d", serverCount*iterationCount, retries, currentCount)
return
}
}


@ -26,17 +26,17 @@ const (
// keep that value to not break the behaviour
// We need to revisit this in KEDA v3
// https://github.com/kedacore/keda/issues/5429
defaultTimeHorizon = "2m"
defaultTimeHorizon = 2 * time.Minute
// Visualization of aggregation window:
// aggregationTimeHorizon: [- - - - -]
// alignmentPeriod: [- - -][- - -] (may shift slightly left or right arbitrarily)
// For aggregations, a shorter time horizon may not return any data
aggregationTimeHorizon = "5m"
aggregationTimeHorizon = 5 * time.Minute
// To prevent the aggregation window from being too big,
// which may result in the data being stale for too long
alignmentPeriod = "3m"
alignmentPeriod = 3 * time.Minute
// Not all aggregations are meaningful for distribution metrics,
// so we only support a subset of them
@ -347,9 +347,9 @@ func getActualProjectID(s *StackDriverClient, projectID string) string {
// | align delta(3m)
// | every 3m
// | group_by [], count(value)
func (s StackDriverClient) BuildMQLQuery(projectID, resourceType, metric, resourceName, aggregation, timeHorizon string) (string, error) {
func (s StackDriverClient) BuildMQLQuery(projectID, resourceType, metric, resourceName, aggregation string, timeHorizon time.Duration) (string, error) {
th := timeHorizon
if th == "" {
if time.Duration(0) >= timeHorizon {
th = defaultTimeHorizon
if aggregation != "" {
th = aggregationTimeHorizon


@ -2,6 +2,7 @@ package gcp
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
@ -13,48 +14,48 @@ func TestBuildMQLQuery(t *testing.T) {
metric string
resourceName string
aggregation string
timeHorizon string
timeHorizon time.Duration
expected string
isError bool
}{
{
"topic with aggregation",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "count", "1m",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "count", time.Minute,
"fetch pubsub_topic | metric 'pubsub.googleapis.com/topic/x' | filter (resource.project_id == 'myproject' && resource.topic_id == 'mytopic')" +
" | within 1m | align delta(3m) | every 3m | group_by [], count(value)",
" | within 1m0s | align delta(3m0s) | every 3m0s | group_by [], count(value)",
false,
},
{
"topic without aggregation",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "", "",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "", time.Duration(0),
"fetch pubsub_topic | metric 'pubsub.googleapis.com/topic/x' | filter (resource.project_id == 'myproject' && resource.topic_id == 'mytopic')" +
" | within 2m",
" | within 2m0s",
false,
},
{
"subscription with aggregation",
"subscription", "pubsub.googleapis.com/subscription/x", "mysubscription", "percentile99", "",
"subscription", "pubsub.googleapis.com/subscription/x", "mysubscription", "percentile99", time.Duration(0),
"fetch pubsub_subscription | metric 'pubsub.googleapis.com/subscription/x' | filter (resource.project_id == 'myproject' && resource.subscription_id == 'mysubscription')" +
" | within 5m | align delta(3m) | every 3m | group_by [], percentile(value, 99)",
" | within 5m0s | align delta(3m0s) | every 3m0s | group_by [], percentile(value, 99)",
false,
},
{
"subscription without aggregation",
"subscription", "pubsub.googleapis.com/subscription/x", "mysubscription", "", "4m",
"subscription", "pubsub.googleapis.com/subscription/x", "mysubscription", "", time.Minute * 4,
"fetch pubsub_subscription | metric 'pubsub.googleapis.com/subscription/x' | filter (resource.project_id == 'myproject' && resource.subscription_id == 'mysubscription')" +
" | within 4m",
" | within 4m0s",
false,
},
{
"invalid percentile",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "percentile101", "1m",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "percentile101", time.Minute,
"invalid percentile value: 101",
true,
},
{
"unsupported aggregation function",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "max", "",
"topic", "pubsub.googleapis.com/topic/x", "mytopic", "max", time.Duration(0),
"unsupported aggregation function: max",
true,
},


@ -31,7 +31,7 @@ type gcpCloudTasksMetricIdentifier struct {
var testGcpCloudTasksMetadata = []parseGcpCloudTasksMetadataTestData{
{map[string]string{}, map[string]string{}, true, nil, "erro case"},
{map[string]string{}, map[string]string{}, true, nil, "error case"},
{nil, map[string]string{"queueName": "myQueue", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS", "projectID": "myproject", "activationValue": "5"}, false, &gcpCloudTaskMetadata{
Value: 7,


@ -2,11 +2,10 @@ package scalers
import (
"context"
"errors"
"fmt"
"regexp"
"strconv"
"strings"
"time"
"github.com/go-logr/logr"
v2 "k8s.io/api/autoscaling/v2"
@ -24,8 +23,8 @@ const (
resourceTypePubSubSubscription = "subscription"
resourceTypePubSubTopic = "topic"
pubSubModeSubscriptionSize = "SubscriptionSize"
pubSubDefaultValue = 10
pubSubDefaultModeSubscriptionSize = "SubscriptionSize"
pubSubDefaultValue = 10
)
var regexpCompositeSubscriptionIDPrefix = regexp.MustCompile(compositeSubscriptionIDPrefix)
@ -38,18 +37,20 @@ type pubsubScaler struct {
}
type pubsubMetadata struct {
mode string
value float64
activationValue float64
SubscriptionSize int `keda:"name=subscriptionSize, order=triggerMetadata, optional, deprecatedAnnounce=The 'subscriptionSize' setting is DEPRECATED and will be removed in v2.20 - Use 'mode' and 'value' instead"`
Mode string `keda:"name=mode, order=triggerMetadata, default=SubscriptionSize"`
Value float64 `keda:"name=value, order=triggerMetadata, default=10"`
ActivationValue float64 `keda:"name=activationValue, order=triggerMetadata, default=0"`
Aggregation string `keda:"name=aggregation, order=triggerMetadata, optional"`
TimeHorizon time.Duration `keda:"name=timeHorizon, order=triggerMetadata, optional"`
ValueIfNull *float64 `keda:"name=valueIfNull, order=triggerMetadata, optional"`
SubscriptionName string `keda:"name=subscriptionName, order=triggerMetadata;resolvedEnv, optional"`
TopicName string `keda:"name=topicName, order=triggerMetadata;resolvedEnv, optional"`
// a resource is one of subscription or topic
resourceType string
resourceName string
gcpAuthorization *gcp.AuthorizationMetadata
triggerIndex int
aggregation string
timeHorizon string
valueIfNull *float64
}
// NewPubSubScaler creates a new pubsubScaler
@ -59,9 +60,7 @@ func NewPubSubScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
return nil, fmt.Errorf("error getting scaler metric type: %w", err)
}
logger := InitializeLogger(config, "gcp_pub_sub_scaler")
meta, err := parsePubSubMetadata(config, logger)
meta, err := parsePubSubMetadata(config)
if err != nil {
return nil, fmt.Errorf("error parsing PubSub metadata: %w", err)
}
@ -69,150 +68,37 @@ func NewPubSubScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
return &pubsubScaler{
metricType: metricType,
metadata: meta,
logger: logger,
}, nil
}
func parsePubSubResourceConfig(config *scalersconfig.ScalerConfig, meta *pubsubMetadata) error {
sub, subPresent := config.TriggerMetadata["subscriptionName"]
subFromEnv, subFromEnvPresent := config.TriggerMetadata["subscriptionNameFromEnv"]
if subPresent && subFromEnvPresent {
return fmt.Errorf("exactly one of subscriptionName or subscriptionNameFromEnv is allowed")
}
hasSub := subPresent || subFromEnvPresent
func parsePubSubMetadata(config *scalersconfig.ScalerConfig) (*pubsubMetadata, error) {
meta := &pubsubMetadata{}
topic, topicPresent := config.TriggerMetadata["topicName"]
topicFromEnv, topicFromEnvPresent := config.TriggerMetadata["topicNameFromEnv"]
if topicPresent && topicFromEnvPresent {
return fmt.Errorf("exactly one of topicName or topicNameFromEnv is allowed")
}
hasTopic := topicPresent || topicFromEnvPresent
if (!hasSub && !hasTopic) || (hasSub && hasTopic) {
return fmt.Errorf("exactly one of subscription or topic name must be given")
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing gcp pubsub metadata: %w", err)
}
if hasSub {
if subPresent {
if sub == "" {
return fmt.Errorf("no subscription name given")
}
meta.resourceName = sub
} else {
if subFromEnv == "" {
return fmt.Errorf("no environment variable name given for resolving subscription name")
}
resolvedSub, ok := config.ResolvedEnv[subFromEnv]
if !ok {
return fmt.Errorf("resolved environment doesn't contain name '%s'", subFromEnv)
}
if resolvedSub == "" {
return fmt.Errorf("resolved environment subscription name is empty")
}
meta.resourceName = config.ResolvedEnv[subFromEnv]
}
if meta.SubscriptionSize != 0 {
meta.Mode = pubSubDefaultModeSubscriptionSize
meta.Value = float64(meta.SubscriptionSize)
}
if meta.SubscriptionName != "" {
meta.resourceName = meta.SubscriptionName
meta.resourceType = resourceTypePubSubSubscription
} else {
if topicPresent {
if topic == "" {
return fmt.Errorf("no topic name given")
}
meta.resourceName = topic
} else {
if topicFromEnv == "" {
return fmt.Errorf("no environment variable name given for resolving topic name")
}
resolvedTopic, ok := config.ResolvedEnv[topicFromEnv]
if !ok {
return fmt.Errorf("resolved environment doesn't contain name '%s'", topicFromEnv)
}
if resolvedTopic == "" {
return fmt.Errorf("resolved environment topic name is empty")
}
meta.resourceName = config.ResolvedEnv[topicFromEnv]
}
meta.resourceName = meta.TopicName
meta.resourceType = resourceTypePubSubTopic
}
return nil
}
func parsePubSubMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*pubsubMetadata, error) {
meta := pubsubMetadata{mode: pubSubModeSubscriptionSize, value: pubSubDefaultValue}
mode, modePresent := config.TriggerMetadata["mode"]
value, valuePresent := config.TriggerMetadata["value"]
if subSize, subSizePresent := config.TriggerMetadata["subscriptionSize"]; subSizePresent {
if modePresent || valuePresent {
return nil, errors.New("you can use either mode and value fields or subscriptionSize field")
}
if _, topicPresent := config.TriggerMetadata["topicName"]; topicPresent {
return nil, errors.New("you cannot use subscriptionSize field together with topicName field. Use subscriptionName field instead")
}
logger.Info("subscriptionSize field is deprecated. Use mode and value fields instead")
subSizeValue, err := strconv.ParseFloat(subSize, 64)
if err != nil {
return nil, fmt.Errorf("value parsing error %w", err)
}
meta.value = subSizeValue
} else {
if modePresent {
meta.mode = mode
}
if valuePresent {
triggerValue, err := strconv.ParseFloat(value, 64)
if err != nil {
return nil, fmt.Errorf("value parsing error %w", err)
}
meta.value = triggerValue
}
}
if val, ok := config.TriggerMetadata["valueIfNull"]; ok && val != "" {
valueIfNull, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("valueIfNull parsing error %w", err)
}
meta.valueIfNull = &valueIfNull
}
meta.aggregation = config.TriggerMetadata["aggregation"]
meta.timeHorizon = config.TriggerMetadata["timeHorizon"]
err := parsePubSubResourceConfig(config, &meta)
if err != nil {
return nil, err
}
meta.activationValue = 0
if val, ok := config.TriggerMetadata["activationValue"]; ok {
activationValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("activationValue parsing error %w", err)
}
meta.activationValue = activationValue
}
auth, err := gcp.GetGCPAuthorization(config)
if err != nil {
return nil, err
}
meta.gcpAuthorization = auth
meta.triggerIndex = config.TriggerIndex
return &meta, nil
return meta, nil
}
func (s *pubsubScaler) Close(context.Context) error {
@ -226,13 +112,29 @@ func (s *pubsubScaler) Close(context.Context) error {
return nil
}
func (meta *pubsubMetadata) Validate() error {
if meta.SubscriptionSize != 0 {
if meta.TopicName != "" {
return fmt.Errorf("you cannot use subscriptionSize field together with topicName field. Use subscriptionName field instead")
}
}
hasSub := meta.SubscriptionName != ""
hasTopic := meta.TopicName != ""
if (!hasSub && !hasTopic) || (hasSub && hasTopic) {
return fmt.Errorf("exactly one of subscription or topic name must be given")
}
return nil
}
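The exactly-one-of check in `Validate` is an XOR on field presence — `(!hasSub && !hasTopic) || (hasSub && hasTopic)` is equivalent to `hasSub == hasTopic`. The same logic extracted as a standalone sketch (`validateTarget` is a hypothetical name, not part of the scaler):

```go
package main

import (
	"errors"
	"fmt"
)

// validateTarget enforces that exactly one of the two names is set.
func validateTarget(sub, topic string) error {
	hasSub := sub != ""
	hasTopic := topic != ""
	if hasSub == hasTopic { // both set, or both empty
		return errors.New("exactly one of subscription or topic name must be given")
	}
	return nil
}

func main() {
	fmt.Println(validateTarget("mysubscription", "")) // <nil>
	fmt.Println(validateTarget("", ""))               // error: neither given
	fmt.Println(validateTarget("mysub", "mytopic"))   // error: both given
}
```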
// GetMetricSpecForScaling returns the metric spec for the HPA
func (s *pubsubScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("gcp-ps-%s", s.metadata.resourceName))),
},
Target: GetMetricTargetMili(s.metricType, s.metadata.value),
Target: GetMetricTargetMili(s.metricType, s.metadata.Value),
}
// Create the metric spec for the HPA
@ -246,11 +148,11 @@ func (s *pubsubScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec
// GetMetricsAndActivity connects to Stack Driver and finds the size of the pub sub subscription
func (s *pubsubScaler) GetMetricsAndActivity(ctx context.Context, metricName string) ([]external_metrics.ExternalMetricValue, bool, error) {
mode := s.metadata.mode
mode := s.metadata.Mode
// SubscriptionSize is actually NumUndeliveredMessages in GCP PubSub.
// Considering backward compatibility, fallback "SubscriptionSize" to "NumUndeliveredMessages"
if mode == pubSubModeSubscriptionSize {
if mode == pubSubDefaultModeSubscriptionSize {
mode = "NumUndeliveredMessages"
}
@ -264,7 +166,7 @@ func (s *pubsubScaler) GetMetricsAndActivity(ctx context.Context, metricName str
metric := GenerateMetricInMili(metricName, value)
return []external_metrics.ExternalMetricValue{metric}, value > s.metadata.activationValue, nil
return []external_metrics.ExternalMetricValue{metric}, value > s.metadata.ActivationValue, nil
}
func (s *pubsubScaler) setStackdriverClient(ctx context.Context) error {
@ -292,7 +194,7 @@ func (s *pubsubScaler) getMetrics(ctx context.Context, metricType string) (float
}
resourceID, projectID := getResourceData(s)
query, err := s.client.BuildMQLQuery(
projectID, s.metadata.resourceType, metricType, resourceID, s.metadata.aggregation, s.metadata.timeHorizon,
projectID, s.metadata.resourceType, metricType, resourceID, s.metadata.Aggregation, s.metadata.TimeHorizon,
)
if err != nil {
return -1, err
@ -300,7 +202,7 @@ func (s *pubsubScaler) getMetrics(ctx context.Context, metricType string) (float
// Pubsub metrics are collected every 60 seconds so no need to aggregate them.
// See: https://cloud.google.com/monitoring/api/metrics_gcp#gcp-pubsub
return s.client.QueryMetrics(ctx, projectID, query, s.metadata.valueIfNull)
return s.client.QueryMetrics(ctx, projectID, query, s.metadata.ValueIfNull)
}
func getResourceData(s *pubsubScaler) (string, string) {


@ -16,6 +16,7 @@ var testPubSubResolvedEnv = map[string]string{
}
type parsePubSubMetadataTestData struct {
testName string
authParams map[string]string
metadata map[string]string
isError bool
@ -35,57 +36,57 @@ type gcpPubSubSubscription struct {
}
var testPubSubMetadata = []parsePubSubMetadataTestData{
{map[string]string{}, map[string]string{}, true},
{"empty", map[string]string{}, map[string]string{}, true},
// all properly formed with deprecated field
{nil, map[string]string{"subscriptionName": "mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"all properly formed with deprecated field", nil, map[string]string{"subscriptionName": "mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// all properly formed with subscriptionName
{nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "5"}, false},
{"all properly formed with subscriptionName", nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "5"}, false},
// all properly formed with oldest unacked message age mode
{nil, map[string]string{"subscriptionName": "mysubscription", "mode": "OldestUnackedMessageAge", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"all properly formed with oldest unacked message age mode", nil, map[string]string{"subscriptionName": "mysubscription", "mode": "OldestUnackedMessageAge", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// missing subscriptionName
{nil, map[string]string{"subscriptionName": "", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"missing subscriptionName", nil, map[string]string{"subscriptionName": "", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// missing credentials
{nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": ""}, true},
{"missing credentials", nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": ""}, true},
// malformed subscriptionSize
{nil, map[string]string{"subscriptionName": "mysubscription", "value": "AA", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"malformed subscriptionSize", nil, map[string]string{"subscriptionName": "mysubscription", "value": "AA", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// malformed mode
{nil, map[string]string{"subscriptionName": "", "mode": "AA", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"malformed mode", nil, map[string]string{"subscriptionName": "", "mode": "AA", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// malformed activationTargetValue
{nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "AA"}, true},
{"malformed activationTargetValue", nil, map[string]string{"subscriptionName": "mysubscription", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "AA"}, true},
// Credentials from AuthParams
{map[string]string{"GoogleApplicationCredentials": "Creds"}, map[string]string{"subscriptionName": "mysubscription", "value": "7"}, false},
{"Credentials from AuthParams", map[string]string{"GoogleApplicationCredentials": "Creds"}, map[string]string{"subscriptionName": "mysubscription", "value": "7"}, false},
// Credentials from AuthParams with empty creds
{map[string]string{"GoogleApplicationCredentials": ""}, map[string]string{"subscriptionName": "mysubscription", "subscriptionSize": "7"}, true},
{"Credentials from AuthParams with empty creds", map[string]string{"GoogleApplicationCredentials": ""}, map[string]string{"subscriptionName": "mysubscription", "subscriptionSize": "7"}, true},
// with full link to subscription
{nil, map[string]string{"subscriptionName": "projects/myproject/subscriptions/mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"with full link to subscription", nil, map[string]string{"subscriptionName": "projects/myproject/subscriptions/mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// with full (bad) link to subscription
{nil, map[string]string{"subscriptionName": "projects/myproject/mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"with full (bad) link to subscription", nil, map[string]string{"subscriptionName": "projects/myproject/mysubscription", "subscriptionSize": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// properly formed float value and activationTargetValue
{nil, map[string]string{"subscriptionName": "mysubscription", "value": "7.1", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "2.1"}, false},
{"properly formed float value and activationTargetValue", nil, map[string]string{"subscriptionName": "mysubscription", "value": "7.1", "credentialsFromEnv": "SAMPLE_CREDS", "activationValue": "2.1"}, false},
// All optional omitted
{nil, map[string]string{"subscriptionName": "mysubscription", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"All optional omitted", nil, map[string]string{"subscriptionName": "mysubscription", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// value omitted when mode present
{nil, map[string]string{"subscriptionName": "mysubscription", "mode": "SubscriptionSize", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"value omitted when mode present", nil, map[string]string{"subscriptionName": "mysubscription", "mode": "SubscriptionSize", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// all properly formed with topicName
{nil, map[string]string{"topicName": "mytopic", "mode": "MessageSizes", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"all properly formed with topicName", nil, map[string]string{"topicName": "mytopic", "mode": "MessageSizes", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// with full link to topic
{nil, map[string]string{"topicName": "projects/myproject/topics/mytopic", "mode": "MessageSizes", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"with full link to topic", nil, map[string]string{"topicName": "projects/myproject/topics/mytopic", "mode": "MessageSizes", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// with full (bad) link to topic
{nil, map[string]string{"topicName": "projects/myproject/mytopic", "mode": "MessageSizes", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"with full (bad) link to topic", nil, map[string]string{"topicName": "projects/myproject/mytopic", "mode": "MessageSizes", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// both subscriptionName and topicName present
{nil, map[string]string{"subscriptionName": "mysubscription", "topicName": "mytopic", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"both subscriptionName and topicName present", nil, map[string]string{"subscriptionName": "mysubscription", "topicName": "mytopic", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// both subscriptionName and topicName missing
{nil, map[string]string{"value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"both subscriptionName and topicName missing", nil, map[string]string{"value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// both subscriptionSize and topicName present
{nil, map[string]string{"subscriptionSize": "7", "topicName": "mytopic", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"both subscriptionSize and topicName present", nil, map[string]string{"subscriptionSize": "7", "topicName": "mytopic", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// both subscriptionName and subscriptionNameFromEnv present
{nil, map[string]string{"subscriptionName": "mysubscription", "subscriptionNameFromEnv": "MY_ENV_SUBSCRIPTION", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"both subscriptionName and subscriptionNameFromEnv present", nil, map[string]string{"subscriptionName": "mysubscription", "subscriptionNameFromEnv": "MY_ENV_SUBSCRIPTION", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// both topicName and topicNameFromEnv present
{nil, map[string]string{"topicName": "mytopic", "topicNameFromEnv": "MY_ENV_TOPIC", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
{"both topicName and topicNameFromEnv present", nil, map[string]string{"topicName": "mytopic", "topicNameFromEnv": "MY_ENV_TOPIC", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// subscriptionNameFromEnv present
{nil, map[string]string{"subscriptionNameFromEnv": "MY_ENV_SUBSCRIPTION", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"subscriptionNameFromEnv present", nil, map[string]string{"subscriptionNameFromEnv": "MY_ENV_SUBSCRIPTION", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// topicNameFromEnv present
{nil, map[string]string{"topicNameFromEnv": "MY_ENV_TOPIC", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
{"topicNameFromEnv present", nil, map[string]string{"topicNameFromEnv": "MY_ENV_TOPIC", "value": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, false},
}
var gcpPubSubMetricIdentifiers = []gcpPubSubMetricIdentifier{
@ -111,59 +112,67 @@ var gcpSubscriptionDefaults = []gcpPubSubSubscription{
func TestPubSubParseMetadata(t *testing.T) {
for _, testData := range testPubSubMetadata {
_, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.authParams, TriggerMetadata: testData.metadata, ResolvedEnv: testPubSubResolvedEnv}, logr.Discard())
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
}
t.Run(testData.testName, func(t *testing.T) {
_, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.authParams, TriggerMetadata: testData.metadata, ResolvedEnv: testPubSubResolvedEnv})
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
}
})
}
}
func TestPubSubMetadataDefaultValues(t *testing.T) {
for _, testData := range gcpSubscriptionDefaults {
metaData, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.metadataTestData.authParams, TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv}, logr.Discard())
if err != nil {
t.Error("Expected success but got error", err)
}
if pubSubModeSubscriptionSize != metaData.mode {
t.Errorf(`Expected mode "%s" but got "%s"`, pubSubModeSubscriptionSize, metaData.mode)
}
if pubSubDefaultValue != metaData.value {
t.Errorf(`Expected value "%d" but got "%f"`, pubSubDefaultValue, metaData.value)
}
t.Run(testData.metadataTestData.testName, func(t *testing.T) {
metaData, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.metadataTestData.authParams, TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv})
if err != nil {
t.Error("Expected success but got error", err)
}
if pubSubDefaultModeSubscriptionSize != metaData.Mode {
t.Errorf(`Expected mode "%s" but got "%s"`, pubSubDefaultModeSubscriptionSize, metaData.Mode)
}
if pubSubDefaultValue != metaData.Value {
t.Errorf(`Expected value "%d" but got "%f"`, pubSubDefaultValue, metaData.Value)
}
})
}
}
func TestGcpPubSubGetMetricSpecForScaling(t *testing.T) {
for _, testData := range gcpPubSubMetricIdentifiers {
meta, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv, TriggerIndex: testData.triggerIndex}, logr.Discard())
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockGcpPubSubScaler := pubsubScaler{nil, "", meta, logr.Discard()}
t.Run(testData.metadataTestData.testName, func(t *testing.T) {
meta, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv, TriggerIndex: testData.triggerIndex})
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockGcpPubSubScaler := pubsubScaler{nil, "", meta, logr.Discard()}
metricSpec := mockGcpPubSubScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name
if metricName != testData.name {
t.Error("Wrong External metric source name:", metricName)
}
metricSpec := mockGcpPubSubScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name
if metricName != testData.name {
t.Error("Wrong External metric source name:", metricName)
}
})
}
}
func TestGcpPubSubResourceName(t *testing.T) {
for _, testData := range gcpResourceNameTests {
meta, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv, TriggerIndex: testData.triggerIndex}, logr.Discard())
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockGcpPubSubScaler := pubsubScaler{nil, "", meta, logr.Discard()}
resourceID, projectID := getResourceData(&mockGcpPubSubScaler)
t.Run(testData.metadataTestData.testName, func(t *testing.T) {
meta, err := parsePubSubMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testPubSubResolvedEnv, TriggerIndex: testData.triggerIndex})
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockGcpPubSubScaler := pubsubScaler{nil, "", meta, logr.Discard()}
resourceID, projectID := getResourceData(&mockGcpPubSubScaler)
if resourceID != testData.name || projectID != testData.projectID {
t.Error("Wrong Resource parsing:", resourceID, projectID)
}
if resourceID != testData.name || projectID != testData.projectID {
t.Error("Wrong Resource parsing:", resourceID, projectID)
}
})
}
}
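The change above converts the GCP Pub/Sub tests to named, table-driven subtests. A minimal sketch of that pattern (a plain loop stands in for `t.Run` so the example runs as a program; `runCase` and the `strconv.Atoi` parse are illustrative stand-ins, not the real scaler code):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseCase carries a name so a failure reports the exact scenario,
// which is what the testName field buys in the real tests.
type parseCase struct {
	name    string
	input   string
	isError bool
}

// runCase mirrors the assertion shape used in TestPubSubParseMetadata:
// unexpected errors and unexpected successes are both failures.
func runCase(tc parseCase) string {
	_, err := strconv.Atoi(tc.input)
	switch {
	case err != nil && !tc.isError:
		return tc.name + ": expected success but got error"
	case tc.isError && err == nil:
		return tc.name + ": expected error but got success"
	default:
		return tc.name + ": ok"
	}
}

func main() {
	cases := []parseCase{
		{"valid value", "7", false},
		{"malformed value", "abc", true},
	}
	for _, tc := range cases {
		fmt.Println(runCase(tc))
	}
}
```

In the real tests each case body runs inside `t.Run(tc.name, func(t *testing.T) { … })`, so `go test -run TestPubSubParseMetadata/valid_value` can target a single scenario.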


@ -15,10 +15,6 @@ import (
kedautil "github.com/kedacore/keda/v2/pkg/util"
)
const (
defaultStackdriverTargetValue = 5
)
type stackdriverScaler struct {
client *gcp.StackDriverClient
metricType v2.MetricTargetType
@ -27,13 +23,13 @@ type stackdriverScaler struct {
}
type stackdriverMetadata struct {
projectID string
filter string
targetValue float64
activationTargetValue float64
ProjectID string `keda:"name=projectId, order=triggerMetadata"`
Filter string `keda:"name=filter, order=triggerMetadata"`
TargetValue float64 `keda:"name=targetValue, order=triggerMetadata, default=5"`
ActivationTargetValue float64 `keda:"name=activationTargetValue, order=triggerMetadata, default=0"`
metricName string
valueIfNull *float64
filterDuration int64
ValueIfNull *float64 `keda:"name=valueIfNull, order=triggerMetadata, optional"`
FilterDuration int64 `keda:"name=filterDuration, order=triggerMetadata, optional"`
gcpAuthorization *gcp.AuthorizationMetadata
aggregation *monitoringpb.Aggregation
@ -68,67 +64,15 @@ func NewStackdriverScaler(ctx context.Context, config *scalersconfig.ScalerConfi
}
func parseStackdriverMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*stackdriverMetadata, error) {
meta := stackdriverMetadata{}
meta.targetValue = defaultStackdriverTargetValue
meta := &stackdriverMetadata{}
if val, ok := config.TriggerMetadata["projectId"]; ok {
if val == "" {
return nil, fmt.Errorf("no projectId name given")
}
meta.projectID = val
} else {
return nil, fmt.Errorf("no projectId name given")
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing Stackdriver metadata: %w", err)
}
if val, ok := config.TriggerMetadata["filter"]; ok {
if val == "" {
return nil, fmt.Errorf("no filter given")
}
meta.filter = val
} else {
return nil, fmt.Errorf("no filter given")
}
name := kedautil.NormalizeString(fmt.Sprintf("gcp-stackdriver-%s", meta.projectID))
name := kedautil.NormalizeString(fmt.Sprintf("gcp-stackdriver-%s", meta.ProjectID))
meta.metricName = GenerateMetricNameWithIndex(config.TriggerIndex, name)
if val, ok := config.TriggerMetadata["targetValue"]; ok {
targetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
logger.Error(err, "Error parsing targetValue")
return nil, fmt.Errorf("error parsing targetValue: %w", err)
}
meta.targetValue = targetValue
}
meta.activationTargetValue = 0
if val, ok := config.TriggerMetadata["activationTargetValue"]; ok {
activationTargetValue, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("activationTargetValue parsing error %w", err)
}
meta.activationTargetValue = activationTargetValue
}
if val, ok := config.TriggerMetadata["valueIfNull"]; ok && val != "" {
valueIfNull, err := strconv.ParseFloat(val, 64)
if err != nil {
return nil, fmt.Errorf("valueIfNull parsing error %w", err)
}
meta.valueIfNull = &valueIfNull
}
if val, ok := config.TriggerMetadata["filterDuration"]; ok {
filterDuration, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return nil, fmt.Errorf("filterDuration parsing error %w", err)
}
meta.filterDuration = filterDuration
}
auth, err := gcp.GetGCPAuthorization(config)
if err != nil {
return nil, err
@ -141,7 +85,7 @@ func parseStackdriverMetadata(config *scalersconfig.ScalerConfig, logger logr.Lo
}
meta.aggregation = aggregation
return &meta, nil
return meta, nil
}
func parseAggregation(config *scalersconfig.ScalerConfig, logger logr.Logger) (*monitoringpb.Aggregation, error) {
@ -199,7 +143,7 @@ func (s *stackdriverScaler) GetMetricSpecForScaling(context.Context) []v2.Metric
Metric: v2.MetricIdentifier{
Name: s.metadata.metricName,
},
Target: GetMetricTargetMili(s.metricType, s.metadata.targetValue),
Target: GetMetricTargetMili(s.metricType, s.metadata.TargetValue),
}
// Create the metric spec for the HPA
@ -221,17 +165,17 @@ func (s *stackdriverScaler) GetMetricsAndActivity(ctx context.Context, metricNam
metric := GenerateMetricInMili(metricName, value)
return []external_metrics.ExternalMetricValue{metric}, value > s.metadata.activationTargetValue, nil
return []external_metrics.ExternalMetricValue{metric}, value > s.metadata.ActivationTargetValue, nil
}
// getMetrics gets metric type value from stackdriver api
func (s *stackdriverScaler) getMetrics(ctx context.Context) (float64, error) {
val, err := s.client.GetMetrics(ctx, s.metadata.filter, s.metadata.projectID, s.metadata.aggregation, s.metadata.valueIfNull, s.metadata.filterDuration)
val, err := s.client.GetMetrics(ctx, s.metadata.Filter, s.metadata.ProjectID, s.metadata.aggregation, s.metadata.ValueIfNull, s.metadata.FilterDuration)
if err == nil {
s.logger.V(1).Info(
fmt.Sprintf("Getting metrics for project %s, filter %s and aggregation %v. Result: %f",
s.metadata.projectID,
s.metadata.filter,
s.metadata.ProjectID,
s.metadata.Filter,
s.metadata.aggregation,
val))
}
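The refactor above replaces hand-written `strconv` parsing with `keda:"name=…, default=…"` struct tags consumed by `config.TypedConfig`. A minimal reflection sketch of how such tag-driven parsing can work (this is an illustration of the idea, not KEDA's actual `TypedConfig` implementation; `parseTyped` and the two-field `metadata` struct are hypothetical):

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

// metadata mimics the declarative style of stackdriverMetadata:
// each field's tag names its key and an optional default.
type metadata struct {
	ProjectID   string  `keda:"name=projectId"`
	TargetValue float64 `keda:"name=targetValue,default=5"`
}

// parseTyped walks the struct tags, pulls values out of the raw map,
// applies defaults, and errors on missing required keys.
func parseTyped(raw map[string]string, out any) error {
	v := reflect.ValueOf(out).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("keda")
		if tag == "" {
			continue
		}
		var name, def string
		hasDef := false
		for _, part := range strings.Split(tag, ",") {
			part = strings.TrimSpace(part)
			if s, ok := strings.CutPrefix(part, "name="); ok {
				name = s
			}
			if s, ok := strings.CutPrefix(part, "default="); ok {
				def, hasDef = s, true
			}
		}
		val, ok := raw[name]
		if !ok {
			if !hasDef {
				return fmt.Errorf("missing required key %q", name)
			}
			val = def
		}
		switch v.Field(i).Kind() {
		case reflect.String:
			v.Field(i).SetString(val)
		case reflect.Float64:
			f, err := strconv.ParseFloat(val, 64)
			if err != nil {
				return fmt.Errorf("parsing %q: %w", name, err)
			}
			v.Field(i).SetFloat(f)
		}
	}
	return nil
}

func main() {
	m := metadata{}
	if err := parseTyped(map[string]string{"projectId": "myProject"}, &m); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(m.ProjectID, m.TargetValue)
}
```

The payoff is visible in the diff: roughly fifty lines of per-field `if val, ok := config.TriggerMetadata[…]` boilerplate collapse into one `TypedConfig` call plus the tags themselves.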


@ -2,10 +2,12 @@ package scalers
import (
"context"
"reflect"
"testing"
"github.com/go-logr/logr"
"github.com/kedacore/keda/v2/pkg/scalers/gcp"
"github.com/kedacore/keda/v2/pkg/scalers/scalersconfig"
)
@ -17,81 +19,194 @@ type parseStackdriverMetadataTestData struct {
authParams map[string]string
metadata map[string]string
isError bool
}
type gcpStackdriverMetricIdentifier struct {
metadataTestData *parseStackdriverMetadataTestData
triggerIndex int
name string
expected *stackdriverMetadata
comment string
}
var sdFilter = "metric.type=\"storage.googleapis.com/storage/object_count\" resource.type=\"gcs_bucket\""
var testStackdriverMetadata = []parseStackdriverMetadataTestData{
{map[string]string{}, map[string]string{}, true},
// all properly formed
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "targetValue": "7", "credentialsFromEnv": "SAMPLE_CREDS", "activationTargetValue": "5"}, false},
// all required properly formed
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS"}, false},
// missing projectId
{nil, map[string]string{"filter": sdFilter, "targetValue": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// missing filter
{nil, map[string]string{"projectId": "myProject", "targetValue": "7", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// missing credentials
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "targetValue": "7"}, true},
// malformed targetValue
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "targetValue": "aa", "credentialsFromEnv": "SAMPLE_CREDS"}, true},
// malformed activationTargetValue
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "activationTargetValue": "a"}, true},
// Credentials from AuthParams
{map[string]string{"GoogleApplicationCredentials": "Creds"}, map[string]string{"projectId": "myProject", "filter": sdFilter}, false},
// Credentials from AuthParams with empty creds
{map[string]string{"GoogleApplicationCredentials": ""}, map[string]string{"projectId": "myProject", "filter": sdFilter}, true},
// With aggregation info
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "alignmentPeriodSeconds": "120", "alignmentAligner": "sum", "alignmentReducer": "percentile_99"}, false},
// With minimal aggregation info
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "alignmentPeriodSeconds": "120"}, false},
// With too short alignment period
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "alignmentPeriodSeconds": "30"}, true},
// With bad alignment period
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "alignmentPeriodSeconds": "a"}, true},
// properly formed float targetValue and activationTargetValue
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "targetValue": "1.1", "activationTargetValue": "2.1"}, false},
// properly formed float valueIfNull
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "targetValue": "1.1", "activationTargetValue": "2.1", "valueIfNull": "1.0"}, false},
// With bad valueIfNull
{nil, map[string]string{"projectId": "myProject", "filter": sdFilter, "credentialsFromEnv": "SAMPLE_CREDS", "targetValue": "1.1", "activationTargetValue": "2.1", "valueIfNull": "toto"}, true},
}
var gcpStackdriverMetricIdentifiers = []gcpStackdriverMetricIdentifier{
{&testStackdriverMetadata[1], 0, "s0-gcp-stackdriver-myProject"},
{&testStackdriverMetadata[1], 1, "s1-gcp-stackdriver-myProject"},
{
authParams: map[string]string{},
metadata: map[string]string{},
isError: true,
expected: nil,
comment: "error case - empty metadata",
},
{
authParams: nil,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
"targetValue": "7",
"credentialsFromEnv": "SAMPLE_CREDS",
"activationTargetValue": "5",
},
isError: false,
expected: &stackdriverMetadata{
ProjectID: "myProject",
Filter: sdFilter,
TargetValue: 7,
ActivationTargetValue: 5,
metricName: "s0-gcp-stackdriver-myProject",
gcpAuthorization: &gcp.AuthorizationMetadata{
GoogleApplicationCredentials: "{}",
PodIdentityProviderEnabled: false,
},
},
comment: "all properly formed",
},
{
authParams: nil,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
"credentialsFromEnv": "SAMPLE_CREDS",
},
isError: false,
expected: &stackdriverMetadata{
ProjectID: "myProject",
Filter: sdFilter,
TargetValue: 5,
ActivationTargetValue: 0,
metricName: "s0-gcp-stackdriver-myProject",
gcpAuthorization: &gcp.AuthorizationMetadata{
GoogleApplicationCredentials: "{}",
PodIdentityProviderEnabled: false,
},
},
comment: "required fields only with defaults",
},
{
authParams: nil,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
"credentialsFromEnv": "SAMPLE_CREDS",
"valueIfNull": "1.5",
},
isError: false,
expected: &stackdriverMetadata{
ProjectID: "myProject",
Filter: sdFilter,
TargetValue: 5,
ActivationTargetValue: 0,
metricName: "s0-gcp-stackdriver-myProject",
ValueIfNull: func() *float64 { v := 1.5; return &v }(),
gcpAuthorization: &gcp.AuthorizationMetadata{
GoogleApplicationCredentials: "{}",
PodIdentityProviderEnabled: false,
},
},
comment: "with valueIfNull configuration",
},
{
authParams: nil,
metadata: map[string]string{
"filter": sdFilter,
"credentialsFromEnv": "SAMPLE_CREDS",
},
isError: true,
expected: nil,
comment: "error case - missing projectId",
},
{
authParams: nil,
metadata: map[string]string{
"projectId": "myProject",
"credentialsFromEnv": "SAMPLE_CREDS",
},
isError: true,
expected: nil,
comment: "error case - missing filter",
},
{
authParams: nil,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
},
isError: true,
expected: nil,
comment: "error case - missing credentials",
},
}
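The `ValueIfNull: func() *float64 { v := 1.5; return &v }()` case above uses an inline closure to take the address of a literal. Since Go 1.18 a small generic helper can express the same thing more readably (a sketch; `ptr` is a hypothetical helper, not part of this codebase):

```go
package main

import "fmt"

// ptr returns a pointer to any value, handy for optional metadata
// fields such as ValueIfNull that are modeled as *float64.
func ptr[T any](v T) *T { return &v }

func main() {
	// Equivalent to func() *float64 { v := 1.5; return &v }()
	valueIfNull := ptr(1.5)
	fmt.Println(*valueIfNull)
}
```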
func TestStackdriverParseMetadata(t *testing.T) {
for _, testData := range testStackdriverMetadata {
_, err := parseStackdriverMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.authParams, TriggerMetadata: testData.metadata, ResolvedEnv: testStackdriverResolvedEnv}, logr.Discard())
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
}
if testData.isError && err == nil {
t.Error("Expected error but got success")
}
t.Run(testData.comment, func(t *testing.T) {
metadata, err := parseStackdriverMetadata(&scalersconfig.ScalerConfig{
AuthParams: testData.authParams,
TriggerMetadata: testData.metadata,
ResolvedEnv: testStackdriverResolvedEnv,
}, logr.Discard())
if err != nil && !testData.isError {
t.Errorf("Expected success but got error: %v", err)
}
if testData.isError && err == nil {
t.Errorf("Expected error but got success")
}
if !testData.isError && !reflect.DeepEqual(testData.expected, metadata) {
t.Fatalf("Expected %#v but got %#v", testData.expected, metadata)
}
})
}
}
var gcpStackdriverMetricIdentifiers = []struct {
comment string
triggerIndex int
metadata map[string]string
expectedName string
}{
{
comment: "basic metric name",
triggerIndex: 0,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
"credentialsFromEnv": "SAMPLE_CREDS",
},
expectedName: "s0-gcp-stackdriver-myProject",
},
{
comment: "metric name with different index",
triggerIndex: 1,
metadata: map[string]string{
"projectId": "myProject",
"filter": sdFilter,
"credentialsFromEnv": "SAMPLE_CREDS",
},
expectedName: "s1-gcp-stackdriver-myProject",
},
}
func TestGcpStackdriverGetMetricSpecForScaling(t *testing.T) {
for _, testData := range gcpStackdriverMetricIdentifiers {
meta, err := parseStackdriverMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testStackdriverResolvedEnv, TriggerIndex: testData.triggerIndex}, logr.Discard())
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockGcpStackdriverScaler := stackdriverScaler{nil, "", meta, logr.Discard()}
for _, test := range gcpStackdriverMetricIdentifiers {
t.Run(test.comment, func(t *testing.T) {
meta, err := parseStackdriverMetadata(&scalersconfig.ScalerConfig{
TriggerMetadata: test.metadata,
ResolvedEnv: testStackdriverResolvedEnv,
TriggerIndex: test.triggerIndex,
}, logr.Discard())
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
metricSpec := mockGcpStackdriverScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name
if metricName != testData.name {
t.Error("Wrong External metric source name:", metricName)
}
mockScaler := stackdriverScaler{
metadata: meta,
logger: logr.Discard(),
}
metricSpec := mockScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name
if metricName != test.expectedName {
t.Errorf("Wrong metric name - got %s, want %s", metricName, test.expectedName)
}
})
}
}


@ -4,7 +4,6 @@ import (
"context"
"errors"
"fmt"
"strconv"
"cloud.google.com/go/storage"
"github.com/go-logr/logr"
@ -18,13 +17,6 @@ import (
kedautil "github.com/kedacore/keda/v2/pkg/util"
)
const (
// Default for how many objects per a single scaled processor
defaultTargetObjectCount = 100
// A limit on iterating bucket objects
defaultMaxBucketItemsToScan = 1000
)
type gcsScaler struct {
client *storage.Client
bucket *storage.BucketHandle
@ -34,14 +26,30 @@ type gcsScaler struct {
}
type gcsMetadata struct {
bucketName string
gcpAuthorization *gcp.AuthorizationMetadata
maxBucketItemsToScan int64
metricName string
targetObjectCount int64
activationTargetObjectCount int64
blobDelimiter string
blobPrefix string
BucketName string `keda:"name=bucketName, order=triggerMetadata"`
TargetObjectCount int64 `keda:"name=targetObjectCount, order=triggerMetadata, default=100"`
ActivationTargetObjectCount int64 `keda:"name=activationTargetObjectCount, order=triggerMetadata, default=0"`
MaxBucketItemsToScan int64 `keda:"name=maxBucketItemsToScan, order=triggerMetadata, default=1000"`
BlobDelimiter string `keda:"name=blobDelimiter, order=triggerMetadata, optional"`
BlobPrefix string `keda:"name=blobPrefix, order=triggerMetadata, optional"`
gcpAuthorization *gcp.AuthorizationMetadata
metricName string
triggerIndex int
}
func (g *gcsMetadata) Validate() error {
if g.TargetObjectCount <= 0 {
return fmt.Errorf("targetObjectCount must be a positive number")
}
if g.ActivationTargetObjectCount < 0 {
return fmt.Errorf("activationTargetObjectCount must not be negative")
}
if g.MaxBucketItemsToScan <= 0 {
return fmt.Errorf("maxBucketItemsToScan must be a positive number")
}
return nil
}
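With typed config parsing handling presence and type conversion, range checks move into a dedicated `Validate` method, as above. A runnable sketch of that parse-then-validate split (the two-field `gcsMeta` struct here is a trimmed stand-in for the real `gcsMetadata`):

```go
package main

import "fmt"

// gcsMeta carries only the numeric fields to show the range checks.
type gcsMeta struct {
	TargetObjectCount    int64
	MaxBucketItemsToScan int64
}

// Validate rejects values that parsed fine but make no sense for
// scaling, mirroring the checks the diff adds.
func (g *gcsMeta) Validate() error {
	if g.TargetObjectCount <= 0 {
		return fmt.Errorf("targetObjectCount must be a positive number")
	}
	if g.MaxBucketItemsToScan <= 0 {
		return fmt.Errorf("maxBucketItemsToScan must be a positive number")
	}
	return nil
}

func main() {
	m := &gcsMeta{TargetObjectCount: 100, MaxBucketItemsToScan: 0}
	fmt.Println(m.Validate())
}
```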
// NewGcsScaler creates a new gcsScaler
@ -53,7 +61,7 @@ func NewGcsScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
logger := InitializeLogger(config, "gcp_storage_scaler")
meta, err := parseGcsMetadata(config, logger)
meta, err := parseGcsMetadata(config)
if err != nil {
return nil, fmt.Errorf("error parsing GCP storage metadata: %w", err)
}
@ -77,9 +85,9 @@ func NewGcsScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
return nil, fmt.Errorf("storage.NewClient: %w", err)
}
bucket := client.Bucket(meta.bucketName)
bucket := client.Bucket(meta.BucketName)
if bucket == nil {
return nil, fmt.Errorf("failed to create a handle to bucket %s", meta.bucketName)
return nil, fmt.Errorf("failed to create a handle to bucket %s", meta.BucketName)
}
logger.Info(fmt.Sprintf("Metadata %v", meta))
@ -93,58 +101,14 @@ func NewGcsScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
}, nil
}
func parseGcsMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*gcsMetadata, error) {
meta := gcsMetadata{}
meta.targetObjectCount = defaultTargetObjectCount
meta.maxBucketItemsToScan = defaultMaxBucketItemsToScan
if val, ok := config.TriggerMetadata["bucketName"]; ok {
if val == "" {
logger.Error(nil, "no bucket name given")
return nil, fmt.Errorf("no bucket name given")
}
meta.bucketName = val
} else {
logger.Error(nil, "no bucket name given")
return nil, fmt.Errorf("no bucket name given")
func parseGcsMetadata(config *scalersconfig.ScalerConfig) (*gcsMetadata, error) {
meta := &gcsMetadata{triggerIndex: config.TriggerIndex}
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing gcs metadata: %w", err)
}
if val, ok := config.TriggerMetadata["targetObjectCount"]; ok {
targetObjectCount, err := strconv.ParseInt(val, 10, 64)
if err != nil {
logger.Error(err, "Error parsing targetObjectCount")
return nil, fmt.Errorf("error parsing targetObjectCount: %w", err)
}
meta.targetObjectCount = targetObjectCount
}
meta.activationTargetObjectCount = 0
if val, ok := config.TriggerMetadata["activationTargetObjectCount"]; ok {
activationTargetObjectCount, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return nil, fmt.Errorf("activationTargetObjectCount parsing error %w", err)
}
meta.activationTargetObjectCount = activationTargetObjectCount
}
if val, ok := config.TriggerMetadata["maxBucketItemsToScan"]; ok {
maxBucketItemsToScan, err := strconv.ParseInt(val, 10, 64)
if err != nil {
logger.Error(err, "Error parsing maxBucketItemsToScan")
return nil, fmt.Errorf("error parsing maxBucketItemsToScan: %w", err)
}
meta.maxBucketItemsToScan = maxBucketItemsToScan
}
if val, ok := config.TriggerMetadata["blobDelimiter"]; ok {
meta.blobDelimiter = val
}
if val, ok := config.TriggerMetadata["blobPrefix"]; ok {
meta.blobPrefix = val
if err := meta.Validate(); err != nil {
return nil, err
}
auth, err := gcp.GetGCPAuthorization(config)
@ -153,10 +117,10 @@ func parseGcsMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*
}
meta.gcpAuthorization = auth
var metricName = kedautil.NormalizeString(fmt.Sprintf("gcp-storage-%s", meta.bucketName))
var metricName = kedautil.NormalizeString(fmt.Sprintf("gcp-storage-%s", meta.BucketName))
meta.metricName = GenerateMetricNameWithIndex(config.TriggerIndex, metricName)
return &meta, nil
return meta, nil
}
func (s *gcsScaler) Close(context.Context) error {
@ -172,27 +136,27 @@ func (s *gcsScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
Metric: v2.MetricIdentifier{
Name: s.metadata.metricName,
},
Target: GetMetricTarget(s.metricType, s.metadata.targetObjectCount),
Target: GetMetricTarget(s.metricType, s.metadata.TargetObjectCount),
}
metricSpec := v2.MetricSpec{External: externalMetric, Type: externalMetricType}
return []v2.MetricSpec{metricSpec}
}
// GetMetricsAndActivity returns the number of items in the bucket (up to s.metadata.maxBucketItemsToScan)
// GetMetricsAndActivity returns the number of items in the bucket (up to s.metadata.MaxBucketItemsToScan)
func (s *gcsScaler) GetMetricsAndActivity(ctx context.Context, metricName string) ([]external_metrics.ExternalMetricValue, bool, error) {
items, err := s.getItemCount(ctx, s.metadata.maxBucketItemsToScan)
items, err := s.getItemCount(ctx, s.metadata.MaxBucketItemsToScan)
if err != nil {
return []external_metrics.ExternalMetricValue{}, false, err
}
metric := GenerateMetricInMili(metricName, float64(items))
return []external_metrics.ExternalMetricValue{metric}, items > s.metadata.activationTargetObjectCount, nil
return []external_metrics.ExternalMetricValue{metric}, items > s.metadata.ActivationTargetObjectCount, nil
}
// getItemCount gets the number of items in the bucket, up to maxCount
func (s *gcsScaler) getItemCount(ctx context.Context, maxCount int64) (int64, error) {
query := &storage.Query{Delimiter: s.metadata.blobDelimiter, Prefix: s.metadata.blobPrefix}
query := &storage.Query{Delimiter: s.metadata.BlobDelimiter, Prefix: s.metadata.BlobPrefix}
err := query.SetAttrSelection([]string{"Size"})
if err != nil {
s.logger.Error(err, "failed to set attribute selection")
@ -204,22 +168,22 @@ func (s *gcsScaler) getItemCount(ctx context.Context, maxCount int64) (int64, er
for count < maxCount {
item, err := it.Next()
if err == iterator.Done {
break
if err != nil {
if errors.Is(err, iterator.Done) {
break
}
if errors.Is(err, storage.ErrBucketNotExist) {
s.logger.Info("Bucket " + s.metadata.BucketName + " doesn't exist")
return 0, nil
}
s.logger.Error(err, "failed to enumerate items in bucket "+s.metadata.BucketName)
return count, err
}
// The folder is retrieved as an entity, so if size is 0
// we can skip it
if item.Size == 0 {
continue
}
if err != nil {
if errors.Is(err, storage.ErrBucketNotExist) {
s.logger.Info("Bucket " + s.metadata.bucketName + " doesn't exist")
return 0, nil
}
s.logger.Error(err, "failed to enumerate items in bucket "+s.metadata.bucketName)
return count, err
}
count++
}
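The loop rewrite above consolidates all error cases under a single `if err != nil` and switches to `errors.Is`, which still matches when a sentinel has been wrapped. A self-contained sketch of that shape (the `next`/`count` functions and the two sentinel errors are stand-ins for the real `iterator.Done` and `storage.ErrBucketNotExist` handling):

```go
package main

import (
	"errors"
	"fmt"
)

var errDone = errors.New("done")          // stands in for iterator.Done
var errNotExist = errors.New("not exist") // stands in for storage.ErrBucketNotExist

// next yields 0, 1, 2 and then a wrapped done sentinel.
func next(i int) (int, error) {
	if i >= 3 {
		return 0, fmt.Errorf("iteration finished: %w", errDone)
	}
	return i, nil
}

// count mirrors the consolidated error handling in getItemCount:
// one error branch, sentinel checks via errors.Is, then the happy path.
func count() (int64, error) {
	var n int64
	for i := 0; ; i++ {
		_, err := next(i)
		if err != nil {
			if errors.Is(err, errDone) {
				break
			}
			if errors.Is(err, errNotExist) {
				return 0, nil
			}
			return n, err
		}
		n++
	}
	return n, nil
}

func main() {
	n, _ := count()
	fmt.Println(n)
}
```

Using `errors.Is` rather than `==` is the robust choice here: it keeps working even if a layer in between wraps the sentinel with `fmt.Errorf("…: %w", err)`.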

View File

@ -54,7 +54,7 @@ var gcpGcsMetricIdentifiers = []gcpGcsMetricIdentifier{
func TestGcsParseMetadata(t *testing.T) {
for _, testData := range testGcsMetadata {
_, err := parseGcsMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.authParams, TriggerMetadata: testData.metadata, ResolvedEnv: testGcsResolvedEnv}, logr.Discard())
_, err := parseGcsMetadata(&scalersconfig.ScalerConfig{AuthParams: testData.authParams, TriggerMetadata: testData.metadata, ResolvedEnv: testGcsResolvedEnv})
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
}
@ -66,7 +66,7 @@ func TestGcsParseMetadata(t *testing.T) {
func TestGcsGetMetricSpecForScaling(t *testing.T) {
for _, testData := range gcpGcsMetricIdentifiers {
meta, err := parseGcsMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testGcsResolvedEnv, TriggerIndex: testData.triggerIndex}, logr.Discard())
meta, err := parseGcsMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, ResolvedEnv: testGcsResolvedEnv, TriggerIndex: testData.triggerIndex})
if err != nil {
t.Fatal("Could not parse metadata:", err)
}


@ -20,11 +20,10 @@ import (
)
const (
defaultTargetWorkflowQueueLength = 1
defaultGithubAPIURL = "https://api.github.com"
ORG = "org"
ENT = "ent"
REPO = "repo"
ORG = "org"
ENT = "ent"
REPO = "repo"
githubDefaultPerPage = 30
)
var reservedLabels = []string{"self-hosted", "linux", "x64"}
@ -41,19 +40,19 @@ type githubRunnerScaler struct {
}
type githubRunnerMetadata struct {
githubAPIURL string
owner string
runnerScope string
personalAccessToken *string
repos []string
labels []string
noDefaultLabels bool
enableEtags bool
targetWorkflowQueueLength int64
triggerIndex int
applicationID *int64
installationID *int64
applicationKey *string
GithubAPIURL string `keda:"name=githubApiURL, order=triggerMetadata;resolvedEnv, default=https://api.github.com"`
Owner string `keda:"name=owner, order=triggerMetadata;resolvedEnv"`
RunnerScope string `keda:"name=runnerScope, order=triggerMetadata;resolvedEnv, enum=org;ent;repo"`
PersonalAccessToken string `keda:"name=personalAccessToken, order=authParams, optional"`
Repos []string `keda:"name=repos, order=triggerMetadata;resolvedEnv, optional"`
Labels []string `keda:"name=labels, order=triggerMetadata;resolvedEnv, optional"`
NoDefaultLabels bool `keda:"name=noDefaultLabels, order=triggerMetadata;resolvedEnv, default=false"`
EnableEtags bool `keda:"name=enableEtags, order=triggerMetadata;resolvedEnv, default=false"`
TargetWorkflowQueueLength int64 `keda:"name=targetWorkflowQueueLength, order=triggerMetadata;resolvedEnv, default=1"`
TriggerIndex int
ApplicationID int64 `keda:"name=applicationID, order=triggerMetadata;resolvedEnv, optional"`
InstallationID int64 `keda:"name=installationID, order=triggerMetadata;resolvedEnv, optional"`
ApplicationKey string `keda:"name=appKey, order=authParams, optional"`
}
type WorkflowRuns struct {
@ -345,13 +344,13 @@ func NewGitHubRunnerScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
return nil, fmt.Errorf("error parsing GitHub Runner metadata: %w", err)
}
if meta.applicationID != nil && meta.installationID != nil && meta.applicationKey != nil {
if meta.ApplicationID != 0 && meta.InstallationID != 0 && meta.ApplicationKey != "" {
httpTrans := kedautil.CreateHTTPTransport(false)
hc, err := gha.New(httpTrans, *meta.applicationID, *meta.installationID, []byte(*meta.applicationKey))
hc, err := gha.New(httpTrans, meta.ApplicationID, meta.InstallationID, []byte(meta.ApplicationKey))
if err != nil {
return nil, fmt.Errorf("error creating GitHub App client: %w, \n appID: %d, instID: %d", err, meta.applicationID, meta.installationID)
return nil, fmt.Errorf("error creating GitHub App client: %w, \n appID: %d, instID: %d", err, meta.ApplicationID, meta.InstallationID)
}
hc.BaseURL = meta.githubAPIURL
hc.BaseURL = meta.GithubAPIURL
httpClient = &http.Client{Transport: hc}
}
@ -372,157 +371,46 @@ func NewGitHubRunnerScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
}, nil
}
func (meta *githubRunnerMetadata) Validate() error {
if meta.ApplicationKey == "" && meta.PersonalAccessToken == "" {
return fmt.Errorf("no personalAccessToken or appKey given")
}
if meta.ApplicationID != 0 || meta.InstallationID != 0 || meta.ApplicationKey != "" {
if err := validateGitHubApp(meta); err != nil {
return err
}
}
return nil
}
func parseGitHubRunnerMetadata(config *scalersconfig.ScalerConfig) (*githubRunnerMetadata, error) {
meta := &githubRunnerMetadata{}
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing github runner metadata: %w", err)
}
meta.TriggerIndex = config.TriggerIndex
return meta, nil
}
func validateGitHubApp(meta *githubRunnerMetadata) error {
if meta.ApplicationID == 0 {
return fmt.Errorf("no applicationID given")
}
if meta.InstallationID == 0 {
return fmt.Errorf("no installationID given")
}
if meta.ApplicationKey == "" {
return fmt.Errorf("no appKey given")
}
return nil
}
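The credential check is all-or-nothing: supplying any one of applicationID, installationID, or appKey commits the user to supplying all three, while a personalAccessToken alone is also sufficient. A self-contained sketch of the same rule (the struct and function names here are hypothetical stand-ins, not the scaler's code):

```go
package main

import (
	"errors"
	"fmt"
)

// appMeta is a hypothetical trimmed copy of the metadata struct, kept
// only to illustrate the credential validation rule.
type appMeta struct {
	ApplicationID       int64
	InstallationID      int64
	ApplicationKey      string
	PersonalAccessToken string
}

// validateApp mirrors the Validate/validateGitHubApp pair: either a PAT
// or a complete GitHub App triple must be present.
func validateApp(m appMeta) error {
	if m.ApplicationKey == "" && m.PersonalAccessToken == "" {
		return errors.New("no personalAccessToken or appKey given")
	}
	// Any App field present means the whole triple is required.
	if m.ApplicationID != 0 || m.InstallationID != 0 || m.ApplicationKey != "" {
		switch {
		case m.ApplicationID == 0:
			return errors.New("no applicationID given")
		case m.InstallationID == 0:
			return errors.New("no installationID given")
		case m.ApplicationKey == "":
			return errors.New("no appKey given")
		}
	}
	return nil
}

func main() {
	fmt.Println(validateApp(appMeta{PersonalAccessToken: "pat"}))            // <nil>
	fmt.Println(validateApp(appMeta{ApplicationID: 1, ApplicationKey: "k"})) // no installationID given
	fmt.Println(validateApp(appMeta{}))                                      // no personalAccessToken or appKey given
}
```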
// getRepositories returns a list of repositories for a given organization, user or enterprise
func (s *githubRunnerScaler) getRepositories(ctx context.Context) ([]string, error) {
if s.metadata.Repos != nil {
return s.metadata.Repos, nil
}
page := 1
@ -530,22 +418,20 @@ func (s *githubRunnerScaler) getRepositories(ctx context.Context) ([]string, err
for {
var url string
switch s.metadata.RunnerScope {
case ORG, ENT:
url = fmt.Sprintf("%s/orgs/%s/repos?page=%s", s.metadata.GithubAPIURL, s.metadata.Owner, strconv.Itoa(page))
case REPO:
url = fmt.Sprintf("%s/users/%s/repos?page=%s", s.metadata.GithubAPIURL, s.metadata.Owner, strconv.Itoa(page))
default:
return nil, fmt.Errorf("runnerScope %s not supported", s.metadata.RunnerScope)
}
body, statusCode, err := s.getGithubRequest(ctx, url, s.metadata, s.httpClient)
if err != nil {
return nil, err
}
if statusCode == 304 && s.metadata.EnableEtags {
if s.previousRepos != nil {
return s.previousRepos, nil
}
@ -565,14 +451,14 @@ func (s *githubRunnerScaler) getRepositories(ctx context.Context) ([]string, err
}
// GitHub returned less than 30 repos per page, so consider no repos left
if len(repos) < githubDefaultPerPage {
break
}
page++
}
if s.metadata.EnableEtags {
s.previousRepos = repoList
}
@ -588,11 +474,11 @@ func (s *githubRunnerScaler) getGithubRequest(ctx context.Context, url string, m
req.Header.Set("Accept", "application/vnd.github.v3+json")
req.Header.Set("X-GitHub-Api-Version", "2022-11-28")
if metadata.ApplicationID == 0 && metadata.PersonalAccessToken != "" {
req.Header.Set("Authorization", "Bearer "+metadata.PersonalAccessToken)
}
if s.metadata.EnableEtags {
if etag, found := s.etags[url]; found {
req.Header.Set("If-None-Match", etag)
}
@ -610,7 +496,7 @@ func (s *githubRunnerScaler) getGithubRequest(ctx context.Context, url string, m
_ = r.Body.Close()
if r.StatusCode != 200 {
if r.StatusCode == 304 && s.metadata.EnableEtags {
s.logger.V(1).Info(fmt.Sprintf("The github rest api for the url: %s returned status %d %s", url, r.StatusCode, http.StatusText(r.StatusCode)))
return []byte{}, r.StatusCode, nil
}
@ -627,7 +513,7 @@ func (s *githubRunnerScaler) getGithubRequest(ctx context.Context, url string, m
return []byte{}, r.StatusCode, fmt.Errorf("the GitHub REST API returned error. url: %s status: %d response: %s", url, r.StatusCode, string(b))
}
if s.metadata.EnableEtags {
if etag := r.Header.Get("ETag"); etag != "" {
s.etags[url] = etag
}
@ -650,12 +536,12 @@ func stripDeadRuns(allWfrs []WorkflowRuns) []WorkflowRun {
// getWorkflowRunJobs returns a list of jobs for a given workflow run
func (s *githubRunnerScaler) getWorkflowRunJobs(ctx context.Context, workflowRunID int64, repoName string) ([]Job, error) {
url := fmt.Sprintf("%s/repos/%s/%s/actions/runs/%d/jobs?per_page=100", s.metadata.GithubAPIURL, s.metadata.Owner, repoName, workflowRunID)
body, statusCode, err := s.getGithubRequest(ctx, url, s.metadata, s.httpClient)
if err != nil {
return nil, err
}
if statusCode == 304 && s.metadata.EnableEtags {
if s.previousJobs[repoName] != nil {
return s.previousJobs[repoName], nil
}
@ -669,7 +555,7 @@ func (s *githubRunnerScaler) getWorkflowRunJobs(ctx context.Context, workflowRun
return nil, err
}
if s.metadata.EnableEtags {
s.previousJobs[repoName] = jobs.Jobs
}
@ -678,14 +564,14 @@ func (s *githubRunnerScaler) getWorkflowRunJobs(ctx context.Context, workflowRun
// getWorkflowRuns returns a list of workflow runs for a given repository
func (s *githubRunnerScaler) getWorkflowRuns(ctx context.Context, repoName string, status string) (*WorkflowRuns, error) {
url := fmt.Sprintf("%s/repos/%s/%s/actions/runs?status=%s&per_page=100", s.metadata.GithubAPIURL, s.metadata.Owner, repoName, status)
body, statusCode, err := s.getGithubRequest(ctx, url, s.metadata, s.httpClient)
if err != nil && statusCode == 404 {
return nil, nil
} else if err != nil {
return nil, err
}
if statusCode == 304 && s.metadata.EnableEtags {
if s.previousWfrs[repoName][status] != nil {
return s.previousWfrs[repoName][status], nil
}
@ -699,7 +585,7 @@ func (s *githubRunnerScaler) getWorkflowRuns(ctx context.Context, repoName strin
return nil, err
}
if s.metadata.EnableEtags {
if _, repoFound := s.previousWfrs[repoName]; !repoFound {
s.previousWfrs[repoName] = map[string]*WorkflowRuns{status: &wfrs}
} else {
@ -771,7 +657,7 @@ func (s *githubRunnerScaler) GetWorkflowQueueLength(ctx context.Context) (int64,
return -1, err
}
for _, job := range jobs {
if (job.Status == "queued" || job.Status == "in_progress") && canRunnerMatchLabels(job.Labels, s.metadata.Labels, s.metadata.NoDefaultLabels) {
queueCount++
}
}
@ -790,15 +676,15 @@ func (s *githubRunnerScaler) GetMetricsAndActivity(ctx context.Context, metricNa
metric := GenerateMetricInMili(metricName, float64(queueLen))
return []external_metrics.ExternalMetricValue{metric}, queueLen >= s.metadata.TargetWorkflowQueueLength, nil
}
func (s *githubRunnerScaler) GetMetricSpecForScaling(_ context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.TriggerIndex, kedautil.NormalizeString(fmt.Sprintf("github-runner-%s", s.metadata.Owner))),
},
Target: GetMetricTarget(s.metricType, s.metadata.TargetWorkflowQueueLength),
}
metricSpec := v2.MetricSpec{External: externalMetric, Type: externalMetricType}
return []v2.MetricSpec{metricSpec}


@ -55,31 +55,33 @@ var testAuthParams = map[string]string{
var testGitHubRunnerMetadata = []parseGitHubRunnerMetadataTestData{
// nothing passed
{"empty", map[string]string{}, true, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]\nmissing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// properly formed
{"properly formed", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, true, false, ""},
// properly formed with no labels and no repos
{"properly formed, no labels or repos", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "owner": "ownername", "targetWorkflowQueueLength": "1"}, true, false, ""},
// string for int64
{"string for int64-1", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "owner": "ownername", "targetWorkflowQueueLength": "a"}, true, true, "error parsing github runner metadata: unable to set param \"targetWorkflowQueueLength\" value \"a\": unable to unmarshal to field type int64: invalid character 'a' looking for beginning of value"},
// formed from env
{"formed from env", map[string]string{"githubApiURLFromEnv": "GITHUB_API_URL", "runnerScopeFromEnv": "RUNNER_SCOPE", "ownerFromEnv": "OWNER", "reposFromEnv": "REPOS", "targetWorkflowQueueLength": "1"}, true, false, ""},
// missing runnerScope
{"missing runnerScope", map[string]string{"githubApiURL": "https://api.github.com", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, true, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// empty runnerScope
{"empty runnerScope", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": "", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, true, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// invalid runnerScope
{"invalid runnerScope", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": "a", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, true, true, "error parsing github runner metadata: parameter \"runnerScope\" value \"a\" must be one of [org ent repo]"},
// missing owner
{"missing owner", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "repos": "reponame", "targetWorkflowQueueLength": "1"}, true, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]"},
// empty owner
{"empty owner", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "owner": "", "repos": "reponame", "targetWorkflowQueueLength": "1"}, true, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]"},
// empty token
{"empty targetWorkflowQueueLength", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "owner": "ownername", "repos": "reponame"}, true, false, ""},
// missing installationID From Env
{"missing installationID Env", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "applicationIDFromEnv": "APP_ID"}, true, true, "error parsing github runner metadata: no installationID given"},
// missing applicationID From Env
{"missing applicationID Env", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "installationIDFromEnv": "INST_ID"}, true, true, "error parsing github runner metadata: no applicationID given"},
// nothing passed
{"empty, no envs", map[string]string{}, false, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]\nmissing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// empty githubApiURL
{"empty githubApiURL, no envs", map[string]string{"githubApiURL": "", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, false, false, ""},
// properly formed
@ -87,17 +89,17 @@ var testGitHubRunnerMetadata = []parseGitHubRunnerMetadataTestData{
// properly formed with no labels and no repos
{"properly formed, no envs, labels or repos", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ENT, "owner": "ownername", "targetWorkflowQueueLength": "1"}, false, false, ""},
// formed from env
{"formed from env, no envs", map[string]string{"githubApiURLFromEnv": "GITHUB_API_URL", "ownerFromEnv": "OWNER", "repos": "reponame", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]\nmissing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// formed from default env
{"formed from default env, no envs", map[string]string{"owner": "ownername", "repos": "reponame", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// missing runnerScope
{"missing runnerScope, no envs", map[string]string{"githubApiURL": "https://api.github.com", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// empty runnerScope
{"empty runnerScope, no envs", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": "", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
// empty owner
{"empty owner, no envs", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "owner": "", "repos": "reponame", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]"},
// missing owner
{"missing owner, no envs", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": REPO, "repos": "reponame", "targetWorkflowQueueLength": "1"}, false, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]"},
// missing labels, no envs
{"missing labels, no envs", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "targetWorkflowQueueLength": "1"}, false, false, ""},
// empty labels, no envs
@ -107,15 +109,15 @@ var testGitHubRunnerMetadata = []parseGitHubRunnerMetadataTestData{
// empty repos, no envs
{"empty repos, no envs", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "labels": "golang", "repos": "", "targetWorkflowQueueLength": "1"}, false, false, ""},
// missing installationID
{"missing installationID", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "applicationID": "1"}, true, true, "error parsing github runner metadata: no installationID given"},
// missing applicationID
{"missing applicationID", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "installationID": "1"}, true, true, "error parsing github runner metadata: no applicationID given"},
// missing applicationKey
{"missing applicationKey", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "applicationID": "1", "installationID": "1"}, true, true, "error parsing github runner metadata: no appKey given"},
{"missing runnerScope Env", map[string]string{"githubApiURL": "https://api.github.com", "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "runnerScopeFromEnv": "EMPTY"}, true, true, "error parsing github runner metadata: missing required parameter \"runnerScope\" in [triggerMetadata resolvedEnv]"},
{"missing owner Env", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "ownerFromEnv": "EMPTY"}, true, true, "error parsing github runner metadata: missing required parameter \"owner\" in [triggerMetadata resolvedEnv]"},
{"wrong applicationID", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "applicationID": "id", "installationID": "1"}, true, true, "error parsing github runner metadata: unable to set param \"applicationID\" value \"id\": unable to unmarshal to field type int64: invalid character 'i' looking for beginning of value\nno applicationID given"},
{"wrong installationID", map[string]string{"githubApiURL": "https://api.github.com", "runnerScope": ORG, "owner": "ownername", "repos": "reponame,otherrepo", "labels": "golang", "targetWorkflowQueueLength": "1", "applicationID": "1", "installationID": "id"}, true, true, "error parsing github runner metadata: unable to set param \"installationID\" value \"id\": unable to unmarshal to field type int64: invalid character 'i' looking for beginning of value\nno installationID given"},
}
func TestGitHubRunnerParseMetadata(t *testing.T) {
@ -144,11 +146,11 @@ func getGitHubTestMetaData(url string) *githubRunnerMetadata {
testpat := "testpat"
meta := githubRunnerMetadata{
GithubAPIURL: url,
RunnerScope: REPO,
Owner: "testOwner",
PersonalAccessToken: testpat,
TargetWorkflowQueueLength: 1,
}
return &meta
@ -264,7 +266,7 @@ func TestNewGitHubRunnerScaler_QueueLength_NoRateLeft(t *testing.T) {
}
tRepo := []string{"test"}
mockGitHubRunnerScaler.metadata.Repos = tRepo
_, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -287,8 +289,8 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo(t *testing.T) {
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -311,8 +313,8 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_ExtraRunnerLabels(t *testi
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar", "other", "more"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -335,8 +337,8 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_LessRunnerLabels(t *testin
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -358,9 +360,9 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_WithScalerDefaultLabels_Wi
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.NoDefaultLabels = false
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -383,9 +385,9 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_WithScalerDefaultLabels_Wi
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.NoDefaultLabels = false
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -408,9 +410,9 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_WithoutScalerDefaultLabels
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.NoDefaultLabels = true
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -433,9 +435,9 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_WithoutScalerDefaultLabels
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.NoDefaultLabels = true
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -483,9 +485,9 @@ func TestNewGitHubRunnerScaler_QueueLength_SingleRepo_WithNotModified(t *testing
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.EnableEtags = true
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.previousJobs = previousJobs
mockGitHubRunnerScaler.previousWfrs = previousWfrs
@ -510,7 +512,7 @@ func TestNewGitHubRunnerScaler_404(t *testing.T) {
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
_, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@ -531,8 +533,8 @@ func TestNewGitHubRunnerScaler_BadConnection(t *testing.T) {
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.repos = []string{"test"}
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
_, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -554,8 +556,8 @@ func TestNewGitHubRunnerScaler_BadURL(t *testing.T) {
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.repos = []string{"test"}
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
_, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -579,7 +581,7 @@ func TestNewGitHubRunnerScaler_QueueLength_NoRunnerLabels(t *testing.T) {
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.repos = []string{"test"}
mockGitHubRunnerScaler.metadata.Repos = []string{"test"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -604,9 +606,9 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_Assigned(t *testing.T) {
}
tRepo := []string{"test", "test2"}
mockGitHubRunnerScaler.metadata.repos = tRepo
mockGitHubRunnerScaler.metadata.runnerScope = ORG
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Repos = tRepo
mockGitHubRunnerScaler.metadata.RunnerScope = ORG
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -632,9 +634,9 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_Assigned_OneBad(t *testing.
}
tRepo := []string{"test", "test2", "BadRepo"}
mockGitHubRunnerScaler.metadata.repos = tRepo
mockGitHubRunnerScaler.metadata.runnerScope = ORG
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Repos = tRepo
mockGitHubRunnerScaler.metadata.RunnerScope = ORG
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -659,7 +661,7 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_PulledUserRepos(t *testing.
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -683,7 +685,7 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_PulledUserRepos_Exceeds30En
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
if err != nil {
@@ -706,8 +708,8 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_PulledOrgRepos(t *testing.T
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.runnerScope = ORG
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.RunnerScope = ORG
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -731,8 +733,8 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_PulledEntRepos(t *testing.T
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.runnerScope = ENT
mockGitHubRunnerScaler.metadata.labels = []string{"foo", "bar"}
mockGitHubRunnerScaler.metadata.RunnerScope = ENT
mockGitHubRunnerScaler.metadata.Labels = []string{"foo", "bar"}
queueLen, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())
@@ -756,7 +758,7 @@ func TestNewGitHubRunnerScaler_QueueLength_MultiRepo_PulledBadRepos(t *testing.T
httpClient: http.DefaultClient,
}
mockGitHubRunnerScaler.metadata.runnerScope = "bad"
mockGitHubRunnerScaler.metadata.RunnerScope = "bad"
_, err := mockGitHubRunnerScaler.GetWorkflowQueueLength(context.Background())


@@ -3,7 +3,6 @@ package scalers
import (
"context"
"fmt"
"strconv"
"time"
"github.com/Huawei/gophercloud"
@@ -18,14 +17,6 @@ import (
kedautil "github.com/kedacore/keda/v2/pkg/util"
)
const (
defaultCloudeyeMetricCollectionTime = 300
defaultCloudeyeMetricFilter = "average"
defaultCloudeyeMetricPeriod = "300"
defaultHuaweiCloud = "myhuaweicloud.com"
)
type huaweiCloudeyeScaler struct {
metricType v2.MetricTargetType
metadata *huaweiCloudeyeMetadata
@@ -33,42 +24,40 @@ type huaweiCloudeyeScaler struct {
}
type huaweiCloudeyeMetadata struct {
namespace string
metricsName string
dimensionName string
dimensionValue string
targetMetricValue float64
activationTargetMetricValue float64
metricCollectionTime int64
metricFilter string
metricPeriod string
huaweiAuthorization huaweiAuthorizationMetadata
triggerIndex int
Namespace string `keda:"name=namespace, order=triggerMetadata"`
MetricsName string `keda:"name=metricName, order=triggerMetadata"`
DimensionName string `keda:"name=dimensionName, order=triggerMetadata"`
DimensionValue string `keda:"name=dimensionValue, order=triggerMetadata"`
TargetMetricValue float64 `keda:"name=targetMetricValue, order=triggerMetadata"`
ActivationTargetMetricValue float64 `keda:"name=activationTargetMetricValue, order=triggerMetadata, default=0"`
MinMetricValue float64 `keda:"name=minMetricValue, order=triggerMetadata, optional, deprecatedAnnounce=The 'minMetricValue' setting is DEPRECATED and will be removed in v2.20 - Use 'activationTargetMetricValue' instead"`
MetricCollectionTime int64 `keda:"name=metricCollectionTime, order=triggerMetadata, default=300"`
MetricFilter string `keda:"name=metricFilter, order=triggerMetadata, enum=average;max;min;sum, default=average"`
MetricPeriod string `keda:"name=metricPeriod, order=triggerMetadata, default=300"`
HuaweiAuthorization huaweiAuthorizationMetadata
}
type huaweiAuthorizationMetadata struct {
IdentityEndpoint string
IdentityEndpoint string `keda:"name=IdentityEndpoint, order=authParams"`
ProjectID string `keda:"name=ProjectID, order=authParams"`
DomainID string `keda:"name=DomainID, order=authParams"`
Region string `keda:"name=Region, order=authParams"`
Domain string `keda:"name=Domain, order=authParams"`
Cloud string `keda:"name=Cloud, order=authParams, default=myhuaweicloud.com"`
AccessKey string `keda:"name=AccessKey, order=authParams"`
SecretKey string `keda:"name=SecretKey, order=authParams"`
}
// user project id
ProjectID string
DomainID string
// region
Region string
// Cloud name
Domain string
// Cloud name
Cloud string
AccessKey string // Access Key
SecretKey string // Secret key
func (h *huaweiCloudeyeMetadata) Validate() error {
if h.MinMetricValue != 0 && h.ActivationTargetMetricValue == 0 {
h.ActivationTargetMetricValue = h.MinMetricValue
}
return nil
}
// NewHuaweiCloudeyeScaler creates a new huaweiCloudeyeScaler
@@ -80,7 +69,7 @@ func NewHuaweiCloudeyeScaler(config *scalersconfig.ScalerConfig) (Scaler, error)
logger := InitializeLogger(config, "huawei_cloudeye_scaler")
meta, err := parseHuaweiCloudeyeMetadata(config, logger)
meta, err := parseHuaweiCloudeyeMetadata(config) // Removed logger parameter
if err != nil {
return nil, fmt.Errorf("error parsing Cloudeye metadata: %w", err)
}
@@ -92,150 +81,12 @@ func NewHuaweiCloudeyeScaler(config *scalersconfig.ScalerConfig) (Scaler, error)
}, nil
}
func parseHuaweiCloudeyeMetadata(config *scalersconfig.ScalerConfig, logger logr.Logger) (*huaweiCloudeyeMetadata, error) {
meta := huaweiCloudeyeMetadata{}
meta.metricCollectionTime = defaultCloudeyeMetricCollectionTime
meta.metricFilter = defaultCloudeyeMetricFilter
meta.metricPeriod = defaultCloudeyeMetricPeriod
if val, ok := config.TriggerMetadata["namespace"]; ok && val != "" {
meta.namespace = val
} else {
return nil, fmt.Errorf("namespace not given")
}
if val, ok := config.TriggerMetadata["metricName"]; ok && val != "" {
meta.metricsName = val
} else {
return nil, fmt.Errorf("metric Name not given")
}
if val, ok := config.TriggerMetadata["dimensionName"]; ok && val != "" {
meta.dimensionName = val
} else {
return nil, fmt.Errorf("dimension Name not given")
}
if val, ok := config.TriggerMetadata["dimensionValue"]; ok && val != "" {
meta.dimensionValue = val
} else {
return nil, fmt.Errorf("dimension Value not given")
}
if val, ok := config.TriggerMetadata["targetMetricValue"]; ok && val != "" {
targetMetricValue, err := strconv.ParseFloat(val, 64)
if err != nil {
logger.Error(err, "Error parsing targetMetricValue metadata")
} else {
meta.targetMetricValue = targetMetricValue
}
} else {
return nil, fmt.Errorf("target Metric Value not given")
}
meta.activationTargetMetricValue = 0
if val, ok := config.TriggerMetadata["activationTargetMetricValue"]; ok && val != "" {
activationTargetMetricValue, err := strconv.ParseFloat(val, 64)
if err != nil {
logger.Error(err, "Error parsing activationTargetMetricValue metadata")
}
meta.activationTargetMetricValue = activationTargetMetricValue
}
if val, ok := config.TriggerMetadata["minMetricValue"]; ok && val != "" {
minMetricValue, err := strconv.ParseFloat(val, 64)
if err != nil {
logger.Error(err, "Error parsing minMetricValue metadata")
} else {
logger.Error(err, "minMetricValue is deprecated and will be removed in next versions, please use activationTargetMetricValue instead")
meta.activationTargetMetricValue = minMetricValue
}
} else {
return nil, fmt.Errorf("min Metric Value not given")
}
if val, ok := config.TriggerMetadata["metricCollectionTime"]; ok && val != "" {
metricCollectionTime, err := strconv.Atoi(val)
if err != nil {
logger.Error(err, "Error parsing metricCollectionTime metadata")
} else {
meta.metricCollectionTime = int64(metricCollectionTime)
}
}
if val, ok := config.TriggerMetadata["metricFilter"]; ok && val != "" {
meta.metricFilter = val
}
if val, ok := config.TriggerMetadata["metricPeriod"]; ok && val != "" {
_, err := strconv.Atoi(val)
if err != nil {
logger.Error(err, "Error parsing metricPeriod metadata")
} else {
meta.metricPeriod = val
}
}
auth, err := gethuaweiAuthorization(config.AuthParams)
if err != nil {
return nil, err
}
meta.huaweiAuthorization = auth
func parseHuaweiCloudeyeMetadata(config *scalersconfig.ScalerConfig) (*huaweiCloudeyeMetadata, error) {
meta := &huaweiCloudeyeMetadata{}
meta.triggerIndex = config.TriggerIndex
return &meta, nil
}
func gethuaweiAuthorization(authParams map[string]string) (huaweiAuthorizationMetadata, error) {
meta := huaweiAuthorizationMetadata{}
if authParams["IdentityEndpoint"] != "" {
meta.IdentityEndpoint = authParams["IdentityEndpoint"]
} else {
return meta, fmt.Errorf("identityEndpoint doesn't exist in the authParams")
}
if authParams["ProjectID"] != "" {
meta.ProjectID = authParams["ProjectID"]
} else {
return meta, fmt.Errorf("projectID doesn't exist in the authParams")
}
if authParams["DomainID"] != "" {
meta.DomainID = authParams["DomainID"]
} else {
return meta, fmt.Errorf("domainID doesn't exist in the authParams")
}
if authParams["Region"] != "" {
meta.Region = authParams["Region"]
} else {
return meta, fmt.Errorf("region doesn't exist in the authParams")
}
if authParams["Domain"] != "" {
meta.Domain = authParams["Domain"]
} else {
return meta, fmt.Errorf("domain doesn't exist in the authParams")
}
if authParams["Cloud"] != "" {
meta.Cloud = authParams["Cloud"]
} else {
meta.Cloud = defaultHuaweiCloud
}
if authParams["AccessKey"] != "" {
meta.AccessKey = authParams["AccessKey"]
} else {
return meta, fmt.Errorf("accessKey doesn't exist in the authParams")
}
if authParams["SecretKey"] != "" {
meta.SecretKey = authParams["SecretKey"]
} else {
return meta, fmt.Errorf("secretKey doesn't exist in the authParams")
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing huawei cloudeye metadata: %w", err)
}
return meta, nil
@@ -250,15 +101,15 @@ func (s *huaweiCloudeyeScaler) GetMetricsAndActivity(_ context.Context, metricNa
}
metric := GenerateMetricInMili(metricName, metricValue)
return []external_metrics.ExternalMetricValue{metric}, metricValue > s.metadata.activationTargetMetricValue, nil
return []external_metrics.ExternalMetricValue{metric}, metricValue > s.metadata.ActivationTargetMetricValue, nil
}
func (s *huaweiCloudeyeScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("huawei-cloudeye-%s", s.metadata.metricsName))),
Name: GenerateMetricNameWithIndex(s.metadata.triggerIndex, kedautil.NormalizeString(fmt.Sprintf("huawei-cloudeye-%s", s.metadata.MetricsName))),
},
Target: GetMetricTargetMili(s.metricType, s.metadata.targetMetricValue),
Target: GetMetricTargetMili(s.metricType, s.metadata.TargetMetricValue),
}
metricSpec := v2.MetricSpec{External: externalMetric, Type: externalMetricType}
return []v2.MetricSpec{metricSpec}
@@ -270,14 +121,14 @@ func (s *huaweiCloudeyeScaler) Close(context.Context) error {
func (s *huaweiCloudeyeScaler) GetCloudeyeMetrics() (float64, error) {
options := aksk.AKSKOptions{
IdentityEndpoint: s.metadata.huaweiAuthorization.IdentityEndpoint,
ProjectID: s.metadata.huaweiAuthorization.ProjectID,
AccessKey: s.metadata.huaweiAuthorization.AccessKey,
SecretKey: s.metadata.huaweiAuthorization.SecretKey,
Region: s.metadata.huaweiAuthorization.Region,
Domain: s.metadata.huaweiAuthorization.Domain,
DomainID: s.metadata.huaweiAuthorization.DomainID,
Cloud: s.metadata.huaweiAuthorization.Cloud,
IdentityEndpoint: s.metadata.HuaweiAuthorization.IdentityEndpoint,
ProjectID: s.metadata.HuaweiAuthorization.ProjectID,
AccessKey: s.metadata.HuaweiAuthorization.AccessKey,
SecretKey: s.metadata.HuaweiAuthorization.SecretKey,
Region: s.metadata.HuaweiAuthorization.Region,
Domain: s.metadata.HuaweiAuthorization.Domain,
DomainID: s.metadata.HuaweiAuthorization.DomainID,
Cloud: s.metadata.HuaweiAuthorization.Cloud,
}
provider, err := openstack.AuthenticatedClient(options)
@@ -299,20 +150,20 @@ func (s *huaweiCloudeyeScaler) GetCloudeyeMetrics() (float64, error) {
opts := metricdata.BatchQueryOpts{
Metrics: []metricdata.Metric{
{
Namespace: s.metadata.namespace,
Namespace: s.metadata.Namespace,
Dimensions: []map[string]string{
{
"name": s.metadata.dimensionName,
"value": s.metadata.dimensionValue,
"name": s.metadata.DimensionName,
"value": s.metadata.DimensionValue,
},
},
MetricName: s.metadata.metricsName,
MetricName: s.metadata.MetricsName,
},
},
From: time.Now().Truncate(time.Minute).Add(time.Second*-1*time.Duration(s.metadata.metricCollectionTime)).UnixNano() / 1e6,
From: time.Now().Truncate(time.Minute).Add(time.Second*-1*time.Duration(s.metadata.MetricCollectionTime)).UnixNano() / 1e6,
To: time.Now().Truncate(time.Minute).UnixNano() / 1e6,
Period: s.metadata.metricPeriod,
Filter: s.metadata.metricFilter,
Period: s.metadata.MetricPeriod,
Filter: s.metadata.MetricFilter,
}
metricdatas, err := metricdata.BatchQuery(sc, opts).ExtractMetricDatas()
@@ -330,7 +181,7 @@ func (s *huaweiCloudeyeScaler) GetCloudeyeMetrics() (float64, error) {
var metricValue float64
if len(metricdatas[0].Datapoints) > 0 {
v, ok := metricdatas[0].Datapoints[0][s.metadata.metricFilter].(float64)
v, ok := metricdatas[0].Datapoints[0][s.metadata.MetricFilter].(float64)
if ok {
metricValue = v
} else {

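The Huawei Cloudeye refactor above replaces hand-written `config.TriggerMetadata` lookups with declarative `keda:` struct tags resolved by `config.TypedConfig`. As a rough illustration of how tag-driven parsing of this kind works — a minimal sketch only, not KEDA's actual implementation; the `cfg` tag name and `populate` helper are invented for this example:

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// populate fills exported struct fields from a string map, driven by a
// `cfg:"name"` tag -- a toy version of the struct-tag parsing idea.
func populate(target interface{}, values map[string]string) error {
	v := reflect.ValueOf(target).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("cfg")
		if tag == "" {
			continue
		}
		raw, ok := values[tag]
		if !ok {
			return fmt.Errorf("missing required parameter %q", tag)
		}
		switch f := v.Field(i); f.Kind() {
		case reflect.String:
			f.SetString(raw)
		case reflect.Float64:
			n, err := strconv.ParseFloat(raw, 64)
			if err != nil {
				return fmt.Errorf("invalid value for %q: %w", tag, err)
			}
			f.SetFloat(n)
		}
	}
	return nil
}

type metadata struct {
	Namespace         string  `cfg:"namespace"`
	TargetMetricValue float64 `cfg:"targetMetricValue"`
}

func main() {
	var m metadata
	err := populate(&m, map[string]string{
		"namespace":         "SYS.ELB",
		"targetMetricValue": "100",
	})
	fmt.Println(err, m.Namespace, m.TargetMetricValue) // <nil> SYS.ELB 100
}
```

The payoff mirrors the diff: required/optional/default/enum rules live next to each field, and the ~150 lines of per-key `if val, ok := ...` boilerplate (and the logger parameter) disappear from the parse function.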

@@ -134,15 +134,6 @@ var testHuaweiCloudeyeMetadata = []parseHuaweiCloudeyeMetadataTestData{
testHuaweiAuthenticationWithCloud,
true,
"metadata miss targetMetricValue"},
{map[string]string{
"namespace": "SYS.ELB",
"dimensionName": "lbaas_instance_id",
"dimensionValue": "5e052238-0346-xxb0-86ea-92d9f33e29d2",
"metricName": "mb_l7_qps",
"targetMetricValue": "100"},
testHuaweiAuthenticationWithCloud,
true,
"metadata miss minMetricValue"},
{map[string]string{
"namespace": "SYS.ELB",
"dimensionName": "lbaas_instance_id",
@@ -153,6 +144,16 @@ var testHuaweiCloudeyeMetadata = []parseHuaweiCloudeyeMetadataTestData{
testHuaweiAuthenticationWithCloud,
true,
"invalid activationTargetMetricValue"},
{map[string]string{
"namespace": "SYS.ELB",
"dimensionName": "lbaas_instance_id",
"dimensionValue": "5e052238-0346-xxb0-86ea-92d9f33e29d2",
"metricName": "mb_l7_qps",
"targetMetricValue": "100",
"activationTargetMetricValue": "5"},
testHuaweiAuthenticationWithCloud,
false,
"using activationTargetMetricValue"},
}
var huaweiCloudeyeMetricIdentifiers = []huaweiCloudeyeMetricIdentifier{
@@ -162,7 +163,7 @@ var huaweiCloudeyeMetricIdentifiers = []huaweiCloudeyeMetricIdentifier{
func TestHuaweiCloudeyeParseMetadata(t *testing.T) {
for _, testData := range testHuaweiCloudeyeMetadata {
_, err := parseHuaweiCloudeyeMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: testData.authParams}, logr.Discard())
_, err := parseHuaweiCloudeyeMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: testData.authParams})
if err != nil && !testData.isError {
t.Errorf("%s: Expected success but got error %s", testData.comment, err)
}
@@ -174,11 +175,11 @@ func TestHuaweiCloudeyeParseMetadata(t *testing.T) {
func TestHuaweiCloudeyeGetMetricSpecForScaling(t *testing.T) {
for _, testData := range huaweiCloudeyeMetricIdentifiers {
meta, err := parseHuaweiCloudeyeMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: testData.metadataTestData.authParams, TriggerIndex: testData.triggerIndex}, logr.Discard())
meta, err := parseHuaweiCloudeyeMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: testData.metadataTestData.authParams, TriggerIndex: testData.triggerIndex})
if err != nil {
t.Fatal("Could not parse metadata:", err)
}
mockHuaweiCloudeyeScaler := huaweiCloudeyeScaler{"", meta, logr.Discard()}
mockHuaweiCloudeyeScaler := huaweiCloudeyeScaler{metricType: "", metadata: meta, logger: logr.Discard()}
metricSpec := mockHuaweiCloudeyeScaler.GetMetricSpecForScaling(context.Background())
metricName := metricSpec[0].External.Metric.Name


@@ -34,7 +34,7 @@ type ibmmqMetadata struct {
Username string `keda:"name=username, order=authParams;resolvedEnv;triggerMetadata"`
Password string `keda:"name=password, order=authParams;resolvedEnv;triggerMetadata"`
UnsafeSsl bool `keda:"name=unsafeSsl, order=triggerMetadata, default=false"`
TLS bool `keda:"name=tls, order=triggerMetadata, default=false, deprecatedAnnounce=The 'tls' setting is DEPRECATED and will be removed in v2.18 - Use 'unsafeSsl' instead"`
TLS bool `keda:"name=tls, order=triggerMetadata, default=false, deprecated=The 'tls' setting is DEPRECATED and is removed in v2.18 - Use 'unsafeSsl' instead"`
CA string `keda:"name=ca, order=authParams, optional"`
Cert string `keda:"name=cert, order=authParams, optional"`
Key string `keda:"name=key, order=authParams, optional"`
@@ -76,11 +76,6 @@ func (m *ibmmqMetadata) Validate() error {
return fmt.Errorf("both cert and key must be provided when using TLS")
}
// TODO: DEPRECATED to be removed in v2.18
if m.TLS && m.UnsafeSsl {
return fmt.Errorf("'tls' and 'unsafeSsl' are both specified. Please use only 'unsafeSsl'")
}
return nil
}
@@ -97,11 +92,6 @@ func NewIBMMQScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
return nil, fmt.Errorf("error parsing IBM MQ metadata: %w", err)
}
// TODO: DEPRECATED to be removed in v2.18
if meta.TLS {
meta.UnsafeSsl = meta.TLS
}
httpClient := kedautil.CreateHTTPClient(config.GlobalHTTPTimeout, meta.UnsafeSsl)
if meta.Cert != "" && meta.Key != "" {


@@ -17,7 +17,7 @@ type parseLiiklusMetadataTestData struct {
name string
metadata map[string]string
ExpectedErr error
ExpectedMetatada *liiklusMetadata
ExpectedMetadata *liiklusMetadata
}
type liiklusMetricIdentifier struct {
@@ -34,7 +34,7 @@ var parseLiiklusMetadataTestDataset = []parseLiiklusMetadataTestData{
"missing required parameter \"address\" in [triggerMetadata]\n" +
"missing required parameter \"topic\" in [triggerMetadata]\n" +
"missing required parameter \"group\" in [triggerMetadata]"),
ExpectedMetatada: nil,
ExpectedMetadata: nil,
},
{
name: "Empty address",
@@ -42,20 +42,20 @@ var parseLiiklusMetadataTestDataset = []parseLiiklusMetadataTestData{
ExpectedErr: fmt.Errorf("error parsing liiklus metadata: " +
"missing required parameter \"address\" in [triggerMetadata]\n" +
"missing required parameter \"group\" in [triggerMetadata]"),
ExpectedMetatada: nil,
ExpectedMetadata: nil,
},
{
name: "Empty group",
metadata: map[string]string{"topic": "foo", "address": "using-mock"},
ExpectedErr: fmt.Errorf("error parsing liiklus metadata: " +
"missing required parameter \"group\" in [triggerMetadata]"),
ExpectedMetatada: nil,
ExpectedMetadata: nil,
},
{
name: "Valid",
metadata: map[string]string{"topic": "foo", "address": "using-mock", "group": "mygroup"},
ExpectedErr: nil,
ExpectedMetatada: &liiklusMetadata{
ExpectedMetadata: &liiklusMetadata{
LagThreshold: 10,
ActivationLagThreshold: 0,
Address: "using-mock",
@@ -69,13 +69,13 @@ var parseLiiklusMetadataTestDataset = []parseLiiklusMetadataTestData{
name: "Invalid activationLagThreshold",
metadata: map[string]string{"topic": "foo", "address": "using-mock", "group": "mygroup", "activationLagThreshold": "invalid"},
ExpectedErr: fmt.Errorf("error parsing liiklus metadata: unable to set param \"activationLagThreshold\" value \"invalid\": unable to unmarshal to field type int64: invalid character 'i' looking for beginning of value"),
ExpectedMetatada: nil,
ExpectedMetadata: nil,
},
{
name: "Custom lagThreshold",
metadata: map[string]string{"topic": "foo", "address": "using-mock", "group": "mygroup", "lagThreshold": "20"},
ExpectedErr: nil,
ExpectedMetatada: &liiklusMetadata{
ExpectedMetadata: &liiklusMetadata{
LagThreshold: 20,
ActivationLagThreshold: 0,
Address: "using-mock",
@@ -111,24 +111,24 @@ func TestLiiklusParseMetadata(t *testing.T) {
if err != nil {
t.Errorf("Expected success but got error %v", err)
}
if testData.ExpectedMetatada != nil {
if testData.ExpectedMetatada.Address != meta.Address {
t.Errorf("Expected address %q but got %q", testData.ExpectedMetatada.Address, meta.Address)
if testData.ExpectedMetadata != nil {
if testData.ExpectedMetadata.Address != meta.Address {
t.Errorf("Expected address %q but got %q", testData.ExpectedMetadata.Address, meta.Address)
}
if meta.Group != testData.ExpectedMetatada.Group {
t.Errorf("Expected group %q but got %q", testData.ExpectedMetatada.Group, meta.Group)
if meta.Group != testData.ExpectedMetadata.Group {
t.Errorf("Expected group %q but got %q", testData.ExpectedMetadata.Group, meta.Group)
}
if meta.Topic != testData.ExpectedMetatada.Topic {
t.Errorf("Expected topic %q but got %q", testData.ExpectedMetatada.Topic, meta.Topic)
if meta.Topic != testData.ExpectedMetadata.Topic {
t.Errorf("Expected topic %q but got %q", testData.ExpectedMetadata.Topic, meta.Topic)
}
if meta.LagThreshold != testData.ExpectedMetatada.LagThreshold {
t.Errorf("Expected threshold %d but got %d", testData.ExpectedMetatada.LagThreshold, meta.LagThreshold)
if meta.LagThreshold != testData.ExpectedMetadata.LagThreshold {
t.Errorf("Expected threshold %d but got %d", testData.ExpectedMetadata.LagThreshold, meta.LagThreshold)
}
if meta.ActivationLagThreshold != testData.ExpectedMetatada.ActivationLagThreshold {
t.Errorf("Expected activation threshold %d but got %d", testData.ExpectedMetatada.ActivationLagThreshold, meta.ActivationLagThreshold)
if meta.ActivationLagThreshold != testData.ExpectedMetadata.ActivationLagThreshold {
t.Errorf("Expected activation threshold %d but got %d", testData.ExpectedMetadata.ActivationLagThreshold, meta.ActivationLagThreshold)
}
if meta.GroupVersion != testData.ExpectedMetatada.GroupVersion {
t.Errorf("Expected group version %d but got %d", testData.ExpectedMetatada.GroupVersion, meta.GroupVersion)
if meta.GroupVersion != testData.ExpectedMetadata.GroupVersion {
t.Errorf("Expected group version %d but got %d", testData.ExpectedMetadata.GroupVersion, meta.GroupVersion)
}
}
})


@@ -44,7 +44,7 @@ type metricsAPIScalerMetadata struct {
enableAPIKeyAuth bool
method string // way of providing auth key, either "header" (default) or "query"
// keyParamName is either header key or query param used for passing apikey
// default header is "X-API-KEY", defaul query param is "api_key"
// default header is "X-API-KEY", default query param is "api_key"
keyParamName string
apiKey string
@@ -178,71 +178,75 @@ func parseMetricsAPIMetadata(config *scalersconfig.ScalerConfig) (*metricsAPISca
return nil, fmt.Errorf("no valueLocation given in metadata")
}
authMode, ok := config.TriggerMetadata["authMode"]
// no authMode specified
if !ok {
return &meta, nil
}
authType := authentication.Type(strings.TrimSpace(authMode))
switch authType {
case authentication.APIKeyAuthType:
if len(config.AuthParams["apiKey"]) == 0 {
return nil, errors.New("no apikey provided")
}
meta.apiKey = config.AuthParams["apiKey"]
// default behaviour is header. only change if query param requested
meta.method = "header"
meta.enableAPIKeyAuth = true
if config.TriggerMetadata["method"] == methodValueQuery {
meta.method = methodValueQuery
}
if len(config.TriggerMetadata["keyParamName"]) > 0 {
meta.keyParamName = config.TriggerMetadata["keyParamName"]
}
case authentication.BasicAuthType:
if len(config.AuthParams["username"]) == 0 {
return nil, errors.New("no username given")
}
meta.username = config.AuthParams["username"]
// password is optional. For convenience, many application implements basic auth with
// username as apikey and password as empty
meta.password = config.AuthParams["password"]
meta.enableBaseAuth = true
case authentication.TLSAuthType:
if len(config.AuthParams["ca"]) == 0 {
return nil, errors.New("no ca given")
}
if len(config.AuthParams["cert"]) == 0 {
return nil, errors.New("no cert given")
}
meta.cert = config.AuthParams["cert"]
if len(config.AuthParams["key"]) == 0 {
return nil, errors.New("no key given")
}
meta.key = config.AuthParams["key"]
meta.enableTLS = true
case authentication.BearerAuthType:
if len(config.AuthParams["token"]) == 0 {
return nil, errors.New("no token provided")
}
meta.bearerToken = config.AuthParams["token"]
meta.enableBearerAuth = true
default:
return nil, fmt.Errorf("err incorrect value for authMode is given: %s", authMode)
// Check for multiple authentication methods
authModes := strings.Split(config.TriggerMetadata["authMode"], ",")
for _, authMode := range authModes {
authType := authentication.Type(strings.TrimSpace(authMode))
switch authType {
case authentication.APIKeyAuthType:
if len(config.AuthParams["apiKey"]) == 0 {
return nil, errors.New("no apikey provided")
}
meta.apiKey = config.AuthParams["apiKey"]
// default behaviour is header. only change if query param requested
meta.method = "header"
meta.enableAPIKeyAuth = true
if config.TriggerMetadata["method"] == methodValueQuery {
meta.method = methodValueQuery
}
if len(config.TriggerMetadata["keyParamName"]) > 0 {
meta.keyParamName = config.TriggerMetadata["keyParamName"]
}
case authentication.BasicAuthType:
if len(config.AuthParams["username"]) == 0 {
return nil, errors.New("no username given")
}
meta.username = config.AuthParams["username"]
// password is optional. For convenience, many applications implement basic auth with
// username as apikey and password as empty
meta.password = config.AuthParams["password"]
meta.enableBaseAuth = true
case authentication.TLSAuthType:
if len(config.AuthParams["ca"]) == 0 {
return nil, errors.New("no ca given")
}
if len(config.AuthParams["cert"]) == 0 {
return nil, errors.New("no cert given")
}
meta.cert = config.AuthParams["cert"]
if len(config.AuthParams["key"]) == 0 {
return nil, errors.New("no key given")
}
meta.key = config.AuthParams["key"]
meta.enableTLS = true
case authentication.BearerAuthType:
if len(config.AuthParams["token"]) == 0 {
return nil, errors.New("no token provided")
}
meta.bearerToken = config.AuthParams["token"]
meta.enableBearerAuth = true
case "":
// Skip empty auth type (can happen when splitting comma-separated list)
continue
default:
return nil, fmt.Errorf("err incorrect value for authMode is given: %s", authMode)
}
}
// Handle CA certificate separately to allow it to be used with other auth methods
if len(config.AuthParams["ca"]) > 0 {
meta.ca = config.AuthParams["ca"]
}
return &meta, nil
}
@@ -480,56 +484,46 @@ func (s *metricsAPIScaler) GetMetricsAndActivity(ctx context.Context, metricName
}
func getMetricAPIServerRequest(ctx context.Context, meta *metricsAPIScalerMetadata) (*http.Request, error) {
var req *http.Request
var err error
var requestURL string
switch {
case meta.enableAPIKeyAuth:
if meta.method == methodValueQuery {
url, _ := neturl.Parse(meta.url)
queryString := url.Query()
if len(meta.keyParamName) == 0 {
queryString.Set("api_key", meta.apiKey)
} else {
queryString.Set(meta.keyParamName, meta.apiKey)
}
url.RawQuery = queryString.Encode()
req, err = http.NewRequestWithContext(ctx, "GET", url.String(), nil)
if err != nil {
return nil, err
}
// Handle API Key as query parameter if needed
if meta.enableAPIKeyAuth && meta.method == methodValueQuery {
url, _ := neturl.Parse(meta.url)
queryString := url.Query()
if len(meta.keyParamName) == 0 {
queryString.Set("api_key", meta.apiKey)
} else {
// default behaviour is to use header method
req, err = http.NewRequestWithContext(ctx, "GET", meta.url, nil)
if err != nil {
return nil, err
}
if len(meta.keyParamName) == 0 {
req.Header.Add("X-API-KEY", meta.apiKey)
} else {
req.Header.Add(meta.keyParamName, meta.apiKey)
}
}
case meta.enableBaseAuth:
req, err = http.NewRequestWithContext(ctx, "GET", meta.url, nil)
if err != nil {
return nil, err
queryString.Set(meta.keyParamName, meta.apiKey)
}
url.RawQuery = queryString.Encode()
requestURL = url.String()
} else {
requestURL = meta.url
}
// Create the request
req, err := http.NewRequestWithContext(ctx, "GET", requestURL, nil)
if err != nil {
return nil, err
}
// Add API Key as header if needed
if meta.enableAPIKeyAuth && meta.method != methodValueQuery {
if len(meta.keyParamName) == 0 {
req.Header.Add("X-API-KEY", meta.apiKey)
} else {
req.Header.Add(meta.keyParamName, meta.apiKey)
}
}
// Add Basic Auth if enabled
if meta.enableBaseAuth {
req.SetBasicAuth(meta.username, meta.password)
case meta.enableBearerAuth:
req, err = http.NewRequestWithContext(ctx, "GET", meta.url, nil)
if err != nil {
return nil, err
}
}
// Add Bearer token if enabled
if meta.enableBearerAuth {
req.Header.Add("Authorization", fmt.Sprintf("Bearer %s", meta.bearerToken))
default:
req, err = http.NewRequestWithContext(ctx, "GET", meta.url, nil)
if err != nil {
return nil, err
}
}
return req, nil


@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
@@ -75,6 +76,8 @@ var testMetricsAPIAuthMetadata = []metricAPIAuthMetadataTestData{
{map[string]string{"url": "http://dummy:1230/api/v1/", "valueLocation": "metric", "targetValue": "42", "unsafeSsl": "false"}, map[string]string{}, false},
// failed unsafeSsl non bool
{map[string]string{"url": "http://dummy:1230/api/v1/", "valueLocation": "metric", "targetValue": "42", "unsafeSsl": "yes"}, map[string]string{}, true},
// success with both apiKey and TLS authentication
{map[string]string{"url": "http://dummy:1230/api/v1/", "valueLocation": "metric", "targetValue": "42", "authMode": "apiKey,tls"}, map[string]string{"apiKey": "apiikey", "ca": "caaa", "cert": "ceert", "key": "keey"}, false},
}
func TestParseMetricsAPIMetadata(t *testing.T) {
@@ -187,16 +190,52 @@ func TestMetricAPIScalerAuthParams(t *testing.T) {
}
if err == nil {
if (meta.enableAPIKeyAuth && !(testData.metadata["authMode"] == "apiKey")) ||
(meta.enableBaseAuth && !(testData.metadata["authMode"] == "basic")) ||
(meta.enableTLS && !(testData.metadata["authMode"] == "tls")) ||
(meta.enableBearerAuth && !(testData.metadata["authMode"] == "bearer")) {
t.Error("wrong auth mode detected")
authModes := strings.Split(testData.metadata["authMode"], ",")
// Check if each enabled auth method is present in the authModes
if meta.enableAPIKeyAuth && !containsAuthMode(authModes, "apiKey") {
t.Error("API Key auth enabled but not in authMode")
}
if meta.enableBaseAuth && !containsAuthMode(authModes, "basic") {
t.Error("Basic auth enabled but not in authMode")
}
if meta.enableTLS && !containsAuthMode(authModes, "tls") {
t.Error("TLS auth enabled but not in authMode")
}
if meta.enableBearerAuth && !containsAuthMode(authModes, "bearer") {
t.Error("Bearer auth enabled but not in authMode")
}
// Check if each auth mode in authModes is enabled
for _, mode := range authModes {
mode = strings.TrimSpace(mode)
if mode == "apiKey" && !meta.enableAPIKeyAuth {
t.Error("apiKey in authMode but not enabled")
}
if mode == "basic" && !meta.enableBaseAuth {
t.Error("basic in authMode but not enabled")
}
if mode == "tls" && !meta.enableTLS {
t.Error("tls in authMode but not enabled")
}
if mode == "bearer" && !meta.enableBearerAuth {
t.Error("bearer in authMode but not enabled")
}
}
}
}
}
// Helper function to check if an auth mode is in the list
func containsAuthMode(modes []string, mode string) bool {
for _, m := range modes {
if strings.TrimSpace(m) == mode {
return true
}
}
return false
}
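The helper above tolerates whitespace around the comma-separated modes, so `"apiKey,tls"` and `"apiKey, tls"` are treated the same. A standalone sketch of that parsing (the helper is reproduced here so the example runs on its own):

```go
package main

import (
	"fmt"
	"strings"
)

// containsAuthMode reports whether mode appears in modes, ignoring
// surrounding whitespace on each entry.
func containsAuthMode(modes []string, mode string) bool {
	for _, m := range modes {
		if strings.TrimSpace(m) == mode {
			return true
		}
	}
	return false
}

func main() {
	modes := strings.Split("apiKey, tls", ",")
	fmt.Println(containsAuthMode(modes, "tls"))   // true: " tls" is trimmed
	fmt.Println(containsAuthMode(modes, "basic")) // false
}
```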
func TestBearerAuth(t *testing.T) {
authentication := map[string]string{
"token": "secure-token",

View File

@ -3,30 +3,19 @@ package scalers
import (
"context"
"database/sql"
"errors"
"fmt"
"net"
"net/url"
"strconv"
// mssql driver required for this scaler
_ "github.com/denisenkom/go-mssqldb"
"github.com/go-logr/logr"
// Import the MS SQL driver so it can register itself with database/sql
_ "github.com/microsoft/go-mssqldb"
v2 "k8s.io/api/autoscaling/v2"
"k8s.io/metrics/pkg/apis/external_metrics"
"github.com/kedacore/keda/v2/pkg/scalers/scalersconfig"
)
var (
// ErrMsSQLNoQuery is returned when "query" is missing from the config.
ErrMsSQLNoQuery = errors.New("no query given")
// ErrMsSQLNoTargetValue is returned when "targetValue" is missing from the config.
ErrMsSQLNoTargetValue = errors.New("no targetValue given")
)
// mssqlScaler exposes a data pointer to mssqlMetadata and sql.DB connection
type mssqlScaler struct {
metricType v2.MetricTargetType
metadata *mssqlMetadata
@ -34,42 +23,27 @@ type mssqlScaler struct {
logger logr.Logger
}
// mssqlMetadata defines metadata used by KEDA to query a Microsoft SQL database
type mssqlMetadata struct {
ConnectionString string `keda:"name=connectionString, order=authParams;resolvedEnv, optional"`
Username string `keda:"name=username, order=authParams;triggerMetadata, optional"`
Password string `keda:"name=password, order=authParams;resolvedEnv, optional"`
Host string `keda:"name=host, order=authParams;triggerMetadata, optional"`
Port int `keda:"name=port, order=authParams;triggerMetadata, optional"`
Database string `keda:"name=database, order=authParams;triggerMetadata, optional"`
Query string `keda:"name=query, order=triggerMetadata"`
TargetValue float64 `keda:"name=targetValue, order=triggerMetadata"`
ActivationTargetValue float64 `keda:"name=activationTargetValue, order=triggerMetadata, default=0"`
TriggerIndex int
}
func (m *mssqlMetadata) Validate() error {
if m.ConnectionString == "" && m.Host == "" {
return fmt.Errorf("must provide either connectionstring or host")
}
return nil
}
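The `keda:` struct tags above declare where each parameter may come from and in what order the sources are consulted (e.g. `order=authParams;resolvedEnv`). A simplified, hypothetical sketch of that resolution rule — KEDA's actual `TypedConfig` is reflection-based, and `resolveParam` is an illustrative name, not part of the codebase:

```go
package main

import "fmt"

// resolveParam checks each source map in the declared order and returns the
// first non-empty value, mimicking a tag like order=authParams;triggerMetadata.
func resolveParam(name string, sources ...map[string]string) string {
	for _, src := range sources {
		if v, ok := src[name]; ok && v != "" {
			return v
		}
	}
	return ""
}

func main() {
	authParams := map[string]string{"password": "fromAuth"}
	triggerMetadata := map[string]string{"host": "example.database.windows.net"}
	fmt.Println(resolveParam("password", authParams, triggerMetadata)) // fromAuth
	fmt.Println(resolveParam("host", authParams, triggerMetadata))     // example.database.windows.net
}
```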
// NewMSSQLScaler creates a new mssql scaler
func NewMSSQLScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
metricType, err := GetMetricTargetType(config)
if err != nil {
@ -80,158 +54,92 @@ func NewMSSQLScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
meta, err := parseMSSQLMetadata(config)
if err != nil {
return nil, err
}
scaler := &mssqlScaler{
metricType: metricType,
metadata: meta,
logger: logger,
}
conn, err := newMSSQLConnection(scaler)
if err != nil {
return nil, fmt.Errorf("error establishing mssql connection: %w", err)
}
scaler.connection = conn
return scaler, nil
}
// parseMSSQLMetadata takes a ScalerConfig and returns a mssqlMetadata or an error if the config is invalid
func parseMSSQLMetadata(config *scalersconfig.ScalerConfig) (*mssqlMetadata, error) {
meta := &mssqlMetadata{}
meta.TriggerIndex = config.TriggerIndex
if err := config.TypedConfig(meta); err != nil {
return nil, err
}
if !config.AsMetricSource && meta.TargetValue == 0 {
return nil, fmt.Errorf("no targetValue given")
}
return meta, nil
}
// newMSSQLConnection returns a new, opened SQL connection for the provided scaler
func newMSSQLConnection(s *mssqlScaler) (*sql.DB, error) {
connStr := getMSSQLConnectionString(s)
db, err := sql.Open("sqlserver", connStr)
if err != nil {
s.logger.Error(err, "Found error opening mssql")
return nil, err
}
err = db.Ping()
if err != nil {
s.logger.Error(err, "Found error pinging mssql")
return nil, err
}
return db, nil
}
// getMSSQLConnectionString returns a connection string built from the scaler's metadata
func getMSSQLConnectionString(s *mssqlScaler) string {
meta := s.metadata
if meta.ConnectionString != "" {
return meta.ConnectionString
}
query := url.Values{}
if meta.Database != "" {
query.Add("database", meta.Database)
}
connectionURL := &url.URL{Scheme: "sqlserver", RawQuery: query.Encode()}
if meta.Username != "" {
if meta.Password != "" {
connectionURL.User = url.UserPassword(meta.Username, meta.Password)
} else {
connectionURL.User = url.User(meta.Username)
}
}
if meta.Port > 0 {
connectionURL.Host = net.JoinHostPort(meta.Host, fmt.Sprintf("%d", meta.Port))
} else {
connectionURL.Host = meta.Host
}
return connectionURL.String()
}
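When no explicit connection string is given, the scaler assembles a `sqlserver://` URL from the individual fields, and `url.UserPassword` percent-encodes special characters in the credentials (which is why the tests below expect `Password#1` to appear as `Password%231`). A self-contained sketch of that URL-building branch — `buildMSSQLURL` is an illustrative helper, not part of the scaler:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// buildMSSQLURL mirrors the URL-building branch of getMSSQLConnectionString.
func buildMSSQLURL(host string, port int, user, password, database string) string {
	query := url.Values{}
	if database != "" {
		query.Add("database", database)
	}
	u := &url.URL{Scheme: "sqlserver", RawQuery: query.Encode()}
	if user != "" {
		if password != "" {
			// Credentials are percent-encoded by url.URL.String().
			u.User = url.UserPassword(user, password)
		} else {
			u.User = url.User(user)
		}
	}
	if port > 0 {
		u.Host = net.JoinHostPort(host, fmt.Sprintf("%d", port))
	} else {
		u.Host = host
	}
	return u.String()
}

func main() {
	fmt.Println(buildMSSQLURL("example.database.windows.net", 1433, "user1", "Password#1", "AdventureWorks"))
	// sqlserver://user1:Password%231@example.database.windows.net:1433?database=AdventureWorks
}
```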
// GetMetricSpecForScaling returns the MetricSpec for the Horizontal Pod Autoscaler
func (s *mssqlScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{
Name: GenerateMetricNameWithIndex(s.metadata.TriggerIndex, "mssql"),
},
Target: GetMetricTargetMili(s.metricType, s.metadata.TargetValue),
}
metricSpec := v2.MetricSpec{
@ -241,7 +149,6 @@ func (s *mssqlScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
return []v2.MetricSpec{metricSpec}
}
// GetMetricsAndActivity returns a value for a supported metric or an error if there is a problem getting the metric
func (s *mssqlScaler) GetMetricsAndActivity(ctx context.Context, metricName string) ([]external_metrics.ExternalMetricValue, bool, error) {
num, err := s.getQueryResult(ctx)
if err != nil {
@ -250,13 +157,13 @@ func (s *mssqlScaler) GetMetricsAndActivity(ctx context.Context, metricName stri
metric := GenerateMetricInMili(metricName, num)
return []external_metrics.ExternalMetricValue{metric}, num > s.metadata.ActivationTargetValue, nil
}
// getQueryResult returns the result of the scaler query
func (s *mssqlScaler) getQueryResult(ctx context.Context) (float64, error) {
var value float64
err := s.connection.QueryRowContext(ctx, s.metadata.Query).Scan(&value)
switch {
case err == sql.ErrNoRows:
value = 0
@ -268,7 +175,6 @@ func (s *mssqlScaler) getQueryResult(ctx context.Context) (float64, error) {
return value, nil
}
// Close closes the mssql database connections
func (s *mssqlScaler) Close(context.Context) error {
err := s.connection.Close()
if err != nil {

View File

@ -2,183 +2,159 @@ package scalers
import (
"context"
"errors"
"testing"
"github.com/stretchr/testify/assert"
"github.com/kedacore/keda/v2/pkg/scalers/scalersconfig"
)
type parseMSSQLMetadataTestData struct {
name string
metadata map[string]string
resolvedEnv map[string]string
authParams map[string]string
expectedError string
expectedConnectionString string
expectedMetricName string
}
var testMSSQLMetadata = []parseMSSQLMetadataTestData{
{
name: "Direct connection string input",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"connectionString": "sqlserver://localhost"},
expectedConnectionString: "sqlserver://localhost",
},
{
name: "Direct connection string input with activationTargetValue",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "activationTargetValue": "20"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"connectionString": "sqlserver://localhost"},
expectedConnectionString: "sqlserver://localhost",
},
{
name: "Direct connection string input, OLEDB format",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"connectionString": "Server=example.database.windows.net;port=1433;Database=AdventureWorks;Persist Security Info=False;User ID=user1;Password=Password#1;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"},
expectedConnectionString: "Server=example.database.windows.net;port=1433;Database=AdventureWorks;Persist Security Info=False;User ID=user1;Password=Password#1;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;",
},
{
name: "Connection string input via environment variables",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "connectionStringFromEnv": "test_connection_string"},
resolvedEnv: map[string]string{"test_connection_string": "sqlserver://localhost?database=AdventureWorks"},
authParams: map[string]string{},
expectedConnectionString: "sqlserver://localhost?database=AdventureWorks",
},
{
name: "Connection string generated from minimal required metadata",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "host": "127.0.0.1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{},
expectedMetricName: "mssql",
expectedConnectionString: "sqlserver://127.0.0.1",
},
{
name: "Connection string generated from full metadata",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "host": "example.database.windows.net", "username": "user1", "passwordFromEnv": "test_password", "port": "1433", "database": "AdventureWorks"},
resolvedEnv: map[string]string{"test_password": "Password#1"},
authParams: map[string]string{},
expectedConnectionString: "sqlserver://user1:Password%231@example.database.windows.net:1433?database=AdventureWorks",
},
{
name: "Variation of previous: no port, password from authParams, metricName from database name",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "host": "example.database.windows.net", "username": "user2", "database": "AdventureWorks"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"password": "Password#2"},
expectedMetricName: "mssql",
expectedConnectionString: "sqlserver://user2:Password%232@example.database.windows.net?database=AdventureWorks",
},
{
name: "Connection string generated from full authParams",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"password": "Password#2", "host": "example.database.windows.net", "username": "user2", "database": "AdventureWorks", "port": "1433"},
expectedMetricName: "mssql",
expectedConnectionString: "sqlserver://user2:Password%232@example.database.windows.net:1433?database=AdventureWorks",
},
{
name: "Variation of previous: no database name, metricName from host",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1", "host": "example.database.windows.net", "username": "user3"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"password": "Password#3"},
expectedMetricName: "mssql",
expectedConnectionString: "sqlserver://user3:Password%233@example.database.windows.net",
},
{
name: "Error: missing query",
metadata: map[string]string{"targetValue": "1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"connectionString": "sqlserver://localhost"},
expectedError: "missing required parameter \"query\" in [triggerMetadata]",
},
{
name: "Error: missing targetValue",
metadata: map[string]string{"query": "SELECT 1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{"connectionString": "sqlserver://localhost"},
expectedError: "missing required parameter \"targetValue\" in [triggerMetadata]",
},
{
name: "Error: missing host",
metadata: map[string]string{"query": "SELECT 1", "targetValue": "1"},
resolvedEnv: map[string]string{},
authParams: map[string]string{},
expectedError: "must provide either connectionstring or host",
},
}
func TestParseMSSQLMetadata(t *testing.T) {
for _, testData := range testMSSQLMetadata {
t.Run(testData.name, func(t *testing.T) {
config := &scalersconfig.ScalerConfig{
TriggerMetadata: testData.metadata,
ResolvedEnv: testData.resolvedEnv,
AuthParams: testData.authParams,
}
meta, err := parseMSSQLMetadata(config)
if testData.expectedError != "" {
assert.EqualError(t, err, testData.expectedError)
} else {
assert.NoError(t, err)
assert.NotNil(t, meta)
}
})
}
}
func TestMSSQLGetMetricSpecForScaling(t *testing.T) {
for _, testData := range testMSSQLMetadata {
t.Run(testData.name, func(t *testing.T) {
if testData.expectedError != "" {
return
}
meta, err := parseMSSQLMetadata(&scalersconfig.ScalerConfig{
TriggerMetadata: testData.metadata,
ResolvedEnv: testData.resolvedEnv,
AuthParams: testData.authParams,
})
assert.NoError(t, err)
mockMSSQLScaler := mssqlScaler{
metadata: meta,
}
metricSpec := mockMSSQLScaler.GetMetricSpecForScaling(context.Background())
assert.NotNil(t, metricSpec)
assert.Equal(t, 1, len(metricSpec))
assert.Contains(t, metricSpec[0].External.Metric.Name, "mssql")
})
}
}

View File

@ -7,11 +7,12 @@ import (
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
"time"
"github.com/go-logr/logr"
"github.com/stretchr/testify/assert"
"go.uber.org/atomic"
v2 "k8s.io/api/autoscaling/v2"
"github.com/kedacore/keda/v2/pkg/scalers/scalersconfig"
@ -106,6 +107,81 @@ var nsqMetricIdentifiers = []nsqMetricIdentifier{
{&parseNSQMetadataTestDataset[0], 1, "s1-nsq-topic-channel", "AverageValue"},
}
// Create mock handlers that return fixed responses
func createMockNSQdHandler(depth int64, statsError bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if statsError {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
response := fmt.Sprintf(`{"topics":[{"topic_name":"topic","channels":[{"channel_name":"channel","depth":%d}]}]}`, depth)
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(response))
}
}
func createMockLookupdHandler(hostname, port string, lookupError bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if lookupError {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
response := fmt.Sprintf(`{"producers":[{"broadcast_address":"%s","http_port":%s}]}`, hostname, port)
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(response))
}
}
func createMockNSQdDepthHandler(statsError, channelPaused bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if statsError {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
var response string
if channelPaused {
response = `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100, "paused":true}]}]}`
} else {
response = `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`
}
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(response))
}
}
func createMockLookupdDepthHandler(hostname, port string, lookupError, topicNotExist, producersNotExist bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if lookupError {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
var response string
switch {
case topicNotExist:
response = `{"message": "TOPIC_NOT_FOUND"}`
case producersNotExist:
response = `{"producers":[]}`
default:
response = fmt.Sprintf(`{"producers":[{"broadcast_address":"%s","http_port":%s}]}`, hostname, port)
}
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(response))
}
}
func createMockServerWithResponse(statusCode int, response string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if statusCode != http.StatusOK {
http.Error(w, "Internal Server Error", statusCode)
return
}
w.Header().Set("Content-Type", "application/json")
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(response))
}
}
func TestNSQParseMetadata(t *testing.T) {
for _, testData := range parseNSQMetadataTestDataset {
config := scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata}
@ -162,21 +238,13 @@ func TestNSQGetMetricsAndActivity(t *testing.T) {
},
}
for _, tc := range testCases {
mockNSQdServer := httptest.NewServer(createMockNSQdHandler(tc.expectedDepth, tc.statsError))
defer mockNSQdServer.Close()
parsedNSQdURL, err := url.Parse(mockNSQdServer.URL)
assert.Nil(t, err)
mockNSQLookupdServer := httptest.NewServer(createMockLookupdHandler(parsedNSQdURL.Hostname(), parsedNSQdURL.Port(), tc.lookupError))
defer mockNSQLookupdServer.Close()
parsedNSQLookupdURL, err := url.Parse(mockNSQLookupdServer.URL)
@ -184,11 +252,13 @@ func TestNSQGetMetricsAndActivity(t *testing.T) {
nsqlookupdHost := net.JoinHostPort(parsedNSQLookupdURL.Hostname(), parsedNSQLookupdURL.Port())
activationThreshold := fmt.Sprintf("%d", tc.activationdDepthThreshold)
config := scalersconfig.ScalerConfig{TriggerMetadata: map[string]string{
"nsqLookupdHTTPAddresses": nsqlookupdHost,
"topic": "topic",
"channel": "channel",
"activationDepthThreshold": fmt.Sprintf("%d", tc.activationdDepthThreshold),
"activationDepthThreshold": activationThreshold,
}}
meta, err := parseNSQMetadata(&config)
assert.Nil(t, err)
@ -281,45 +351,13 @@ func TestNSQGetTopicChannelDepth(t *testing.T) {
}
for _, tc := range testCases {
mockNSQdServer := httptest.NewServer(createMockNSQdDepthHandler(tc.statsError, tc.channelPaused))
defer mockNSQdServer.Close()
parsedNSQdURL, err := url.Parse(mockNSQdServer.URL)
assert.Nil(t, err)
mockNSQLookupdServer := httptest.NewServer(createMockLookupdDepthHandler(parsedNSQdURL.Hostname(), parsedNSQdURL.Port(), tc.lookupError, tc.topicNotExist, tc.producersNotExist))
defer mockNSQLookupdServer.Close()
parsedNSQLookupdURL, err := url.Parse(mockNSQLookupdServer.URL)
@ -341,81 +379,74 @@ func TestNSQGetTopicChannelDepth(t *testing.T) {
}
func TestNSQGetTopicProducers(t *testing.T) {
type testCase struct {
responses []string
expectedNSQdHosts []string
errorAtIndex int
description string
}
testCases := []testCase{
{
responses: []string{`{"producers":[], "channels":[]}`},
expectedNSQdHosts: []string{},
errorAtIndex: -1,
description: "No producers or channels",
},
{
responses: []string{`{"producers":[{"broadcast_address":"nsqd-0","http_port":4161}]}`},
expectedNSQdHosts: []string{"nsqd-0:4161"},
errorAtIndex: -1,
description: "Single nsqd host",
},
{
responses: []string{`{"producers":[{"broadcast_address":"nsqd-0","http_port":4161}, {"broadcast_address":"nsqd-1","http_port":4161}]}`, `{"producers":[{"broadcast_address":"nsqd-2","http_port":8161}]}`},
expectedNSQdHosts: []string{"nsqd-0:4161", "nsqd-1:4161", "nsqd-2:8161"},
errorAtIndex: -1,
description: "Multiple nsqd hosts",
},
{
responses: []string{`{"producers":[{"broadcast_address":"nsqd-0","http_port":4161}]}`},
expectedNSQdHosts: []string{"nsqd-0:4161"},
errorAtIndex: -1,
description: "De-dupe nsqd hosts",
},
{
responses: []string{`{"producers":[{"broadcast_address":"nsqd-0","http_port":4161}]}`},
expectedNSQdHosts: []string{},
errorAtIndex: 0,
description: "At least one host responded with error",
},
}
for _, tc := range testCases {
var nsqLookupdHosts []string
// Create separate mock servers for each response
for i, response := range tc.responses {
shouldError := tc.errorAtIndex == i
resp := response
errFlag := shouldError
var handler http.HandlerFunc
if errFlag {
handler = func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
}
} else {
handler = func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(resp))
}
}
mockServer := httptest.NewServer(handler)
defer mockServer.Close()
parsedURL, err := url.Parse(mockServer.URL)
assert.Nil(t, err)
nsqLookupdHosts = append(nsqLookupdHosts, net.JoinHostPort(parsedURL.Hostname(), parsedURL.Port()))
}
s := nsqScaler{httpClient: http.DefaultClient, scheme: "http", metadata: nsqMetadata{NSQLookupdHTTPAddresses: nsqLookupdHosts}}
nsqdHosts, err := s.getTopicProducers(context.Background(), "topic")
if tc.errorAtIndex >= 0 {
assert.NotNil(t, err)
continue
}
@ -465,11 +496,7 @@ func TestNSQGetLookup(t *testing.T) {
s := nsqScaler{httpClient: http.DefaultClient, scheme: "http"}
for _, tc := range testCases {
mockServer := httptest.NewServer(createMockServerWithResponse(tc.serverStatus, tc.serverResponse))
defer mockServer.Close()
parsedURL, err := url.Parse(mockServer.URL)
@@ -494,110 +521,107 @@ func TestNSQGetLookup(t *testing.T) {
}
func TestNSQAggregateDepth(t *testing.T) {
type statusAndResponse struct {
status int
response string
}
type testCase struct {
statusAndResponses []statusAndResponse
expectedDepth int64
isError bool
description string
responses []string
expectedDepth int64
errorAtIndex int
description string
}
testCases := []testCase{
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":null}`},
},
responses: []string{`{"topics":null}`},
expectedDepth: 0,
isError: false,
errorAtIndex: -1,
description: "Topic does not exist",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[]}]}`},
},
responses: []string{`{"topics":[{"topic_name":"topic", "depth":250, "channels":[]}]}`},
expectedDepth: 250,
isError: false,
errorAtIndex: -1,
description: "Topic exists with no channels",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"other_channel", "depth":100}]}]}`},
},
responses: []string{`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"other_channel", "depth":100}]}]}`},
expectedDepth: 250,
isError: false,
errorAtIndex: -1,
description: "Topic exists with different channels",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`},
},
responses: []string{`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`},
expectedDepth: 100,
isError: false,
errorAtIndex: -1,
description: "Topic and channel exist",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100, "paused":true}]}]}`},
},
responses: []string{`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100, "paused":true}]}]}`},
expectedDepth: 0,
isError: false,
errorAtIndex: -1,
description: "Channel is paused",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`},
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":50}]}]}`},
responses: []string{
`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`,
`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":50}]}]}`,
},
expectedDepth: 150,
isError: false,
errorAtIndex: -1,
description: "Sum multiple depth values",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":500, "channels":[]}]}`},
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":400, "channels":[{"channel_name":"other_channel", "depth":300}]}]}`},
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":200, "channels":[{"channel_name":"channel", "depth":100}]}]}`},
responses: []string{
`{"topics":[{"topic_name":"topic", "depth":500, "channels":[]}]}`,
`{"topics":[{"topic_name":"topic", "depth":400, "channels":[{"channel_name":"other_channel", "depth":300}]}]}`,
`{"topics":[{"topic_name":"topic", "depth":200, "channels":[{"channel_name":"channel", "depth":100}]}]}`,
},
expectedDepth: 1000,
isError: false,
errorAtIndex: -1,
description: "Channel doesn't exist on all nsqd hosts",
},
{
statusAndResponses: []statusAndResponse{
{http.StatusOK, `{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`},
{http.StatusInternalServerError, ""},
responses: []string{
`{"topics":[{"topic_name":"topic", "depth":250, "channels":[{"channel_name":"channel", "depth":100}]}]}`,
"",
},
expectedDepth: -1,
isError: true,
errorAtIndex: 1,
description: "At least one host responded with error",
},
}
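The expected depths above encode per-host aggregation rules: a missing topic counts 0, a paused channel counts 0, a matching channel contributes its own depth, and a topic without the channel contributes the topic depth. A sketch of those rules, inferred from the cases rather than taken from the scaler's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type channelStats struct {
	ChannelName string `json:"channel_name"`
	Depth       int64  `json:"depth"`
	Paused      bool   `json:"paused"`
}

type topicStats struct {
	TopicName string         `json:"topic_name"`
	Depth     int64          `json:"depth"`
	Channels  []channelStats `json:"channels"`
}

type statsResponse struct {
	Topics []topicStats `json:"topics"`
}

// depthForHost applies the per-host rules the test cases encode.
func depthForHost(raw, topic, channel string) (int64, error) {
	var resp statsResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		return 0, err
	}
	for _, t := range resp.Topics {
		if t.TopicName != topic {
			continue
		}
		for _, ch := range t.Channels {
			if ch.ChannelName == channel {
				if ch.Paused {
					return 0, nil // paused channels contribute nothing
				}
				return ch.Depth, nil
			}
		}
		return t.Depth, nil // topic exists but channel not created yet
	}
	return 0, nil // topic missing on this host
}

func main() {
	responses := []string{
		`{"topics":[{"topic_name":"topic", "depth":500, "channels":[]}]}`,
		`{"topics":[{"topic_name":"topic", "depth":400, "channels":[{"channel_name":"other_channel", "depth":300}]}]}`,
		`{"topics":[{"topic_name":"topic", "depth":200, "channels":[{"channel_name":"channel", "depth":100}]}]}`,
	}
	var total int64
	for _, raw := range responses {
		d, err := depthForHost(raw, "topic", "channel")
		if err != nil {
			panic(err)
		}
		total += d
	}
	fmt.Println(total) // → 1000
}
```

This reproduces the "Channel doesn't exist on all nsqd hosts" case: 500 + 400 + 100 = 1000.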
s := nsqScaler{httpClient: http.DefaultClient, scheme: "http"}
for _, tc := range testCases {
callCount := atomic.NewInt32(-1)
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
callCount.Inc()
w.WriteHeader(tc.statusAndResponses[callCount.Load()].status)
// nosemgrep: no-fprintf-to-responsewriter
fmt.Fprint(w, tc.statusAndResponses[callCount.Load()].response)
}))
defer mockServer.Close()
parsedURL, err := url.Parse(mockServer.URL)
assert.Nil(t, err)
var nsqdHosts []string
nsqdHost := net.JoinHostPort(parsedURL.Hostname(), parsedURL.Port())
for i := 0; i < len(tc.statusAndResponses); i++ {
nsqdHosts = append(nsqdHosts, nsqdHost)
// Create separate mock servers for each response
for i, response := range tc.responses {
shouldError := tc.errorAtIndex == i
resp := response
errFlag := shouldError
var handler http.HandlerFunc
if errFlag {
handler = func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
}
} else {
handler = func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
http.ServeContent(w, r, "", time.Time{}, strings.NewReader(resp))
}
}
mockServer := httptest.NewServer(handler)
defer mockServer.Close()
parsedURL, err := url.Parse(mockServer.URL)
assert.Nil(t, err)
nsqdHosts = append(nsqdHosts, net.JoinHostPort(parsedURL.Hostname(), parsedURL.Port()))
}
depth, err := s.aggregateDepth(context.Background(), nsqdHosts, "topic", "channel")
if err != nil && tc.isError {
if tc.errorAtIndex >= 0 {
assert.NotNil(t, err)
continue
}
@@ -641,11 +665,7 @@ func TestNSQGetStats(t *testing.T) {
s := nsqScaler{httpClient: http.DefaultClient, scheme: "http"}
for _, tc := range testCases {
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(tc.serverStatus)
// nosemgrep: no-fprintf-to-responsewriter
fmt.Fprint(w, tc.serverResponse)
}))
mockServer := httptest.NewServer(createMockServerWithResponse(tc.serverStatus, tc.serverResponse))
defer mockServer.Close()
parsedURL, err := url.Parse(mockServer.URL)


@@ -31,7 +31,7 @@ type Client struct {
// HTTPClient is the client used for launching HTTP requests.
HTTPClient *http.Client
// authMetadata contains the properties needed for retrieving an authentication token, renew it, and dinamically discover services public URLs from Keystone.
// authMetadata contains the properties needed for retrieving an authentication token, renew it, and dynamically discover services public URLs from Keystone.
authMetadata *KeystoneAuthRequest
}
@@ -137,7 +137,7 @@ func (client *Client) IsTokenValid(ctx context.Context) (bool, error) {
return true, nil
}
// RenewToken retrives another token from Keystone
// RenewToken retrieves another token from Keystone
func (client *Client) RenewToken(ctx context.Context) error {
token, err := client.authMetadata.getToken(ctx)
@@ -243,7 +243,7 @@ func (keystone *KeystoneAuthRequest) RequestClient(ctx context.Context, projectP
}
if err != nil {
return client, fmt.Errorf("scaler could not find the service URL dinamically. Either provide it in the scaler parameters or check your OpenStack configuration: %w", err)
return client, fmt.Errorf("scaler could not find the service URL dynamically. Either provide it in the scaler parameters or check your OpenStack configuration: %w", err)
}
client.URL = serviceURL
@@ -297,7 +297,7 @@ func (keystone *KeystoneAuthRequest) getToken(ctx context.Context) (string, erro
return "", fmt.Errorf("%s", string(errBody))
}
// getCatalog retrives the OpenStack catalog according to the current authorization
// getCatalog retrieves the OpenStack catalog according to the current authorization
func (keystone *KeystoneAuthRequest) getCatalog(ctx context.Context, token string) ([]service, error) {
var httpClient = kedautil.CreateHTTPClient(keystone.HTTPClientTimeout, false)
@@ -331,7 +331,7 @@ func (keystone *KeystoneAuthRequest) getCatalog(ctx context.Context, token strin
err := json.NewDecoder(resp.Body).Decode(&keystoneCatalog)
if err != nil {
return nil, fmt.Errorf("error parsing the catalog resquest response body: %w", err)
return nil, fmt.Errorf("error parsing the catalog request response body: %w", err)
}
return keystoneCatalog.Catalog, nil
@@ -361,7 +361,7 @@ func (keystone *KeystoneAuthRequest) getServiceURL(ctx context.Context, token st
}
if len(serviceCatalog) == 0 {
return "", fmt.Errorf("no catalog provided based upon the current authorization. Service URL cannot be dinamically retrieved")
return "", fmt.Errorf("no catalog provided based upon the current authorization. Service URL cannot be dynamically retrieved")
}
for _, serviceType := range serviceTypes {


@@ -105,7 +105,7 @@ func NewOpenstackMetricScaler(ctx context.Context, config *scalersconfig.ScalerC
metricsClient, err = keystoneAuth.RequestClient(ctx)
if err != nil {
logger.Error(err, "Fail to retrieve new keystone clinet for openstack metrics scaler")
logger.Error(err, "Fail to retrieve new keystone client for openstack metrics scaler")
return nil, err
}
@@ -124,7 +124,7 @@ func parseOpenstackMetricMetadata(config *scalersconfig.ScalerConfig, logger log
if val, ok := triggerMetadata["metricsURL"]; ok && val != "" {
meta.metricsURL = val
} else {
logger.Error(fmt.Errorf("no metrics url could be read"), "Error readig metricsURL")
logger.Error(fmt.Errorf("no metrics url could be read"), "Error reading metricsURL")
return nil, fmt.Errorf("no metrics url was declared")
}
@@ -145,7 +145,7 @@ func parseOpenstackMetricMetadata(config *scalersconfig.ScalerConfig, logger log
if val, ok := triggerMetadata["granularity"]; ok && val != "" {
granularity, err := strconv.Atoi(val)
if err != nil {
logger.Error(err, "Error converting granulality information %s", err.Error)
logger.Error(err, "Error converting granularity information %s", err.Error)
return nil, err
}
meta.granularity = granularity
@@ -251,7 +251,7 @@ func (s *openstackMetricScaler) Close(context.Context) error {
return nil
}
// Gets measureament from API as float64, converts it to int and return the value.
// Gets measurement from API as float64, converts it to int and returns the value.
func (s *openstackMetricScaler) readOpenstackMetrics(ctx context.Context) (float64, error) {
var metricURL = s.metadata.metricsURL
@@ -284,7 +284,7 @@ func (s *openstackMetricScaler) readOpenstackMetrics(ctx context.Context) (float
granularity := 0 // We start with granularity value 0 because the Gnocchi API, which is used by OpenStack, considers a time window, and we want to get the last value
if s.metadata.granularity <= 0 {
s.logger.Error(fmt.Errorf("granularity value is less than 1"), "Minimum accepatble value expected for ganularity is 1.")
s.logger.Error(fmt.Errorf("granularity value is less than 1"), "Minimum acceptable value expected for granularity is 1.")
return defaultValueWhenError, fmt.Errorf("granularity value is less than 1")
}


@@ -39,7 +39,7 @@ var openstackMetricAuthMetadataTestData = []parseOpenstackMetricAuthMetadataTest
{authMetadata: map[string]string{"appCredentialID": "my-app-credential-id", "appCredentialSecret": "my-app-credential-secret", "authURL": "http://localhost:5000/v3/"}},
}
var invalidOpenstackMetricMetadaTestData = []parseOpenstackMetricMetadataTestData{
var invalidOpenstackMetricMetadataTestData = []parseOpenstackMetricMetadataTestData{
// Missing metrics url
{metadata: map[string]string{"metricID": "003bb589-166d-439d-8c31-cbf098d863de", "aggregationMethod": "mean", "granularity": "300", "threshold": "1250"}},
@@ -131,15 +131,15 @@ func TestOpenstackMetricsGetMetricsForSpecScaling(t *testing.T) {
func TestOpenstackMetricsGetMetricsForSpecScalingInvalidMetaData(t *testing.T) {
testCases := []openstackMetricScalerMetricIdentifier{
{nil, &invalidOpenstackMetricMetadaTestData[0], &openstackMetricAuthMetadataTestData[0], 0, "s0-Missing metrics url"},
{nil, &invalidOpenstackMetricMetadaTestData[1], &openstackMetricAuthMetadataTestData[0], 1, "s1-Empty metrics url"},
{nil, &invalidOpenstackMetricMetadaTestData[2], &openstackMetricAuthMetadataTestData[0], 2, "s2-Missing metricID"},
{nil, &invalidOpenstackMetricMetadaTestData[3], &openstackMetricAuthMetadataTestData[0], 3, "s3-Empty metricID"},
{nil, &invalidOpenstackMetricMetadaTestData[4], &openstackMetricAuthMetadataTestData[0], 4, "s4-Missing aggregation method"},
{nil, &invalidOpenstackMetricMetadaTestData[5], &openstackMetricAuthMetadataTestData[0], 5, "s5-Missing granularity"},
{nil, &invalidOpenstackMetricMetadaTestData[6], &openstackMetricAuthMetadataTestData[0], 6, "s6-Missing threshold"},
{nil, &invalidOpenstackMetricMetadaTestData[7], &openstackMetricAuthMetadataTestData[0], 7, "s7-Missing threshold"},
{nil, &invalidOpenstackMetricMetadaTestData[8], &openstackMetricAuthMetadataTestData[0], 8, "s8-Missing threshold"},
{nil, &invalidOpenstackMetricMetadataTestData[0], &openstackMetricAuthMetadataTestData[0], 0, "s0-Missing metrics url"},
{nil, &invalidOpenstackMetricMetadataTestData[1], &openstackMetricAuthMetadataTestData[0], 1, "s1-Empty metrics url"},
{nil, &invalidOpenstackMetricMetadataTestData[2], &openstackMetricAuthMetadataTestData[0], 2, "s2-Missing metricID"},
{nil, &invalidOpenstackMetricMetadataTestData[3], &openstackMetricAuthMetadataTestData[0], 3, "s3-Empty metricID"},
{nil, &invalidOpenstackMetricMetadataTestData[4], &openstackMetricAuthMetadataTestData[0], 4, "s4-Missing aggregation method"},
{nil, &invalidOpenstackMetricMetadataTestData[5], &openstackMetricAuthMetadataTestData[0], 5, "s5-Missing granularity"},
{nil, &invalidOpenstackMetricMetadataTestData[6], &openstackMetricAuthMetadataTestData[0], 6, "s6-Missing threshold"},
{nil, &invalidOpenstackMetricMetadataTestData[7], &openstackMetricAuthMetadataTestData[0], 7, "s7-Missing threshold"},
{nil, &invalidOpenstackMetricMetadataTestData[8], &openstackMetricAuthMetadataTestData[0], 8, "s8-Missing threshold"},
}
for _, testData := range testCases {


@@ -12,7 +12,7 @@ import (
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/go-logr/logr"
_ "github.com/jackc/pgx/v5/stdlib" // PostreSQL drive required for this scaler
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver required for this scaler
v2 "k8s.io/api/autoscaling/v2"
"k8s.io/metrics/pkg/apis/external_metrics"


@@ -14,7 +14,7 @@ type parsePostgreSQLMetadataTestData struct {
metadata map[string]string
}
var testPostgreSQLMetdata = []parsePostgreSQLMetadataTestData{
var testPostgreSQLMetadata = []parsePostgreSQLMetadataTestData{
// connection with username and password
{metadata: map[string]string{"query": "test_query", "targetQueryValue": "5", "connectionFromEnv": "test_connection_string"}},
// connection with username
@@ -40,11 +40,11 @@ type postgreSQLMetricIdentifier struct {
}
var postgreSQLMetricIdentifiers = []postgreSQLMetricIdentifier{
{&testPostgreSQLMetdata[0], map[string]string{"test_connection_string": "postgresql://localhost:5432"}, nil, 0, "s0-postgresql"},
{&testPostgreSQLMetdata[1], map[string]string{"test_connection_string2": "postgresql://test@localhost"}, nil, 1, "s1-postgresql"},
{&testPostgreSQLMetadata[0], map[string]string{"test_connection_string": "postgresql://localhost:5432"}, nil, 0, "s0-postgresql"},
{&testPostgreSQLMetadata[1], map[string]string{"test_connection_string2": "postgresql://test@localhost"}, nil, 1, "s1-postgresql"},
}
func TestPosgresSQLGetMetricSpecForScaling(t *testing.T) {
func TestPostgreSQLGetMetricSpecForScaling(t *testing.T) {
for _, testData := range postgreSQLMetricIdentifiers {
meta, _, err := parsePostgreSQLMetadata(logr.Discard(), &scalersconfig.ScalerConfig{ResolvedEnv: testData.resolvedEnv, TriggerMetadata: testData.metadataTestData.metadata, AuthParams: testData.authParam, TriggerIndex: testData.scaleIndex})
if err != nil {
@@ -78,7 +78,7 @@ var testPostgreSQLConnectionstring = []postgreSQLConnectionStringTestData{
{metadata: map[string]string{"query": "test_query", "targetQueryValue": "5", "host": "host1,host2", "port": "1234", "dbName": "testDb", "userName": "user", "sslmode": "required"}, connectionString: "host=host1,host2 port=1234 user=user dbname=testDb sslmode=required password="},
}
func TestPosgresSQLConnectionStringGeneration(t *testing.T) {
func TestPostgreSQLConnectionStringGeneration(t *testing.T) {
for _, testData := range testPostgreSQLConnectionstring {
meta, _, err := parsePostgreSQLMetadata(logr.Discard(), &scalersconfig.ScalerConfig{ResolvedEnv: testData.resolvedEnv, TriggerMetadata: testData.metadata, AuthParams: testData.authParam, TriggerIndex: 0})
if err != nil {
@@ -96,7 +96,7 @@ var testPodIdentityAzureWorkloadPostgreSQLConnectionstring = []postgreSQLConnect
{metadata: map[string]string{"query": "test_query", "targetQueryValue": "5", "host": "localhost", "port": "1234", "dbName": "testDb", "userName": "user", "sslmode": "required"}, connectionString: "host=localhost port=1234 user=user dbname=testDb sslmode=required %PASSWORD%"},
}
func TestPodIdentityAzureWorkloadPosgresSQLConnectionStringGeneration(t *testing.T) {
func TestPodIdentityAzureWorkloadPostgreSQLConnectionStringGeneration(t *testing.T) {
identityID := "IDENTITY_ID_CORRESPONDING_TO_USERNAME_FIELD"
for _, testData := range testPodIdentityAzureWorkloadPostgreSQLConnectionstring {
meta, _, err := parsePostgreSQLMetadata(logr.Discard(), &scalersconfig.ScalerConfig{ResolvedEnv: testData.resolvedEnv, TriggerMetadata: testData.metadata, PodIdentity: kedav1alpha1.AuthPodIdentity{Provider: kedav1alpha1.PodIdentityProviderAzureWorkload, IdentityID: &identityID}, AuthParams: testData.authParam, TriggerIndex: 0})
@@ -153,7 +153,7 @@ var testPostgresMetadata = []parsePostgresMetadataTestData{
},
}
func TestParsePosgresSQLMetadata(t *testing.T) {
func TestParsePostgreSQLMetadata(t *testing.T) {
for _, testData := range testPostgresMetadata {
_, _, err := parsePostgreSQLMetadata(logr.Discard(), &scalersconfig.ScalerConfig{ResolvedEnv: testData.resolvedEnv, TriggerMetadata: testData.metadata, AuthParams: testData.authParams})
if err != nil && !testData.raisesError {


@@ -9,6 +9,7 @@ import (
"net"
"net/http"
"net/http/httptest"
"sync"
"testing"
"time"
@@ -64,6 +65,7 @@ var apiStub = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r
type server struct {
pb.UnimplementedMlEngineServiceServer
mu sync.Mutex
grpcSrv *grpc.Server
listener net.Listener
port int
@@ -71,12 +73,22 @@ type server struct {
}
func (s *server) GetPredictMetric(_ context.Context, _ *pb.ReqGetPredictMetric) (res *pb.ResGetPredictMetric, err error) {
s.mu.Lock()
s.val = int64(rand.Intn(30000-10000) + 10000)
predictVal := s.val
s.mu.Unlock()
return &pb.ResGetPredictMetric{
ResultMetric: s.val,
ResultMetric: predictVal,
}, nil
}
func (s *server) getPort() int {
s.mu.Lock()
defer s.mu.Unlock()
return s.port
}
func (s *server) start() <-chan error {
errCh := make(chan error, 1)
@@ -84,32 +96,37 @@ func (s *server) start() <-chan error {
defer close(errCh)
var (
err error
err error
port int
)
s.port, err = freeport.GetFreePort()
port, err = freeport.GetFreePort()
if err != nil {
log.Fatalf("Could not get free port for init mock grpc server: %s", err)
}
serverURL := fmt.Sprintf("0.0.0.0:%d", s.port)
if s.listener == nil {
var err error
s.listener, err = net.Listen("tcp4", serverURL)
s.mu.Lock()
s.port = port
s.mu.Unlock()
if err != nil {
log.Println("starting grpc server with error")
serverURL := fmt.Sprintf("0.0.0.0:%d", port)
errCh <- err
return
}
var listener net.Listener
listener, err = net.Listen("tcp4", serverURL)
if err != nil {
log.Println("starting grpc server with error")
errCh <- err
return
}
log.Printf("🚀 starting mock grpc server. On host 0.0.0.0, with port: %d", s.port)
s.mu.Lock()
s.listener = listener
s.mu.Unlock()
if err := s.grpcSrv.Serve(s.listener); err != nil {
log.Printf("🚀 starting mock grpc server. On host 0.0.0.0, with port: %d", port)
if err := s.grpcSrv.Serve(listener); err != nil {
log.Println(err, "serving grpc server with error")
errCh <- err
return
}
@@ -120,7 +137,15 @@ func (s *server) start() <-chan error {
func (s *server) stop() error {
s.grpcSrv.GracefulStop()
return libsSrv.CheckNetErrClosing(s.listener.Close())
s.mu.Lock()
listener := s.listener
s.mu.Unlock()
if listener != nil {
return libsSrv.CheckNetErrClosing(listener.Close())
}
return nil
}
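The pattern this hunk introduces: every field shared between the test goroutine and the serve goroutine (`port`, `listener`, `val`) is read and written only while holding the mutex, so the test passes under `-race`. A reduced sketch of that accessor style:

```go
package main

import (
	"fmt"
	"sync"
)

// mockServer is a cut-down stand-in for the test's server struct: shared
// fields are guarded by mu and exposed only through locking accessors.
type mockServer struct {
	mu   sync.Mutex
	port int
	val  int64
}

func (s *mockServer) setPort(p int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.port = p
}

func (s *mockServer) getPort() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.port
}

func main() {
	s := &mockServer{}
	var wg sync.WaitGroup
	for i := 1; i <= 100; i++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			s.setPort(p) // no unsynchronized access: clean under -race
		}(i)
	}
	wg.Wait()
	fmt.Println(s.getPort() >= 1 && s.getPort() <= 100) // → true
}
```

The same reasoning explains `stop()` copying the listener under the lock before closing it: the serve goroutine assigns `s.listener` concurrently.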
func runMockGrpcPredictServer() (*server, *grpc.Server) {
@@ -211,13 +236,14 @@ var predictKubeMetricIdentifiers = []predictKubeMetricIdentifier{
func TestPredictKubeGetMetricSpecForScaling(t *testing.T) {
mockPredictServer, grpcServer := runMockGrpcPredictServer()
defer func() {
_ = mockPredictServer.stop()
grpcServer.GracefulStop()
}()
mlEngineHost = "0.0.0.0"
mlEnginePort = mockPredictServer.port
mlEnginePort = mockPredictServer.getPort()
for _, testData := range predictKubeMetricIdentifiers {
mockPredictKubeScaler, err := NewPredictKubeScaler(
@@ -251,7 +277,7 @@ func TestPredictKubeGetMetrics(t *testing.T) {
}()
mlEngineHost = "0.0.0.0"
mlEnginePort = mockPredictServer.port
mlEnginePort = mockPredictServer.getPort()
for _, testData := range predictKubeMetricIdentifiers {
mockPredictKubeScaler, err := NewPredictKubeScaler(
@@ -266,8 +292,13 @@ func TestPredictKubeGetMetrics(t *testing.T) {
result, _, err := mockPredictKubeScaler.GetMetricsAndActivity(context.Background(), predictKubeMetricPrefix)
assert.NoError(t, err)
assert.Equal(t, len(result), 1)
assert.Equal(t, result[0].Value, *resource.NewMilliQuantity(mockPredictServer.val*1000, resource.DecimalSI))
t.Logf("get: %v, want: %v, predictMetric: %d", result[0].Value, *resource.NewQuantity(mockPredictServer.val, resource.DecimalSI), mockPredictServer.val)
mockPredictServer.mu.Lock()
predictVal := mockPredictServer.val
mockPredictServer.mu.Unlock()
assert.Equal(t, result[0].Value, *resource.NewMilliQuantity(predictVal*1000, resource.DecimalSI))
t.Logf("get: %v, want: %v, predictMetric: %d", result[0].Value, *resource.NewQuantity(predictVal, resource.DecimalSI), predictVal)
}
}


@@ -42,15 +42,15 @@ type prometheusMetadata struct {
PrometheusAuth *authentication.Config `keda:"optional"`
ServerAddress string `keda:"name=serverAddress, order=triggerMetadata"`
Query string `keda:"name=query, order=triggerMetadata"`
QueryParameters map[string]string `keda:"name=queryParameters, order=triggerMetadata, optional"`
QueryParameters map[string]string `keda:"name=queryParameters, order=triggerMetadata, optional"`
Threshold float64 `keda:"name=threshold, order=triggerMetadata"`
ActivationThreshold float64 `keda:"name=activationThreshold, order=triggerMetadata, optional"`
Namespace string `keda:"name=namespace, order=triggerMetadata, optional"`
CustomHeaders map[string]string `keda:"name=customHeaders, order=triggerMetadata, optional"`
IgnoreNullValues bool `keda:"name=ignoreNullValues, order=triggerMetadata, default=true"`
UnsafeSSL bool `keda:"name=unsafeSsl, order=triggerMetadata, optional"`
AwsRegion string `keda:"name=awsRegion, order=triggerMetadata;authParams, optional"`
Timeout int `keda:"name=timeout, order=triggerMetadata, optional"` // custom HTTP client timeout
ActivationThreshold float64 `keda:"name=activationThreshold, order=triggerMetadata, optional"`
Namespace string `keda:"name=namespace, order=triggerMetadata, optional"`
CustomHeaders map[string]string `keda:"name=customHeaders, order=triggerMetadata, optional"`
IgnoreNullValues bool `keda:"name=ignoreNullValues, order=triggerMetadata, default=true"`
UnsafeSSL bool `keda:"name=unsafeSsl, order=triggerMetadata, optional"`
AwsRegion string `keda:"name=awsRegion, order=triggerMetadata;authParams, optional"`
Timeout time.Duration `keda:"name=timeout, order=triggerMetadata, optional"` // custom HTTP client timeout
}
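A toy illustration of how `keda:` struct tags like those above can drive parsing. KEDA's real `TypedConfig` handles orders, maps, durations, and much more; this sketch covers only string and bool fields with `name=`/`default=` from a flat map, and the types here are invented for the example:

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
	"strings"
)

type demoMeta struct {
	ServerAddress    string `keda:"name=serverAddress"`
	IgnoreNullValues bool   `keda:"name=ignoreNullValues, default=true"`
}

// parseDemo walks the struct's keda tags, pulls each named key from the map,
// and falls back to the tag's default when the key is absent.
func parseDemo(meta map[string]string, out *demoMeta) error {
	v := reflect.ValueOf(out).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		var name, def string
		for _, part := range strings.Split(t.Field(i).Tag.Get("keda"), ",") {
			kv := strings.SplitN(strings.TrimSpace(part), "=", 2)
			if len(kv) != 2 {
				continue
			}
			switch kv[0] {
			case "name":
				name = kv[1]
			case "default":
				def = kv[1]
			}
		}
		raw, ok := meta[name]
		if !ok {
			raw = def
		}
		switch v.Field(i).Kind() {
		case reflect.String:
			v.Field(i).SetString(raw)
		case reflect.Bool:
			b, err := strconv.ParseBool(raw)
			if err != nil {
				return err
			}
			v.Field(i).SetBool(b)
		}
	}
	return nil
}

func main() {
	var m demoMeta
	if err := parseDemo(map[string]string{"serverAddress": "http://prom:9090"}, &m); err != nil {
		panic(err)
	}
	fmt.Println(m.ServerAddress, m.IgnoreNullValues) // → http://prom:9090 true
}
```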
type promQueryResult struct {
@@ -82,7 +82,7 @@ func NewPrometheusScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
// handle HTTP client timeout
httpClientTimeout := config.GlobalHTTPTimeout
if meta.Timeout > 0 {
httpClientTimeout = time.Duration(meta.Timeout) * time.Millisecond
httpClientTimeout = meta.Timeout * time.Millisecond
}
httpClient := kedautil.CreateHTTPClient(httpClientTimeout, meta.UnsafeSSL)
@@ -155,11 +155,6 @@ func parsePrometheusMetadata(config *scalersconfig.ScalerConfig) (meta *promethe
return nil, err
}
// validate the timeout
if meta.Timeout < 0 {
return nil, fmt.Errorf("timeout must be greater than 0: %d", meta.Timeout)
}
return meta, nil
}
@@ -289,7 +284,7 @@ func (s *prometheusScaler) ExecutePromQuery(ctx context.Context) (float64, error
if s.metadata.IgnoreNullValues {
return 0, nil
}
err := fmt.Errorf("promtheus query returns %f", v)
err := fmt.Errorf("prometheus query returns %f", v)
s.logger.Error(err, "Error converting prometheus value")
return -1, err
}


@@ -3,11 +3,9 @@ package scalers
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
@@ -22,26 +20,6 @@ import (
kedautil "github.com/kedacore/keda/v2/pkg/util"
)
type pulsarScaler struct {
metadata pulsarMetadata
httpClient *http.Client
logger logr.Logger
}
type pulsarMetadata struct {
adminURL string
topic string
subscription string
msgBacklogThreshold int64
activationMsgBacklogThreshold int64
pulsarAuth *authentication.AuthMeta
statsURL string
metricName string
triggerIndex int
}
const (
pulsarMetricType = "External"
defaultMsgBacklogThreshold = 10
@@ -50,6 +28,33 @@ const (
pulsarAuthModeHeader = "X-Pulsar-Auth-Method-Name"
)
type pulsarScaler struct {
metadata *pulsarMetadata
httpClient *http.Client
logger logr.Logger
}
type pulsarMetadata struct {
AdminURL string `keda:"name=adminURL, order=triggerMetadata;resolvedEnv"`
Topic string `keda:"name=topic, order=triggerMetadata;resolvedEnv"`
Subscription string `keda:"name=subscription, order=triggerMetadata;resolvedEnv"`
MsgBacklogThreshold int64 `keda:"name=msgBacklogThreshold, order=triggerMetadata, default=10"`
ActivationMsgBacklogThreshold int64 `keda:"name=activationMsgBacklogThreshold, order=triggerMetadata, default=0"`
IsPartitionedTopic bool `keda:"name=isPartitionedTopic, order=triggerMetadata, default=false"`
TLS string `keda:"name=tls, order=triggerMetadata, optional"`
// OAuth fields
OauthTokenURI string `keda:"name=oauthTokenURI, order=triggerMetadata, optional"`
Scope string `keda:"name=scope, order=triggerMetadata, optional"`
ClientID string `keda:"name=clientID, order=triggerMetadata, optional"`
EndpointParams string `keda:"name=EndpointParams, order=triggerMetadata, optional"`
pulsarAuth *authentication.AuthMeta
statsURL string
metricName string
triggerIndex int
}
type pulsarSubscription struct {
Msgrateout float64 `json:"msgRateOut"`
Msgthroughputout float64 `json:"msgThroughputOut"`
@@ -95,10 +100,80 @@ type pulsarStats struct {
Deduplicationstatus string `json:"deduplicationStatus"`
}
// buildStatsURL constructs the stats URL based on topic and partitioned flag
func (m *pulsarMetadata) buildStatsURL() {
topic := strings.ReplaceAll(m.Topic, "persistent://", "")
if m.IsPartitionedTopic {
m.statsURL = m.AdminURL + "/admin/v2/persistent/" + topic + "/partitioned-stats"
} else {
m.statsURL = m.AdminURL + "/admin/v2/persistent/" + topic + "/stats"
}
}
// buildMetricName constructs the metric name
func (m *pulsarMetadata) buildMetricName() {
m.metricName = fmt.Sprintf("%s-%s-%s", "pulsar", m.Topic, m.Subscription)
}
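The URL construction in `buildStatsURL` can be exercised standalone; a sketch mirroring its logic as a pure function (the real method mutates the metadata instead of returning):

```go
package main

import (
	"fmt"
	"strings"
)

// statsURL strips the persistent:// scheme from the topic and picks the
// stats or partitioned-stats admin endpoint, matching buildStatsURL above.
func statsURL(adminURL, topic string, partitioned bool) string {
	topic = strings.ReplaceAll(topic, "persistent://", "")
	if partitioned {
		return adminURL + "/admin/v2/persistent/" + topic + "/partitioned-stats"
	}
	return adminURL + "/admin/v2/persistent/" + topic + "/stats"
}

func main() {
	fmt.Println(statsURL("http://pulsar:8080", "persistent://tenant/ns/topic", false))
	// → http://pulsar:8080/admin/v2/persistent/tenant/ns/topic/stats
	fmt.Println(statsURL("http://pulsar:8080", "tenant/ns/topic", true))
	// → http://pulsar:8080/admin/v2/persistent/tenant/ns/topic/partitioned-stats
}
```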
// handleBackwardsCompatibility handles backwards compatibility for TLS configuration
func (m *pulsarMetadata) handleBackwardsCompatibility(config *scalersconfig.ScalerConfig) {
// For backwards compatibility, we need to map "tls: enable" to auth modes
if m.TLS == enable && (config.AuthParams["cert"] != "" || config.AuthParams["key"] != "") {
if authModes, authModesOk := config.TriggerMetadata[authentication.AuthModesKey]; authModesOk {
config.TriggerMetadata[authentication.AuthModesKey] = fmt.Sprintf("%s,%s", authModes, authentication.TLSAuthType)
} else {
config.TriggerMetadata[authentication.AuthModesKey] = string(authentication.TLSAuthType)
}
}
}
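The backwards-compatibility mapping appends the TLS auth mode to any existing `authModes` value, or sets it when absent. A sketch using plain strings, where `"authModes"` and `"tls"` stand in for the `authentication` package constants referenced above:

```go
package main

import "fmt"

// appendAuthMode adds mode to the comma-separated authModes entry in the
// trigger metadata, creating the entry if it does not exist.
func appendAuthMode(triggerMetadata map[string]string, mode string) {
	if existing, ok := triggerMetadata["authModes"]; ok {
		triggerMetadata["authModes"] = fmt.Sprintf("%s,%s", existing, mode)
	} else {
		triggerMetadata["authModes"] = mode
	}
}

func main() {
	md := map[string]string{"authModes": "basic"}
	appendAuthMode(md, "tls")
	fmt.Println(md["authModes"]) // → basic,tls

	md2 := map[string]string{}
	appendAuthMode(md2, "tls")
	fmt.Println(md2["authModes"]) // → tls
}
```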
// setupAuthentication configures authentication for the pulsar scaler
func (m *pulsarMetadata) setupAuthentication(config *scalersconfig.ScalerConfig) error {
auth, err := authentication.GetAuthConfigs(config.TriggerMetadata, config.AuthParams)
if err != nil {
return fmt.Errorf("error parsing authentication: %w", err)
}
if auth != nil && auth.EnableOAuth {
if err := m.configureOAuth(auth); err != nil {
return err
}
}
m.pulsarAuth = auth
return nil
}
// configureOAuth configures OAuth settings
func (m *pulsarMetadata) configureOAuth(auth *authentication.AuthMeta) error {
if auth.OauthTokenURI == "" {
auth.OauthTokenURI = m.OauthTokenURI
}
if auth.Scopes == nil {
auth.Scopes = authentication.ParseScope(m.Scope)
}
if auth.ClientID == "" {
auth.ClientID = m.ClientID
}
// client_secret is not required for mTLS OAuth (RFC 8705)
// set secret to random string to work around the Go OAuth lib
if auth.ClientSecret == "" {
auth.ClientSecret = time.Now().String()
}
if auth.EndpointParams == nil {
v, err := authentication.ParseEndpointParams(m.EndpointParams)
if err != nil {
return fmt.Errorf("error parsing EndpointParams: %s", m.EndpointParams)
}
auth.EndpointParams = v
}
return nil
}
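`authentication.ParseEndpointParams` itself is not shown here; if the input is a query-string such as `audience=pulsar&grant_type=client_credentials`, parsing could look like the following (an assumption about the format, not the package's actual behavior):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseEndpointParams is a sketch: it treats the configured string as
// URL query syntax and returns nil for an empty input.
func parseEndpointParams(s string) (url.Values, error) {
	if s == "" {
		return nil, nil
	}
	return url.ParseQuery(s)
}

func main() {
	v, err := parseEndpointParams("audience=pulsar&scope=read")
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Get("audience"), v.Get("scope")) // → pulsar read
}
```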
// NewPulsarScaler creates a new PulsarScaler
func NewPulsarScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
logger := InitializeLogger(config, "pulsar_scaler")
pulsarMetadata, err := parsePulsarMetadata(config, logger)
pulsarMetadata, err := parsePulsarMetadata(config)
if err != nil {
return nil, fmt.Errorf("error parsing pulsar metadata: %w", err)
}
@@ -118,7 +193,7 @@ func NewPulsarScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
// The pulsar broker redirects HTTP calls to other brokers and expects the Authorization header
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
if len(via) != 0 && via[0].Response.StatusCode == http.StatusTemporaryRedirect {
addAuthHeaders(req, &pulsarMetadata)
addAuthHeaders(req, pulsarMetadata)
}
return nil
}
@@ -132,103 +207,21 @@ func NewPulsarScaler(config *scalersconfig.ScalerConfig) (Scaler, error) {
}, nil
}
func parsePulsarMetadata(config *scalersconfig.ScalerConfig, _ logr.Logger) (pulsarMetadata, error) {
meta := pulsarMetadata{}
switch {
case config.TriggerMetadata["adminURLFromEnv"] != "":
meta.adminURL = config.ResolvedEnv[config.TriggerMetadata["adminURLFromEnv"]]
case config.TriggerMetadata["adminURL"] != "":
meta.adminURL = config.TriggerMetadata["adminURL"]
default:
return meta, errors.New("no adminURL given")
func parsePulsarMetadata(config *scalersconfig.ScalerConfig) (*pulsarMetadata, error) {
meta := &pulsarMetadata{triggerIndex: config.TriggerIndex}
if err := config.TypedConfig(meta); err != nil {
return nil, fmt.Errorf("error parsing pulsar metadata: %w", err)
}
switch {
case config.TriggerMetadata["topicFromEnv"] != "":
meta.topic = config.ResolvedEnv[config.TriggerMetadata["topicFromEnv"]]
case config.TriggerMetadata["topic"] != "":
meta.topic = config.TriggerMetadata["topic"]
default:
return meta, errors.New("no topic given")
meta.buildStatsURL()
meta.buildMetricName()
meta.handleBackwardsCompatibility(config)
if err := meta.setupAuthentication(config); err != nil {
return nil, err
}
topic := strings.ReplaceAll(meta.topic, "persistent://", "")
if config.TriggerMetadata["isPartitionedTopic"] == stringTrue {
meta.statsURL = meta.adminURL + "/admin/v2/persistent/" + topic + "/partitioned-stats"
} else {
meta.statsURL = meta.adminURL + "/admin/v2/persistent/" + topic + "/stats"
}
switch {
case config.TriggerMetadata["subscriptionFromEnv"] != "":
meta.subscription = config.ResolvedEnv[config.TriggerMetadata["subscriptionFromEnv"]]
case config.TriggerMetadata["subscription"] != "":
meta.subscription = config.TriggerMetadata["subscription"]
default:
return meta, errors.New("no subscription given")
}
meta.metricName = fmt.Sprintf("%s-%s-%s", "pulsar", meta.topic, meta.subscription)
meta.activationMsgBacklogThreshold = 0
if val, ok := config.TriggerMetadata["activationMsgBacklogThreshold"]; ok {
activationMsgBacklogThreshold, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return meta, fmt.Errorf("activationMsgBacklogThreshold parsing error %w", err)
}
meta.activationMsgBacklogThreshold = activationMsgBacklogThreshold
}
meta.msgBacklogThreshold = defaultMsgBacklogThreshold
if val, ok := config.TriggerMetadata["msgBacklogThreshold"]; ok {
t, err := strconv.ParseInt(val, 10, 64)
if err != nil {
return meta, fmt.Errorf("error parsing %s: %w", "msgBacklogThreshold", err)
}
meta.msgBacklogThreshold = t
}
// For backwards compatibility, we need to map "tls: enable" to
if tls, ok := config.TriggerMetadata["tls"]; ok {
if tls == enable && (config.AuthParams["cert"] != "" || config.AuthParams["key"] != "") {
if authModes, authModesOk := config.TriggerMetadata[authentication.AuthModesKey]; authModesOk {
config.TriggerMetadata[authentication.AuthModesKey] = fmt.Sprintf("%s,%s", authModes, authentication.TLSAuthType)
} else {
config.TriggerMetadata[authentication.AuthModesKey] = string(authentication.TLSAuthType)
}
}
}
auth, err := authentication.GetAuthConfigs(config.TriggerMetadata, config.AuthParams)
if err != nil {
return meta, fmt.Errorf("error parsing %s: %w", "msgBacklogThreshold", err)
}
if auth != nil && auth.EnableOAuth {
if auth.OauthTokenURI == "" {
auth.OauthTokenURI = config.TriggerMetadata["oauthTokenURI"]
}
if auth.Scopes == nil {
auth.Scopes = authentication.ParseScope(config.TriggerMetadata["scope"])
}
if auth.ClientID == "" {
auth.ClientID = config.TriggerMetadata["clientID"]
}
// client_secret is not required for mtls OAuth(RFC8705)
// set secret to random string to work around the Go OAuth lib
if auth.ClientSecret == "" {
auth.ClientSecret = time.Now().String()
}
if auth.EndpointParams == nil {
v, err := authentication.ParseEndpointParams(config.TriggerMetadata["EndpointParams"])
if err != nil {
return meta, fmt.Errorf("error parsing EndpointParams: %s", config.TriggerMetadata["EndpointParams"])
}
auth.EndpointParams = v
}
}
meta.pulsarAuth = auth
meta.triggerIndex = config.TriggerIndex
return meta, nil
}
@@ -251,7 +244,7 @@ func (s *pulsarScaler) GetStats(ctx context.Context) (*pulsarStats, error) {
}
client = config.Client(context.Background())
}
addAuthHeaders(req, &s.metadata)
addAuthHeaders(req, s.metadata)
res, err := client.Do(req)
if err != nil {
@@ -291,7 +284,7 @@ func (s *pulsarScaler) getMsgBackLog(ctx context.Context) (int64, bool, error) {
return 0, false, nil
}
v, found := stats.Subscriptions[s.metadata.subscription]
v, found := stats.Subscriptions[s.metadata.Subscription]
return v.Msgbacklog, found, nil
}
@@ -304,16 +297,16 @@ func (s *pulsarScaler) GetMetricsAndActivity(ctx context.Context, metricName str
}
if !found {
return nil, false, fmt.Errorf("have not subscription found! %s", s.metadata.subscription)
return nil, false, fmt.Errorf("subscription %s not found", s.metadata.Subscription)
}
metric := GenerateMetricInMili(metricName, float64(msgBacklog))
return []external_metrics.ExternalMetricValue{metric}, msgBacklog > s.metadata.activationMsgBacklogThreshold, nil
return []external_metrics.ExternalMetricValue{metric}, msgBacklog > s.metadata.ActivationMsgBacklogThreshold, nil
}
func (s *pulsarScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
targetMetricValue := resource.NewQuantity(s.metadata.msgBacklogThreshold, resource.DecimalSI)
targetMetricValue := resource.NewQuantity(s.metadata.MsgBacklogThreshold, resource.DecimalSI)
externalMetric := &v2.ExternalMetricSource{
Metric: v2.MetricIdentifier{


@@ -116,8 +116,7 @@ var pulsarMetricIdentifiers = []pulsarMetricIdentifier{
func TestParsePulsarMetadata(t *testing.T) {
for _, testData := range parsePulsarMetadataTestDataset {
logger := InitializeLogger(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: validPulsarWithAuthParams}, "test_pulsar_scaler")
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: validPulsarWithAuthParams}, logger)
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: validPulsarWithAuthParams})
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
@@ -126,41 +125,39 @@ func TestParsePulsarMetadata(t *testing.T) {
t.Error("Expected error but got success")
}
if meta.adminURL != testData.adminURL {
t.Errorf("Expected adminURL %s but got %s\n", testData.adminURL, meta.adminURL)
}
if !testData.isError && meta != nil {
if meta.AdminURL != testData.adminURL {
t.Errorf("Expected adminURL %s but got %s\n", testData.adminURL, meta.AdminURL)
}
if !testData.isError {
if testData.isPartitionedTopic {
if !strings.HasSuffix(meta.statsURL, "/partitioned-stats") {
t.Errorf("Expected statsURL to end with /partitioned-stats but got %s\n", meta.statsURL)
}
} else {
} else if meta.statsURL != "" {
if !strings.HasSuffix(meta.statsURL, "/stats") {
t.Errorf("Expected statsURL to end with /stats but got %s\n", meta.statsURL)
}
}
}
if meta.topic != testData.topic {
t.Errorf("Expected topic %s but got %s\n", testData.topic, meta.topic)
}
if meta.subscription != testData.subscription {
t.Errorf("Expected subscription %s but got %s\n", testData.subscription, meta.subscription)
}
var testDataMsgBacklogThreshold int64
if val, ok := testData.metadata["msgBacklogThreshold"]; ok {
testDataMsgBacklogThreshold, err = strconv.ParseInt(val, 10, 64)
if err != nil {
t.Errorf("error parseing msgBacklogThreshold: %v", err)
if meta.Topic != testData.topic {
t.Errorf("Expected topic %s but got %s\n", testData.topic, meta.Topic)
}
if meta.Subscription != testData.subscription {
t.Errorf("Expected subscription %s but got %s\n", testData.subscription, meta.Subscription)
}
var testDataMsgBacklogThreshold int64 = defaultMsgBacklogThreshold
if val, ok := testData.metadata["msgBacklogThreshold"]; ok {
testDataMsgBacklogThreshold, err = strconv.ParseInt(val, 10, 64)
if err != nil {
t.Errorf("error parsing msgBacklogThreshold: %v", err)
}
}
if meta.MsgBacklogThreshold != testDataMsgBacklogThreshold {
t.Errorf("Expected msgBacklogThreshold %d but got %d\n", testDataMsgBacklogThreshold, meta.MsgBacklogThreshold)
}
} else {
testDataMsgBacklogThreshold = defaultMsgBacklogThreshold
}
if meta.msgBacklogThreshold != testDataMsgBacklogThreshold && testDataMsgBacklogThreshold != defaultMsgBacklogThreshold {
t.Errorf("Expected msgBacklogThreshold %s but got %d\n", testData.metadata["msgBacklogThreshold"], meta.msgBacklogThreshold)
}
authParams := validPulsarWithoutAuthParams
@@ -168,7 +165,7 @@ func TestParsePulsarMetadata(t *testing.T) {
authParams = validPulsarWithAuthParams
}
meta, err = parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: authParams}, logger)
meta, err = parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadata, AuthParams: authParams})
if err != nil && !testData.isError {
t.Error("Expected success but got error", err)
@@ -177,16 +174,18 @@ func TestParsePulsarMetadata(t *testing.T) {
t.Error("Expected error but got success")
}
if meta.adminURL != testData.adminURL {
t.Errorf("Expected adminURL %s but got %s\n", testData.adminURL, meta.adminURL)
}
if !testData.isError && meta != nil {
if meta.AdminURL != testData.adminURL {
t.Errorf("Expected adminURL %s but got %s\n", testData.adminURL, meta.AdminURL)
}
if meta.topic != testData.topic {
t.Errorf("Expected topic %s but got %s\n", testData.topic, meta.topic)
}
if meta.Topic != testData.topic {
t.Errorf("Expected topic %s but got %s\n", testData.topic, meta.Topic)
}
if meta.subscription != testData.subscription {
t.Errorf("Expected subscription %s but got %s\n", testData.subscription, meta.subscription)
if meta.Subscription != testData.subscription {
t.Errorf("Expected subscription %s but got %s\n", testData.subscription, meta.Subscription)
}
}
}
}
@@ -207,8 +206,7 @@ func compareScope(scopes []string, scopeStr string) bool {
func TestPulsarAuthParams(t *testing.T) {
for _, testData := range parsePulsarMetadataTestAuthTLSDataset {
logger := InitializeLogger(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams}, "test_pulsar_scaler")
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams}, logger)
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams})
if err != nil && !testData.isError {
t.Error("Expected success but got error", testData.authParams, err)
@@ -217,7 +215,7 @@ func TestPulsarAuthParams(t *testing.T) {
t.Error("Expected error but got success")
}
if meta.pulsarAuth == nil {
if meta == nil || meta.pulsarAuth == nil {
t.Log("meta.pulsarAuth is nil, skipping rest of validation of", testData)
continue
}
@@ -267,8 +265,7 @@ func TestPulsarAuthParams(t *testing.T) {
func TestPulsarOAuthParams(t *testing.T) {
for _, testData := range parsePulsarMetadataTestAuthTLSDataset {
logger := InitializeLogger(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams}, "test_pulsar_scaler")
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams}, logger)
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.triggerMetadata, AuthParams: testData.authParams})
if err != nil && !testData.isError {
t.Error("Expected success but got error", testData.authParams, err)
@@ -277,7 +274,7 @@ func TestPulsarOAuthParams(t *testing.T) {
t.Error("Expected error but got success")
}
if meta.pulsarAuth == nil {
if meta == nil || meta.pulsarAuth == nil {
t.Log("meta.pulsarAuth is nil, skipping rest of validation of", testData)
continue
}
@@ -322,8 +319,7 @@ func TestPulsarOAuthParams(t *testing.T) {
func TestPulsarGetMetricSpecForScaling(t *testing.T) {
for _, testData := range pulsarMetricIdentifiers {
logger := InitializeLogger(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: validWithAuthParams}, "test_pulsar_scaler")
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: validWithAuthParams}, logger)
meta, err := parsePulsarMetadata(&scalersconfig.ScalerConfig{TriggerMetadata: testData.metadataTestData.metadata, AuthParams: validPulsarWithAuthParams})
if err != nil {
if testData.metadataTestData.isError {
continue

Some files were not shown because too many files have changed in this diff.