Compare commits

...

20 Commits

Author SHA1 Message Date
Tristan Smagghe d032b8b4f1
feat: add fallbackTargetRef to CRD and required changes (#1280)
Signed-off-by: yyewolf <yyewolf@gmail.com>
2025-08-07 11:05:24 +02:00
Jorge Turrado Ferrero 9afe55b5fc
chore: Bump go version and deps (#1305)
* chore: Bump go version and deps

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* Bump golangci

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

* Bump golangci

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
2025-07-25 11:34:31 +00:00
Alexander Pykavy bf355649c6
Implement HTTPScaledObject scoped timeout (#1285)
* Implement HTTPScaledObject scoped timeout

Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>

* Add tests for HTTPScaledObject scoped timeout

Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>

---------

Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>
2025-07-11 07:41:43 +00:00
ilia-medvedev-codefresh 29a6c2b509
feat(interceptor): Add possibility to skip tls verification for upstreams (#1307)
* Feat(interceptor): Add possibility to skip tls verification for upstreams

Signed-off-by: Ilia Medvedev <ilia.medvedev@codefresh.io>

* Update readme

Signed-off-by: Ilia Medvedev <ilia.medvedev@codefresh.io>

* Update CHANGELOG.md

Signed-off-by: ilia-medvedev-codefresh <ilia.medvedev@codefresh.io>

* run goimports

Signed-off-by: Ilia Medvedev <ilia.medvedev@codefresh.io>

---------

Signed-off-by: Ilia Medvedev <ilia.medvedev@codefresh.io>
Signed-off-by: ilia-medvedev-codefresh <ilia.medvedev@codefresh.io>
2025-07-10 13:34:49 +00:00
Jirka Kremser dc863c6fcd
Add a way to turn off the profiling for all three http add-on components (#1308)
* Add a way to turn off the profiling for all three http add-on components

Signed-off-by: Jirka Kremser <jiri.kremser@gmail.com>

* linter - The linter 'exportloopref' is deprecated (since v1.60.2) due to: Since Go1.22 (loopvar) this linter is no longer relevant. Replaced by copyloopvar.

Signed-off-by: Jirka Kremser <jiri.kremser@gmail.com>

---------

Signed-off-by: Jirka Kremser <jiri.kremser@gmail.com>
2025-06-17 13:32:02 +02:00
Zbynek Roubalik 6a6adfb7ac
scaler: allow using HSO and SO with different names (#1299)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-21 19:12:36 +00:00
Zbynek Roubalik 2317f75346
e2e: fix nginx installation via helm (#1300)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-21 14:11:54 +00:00
Zbynek Roubalik fe41713ec3
chore: get the correct HTTPScaledObject reference in the scaler (#1296)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-20 08:00:21 +02:00
Zbynek Roubalik 17b2af021d
chore: reformat scaler code to match KEDA go style (#1295)
Signed-off-by: Zbynek Roubalik <zroubalik@gmail.com>
2025-05-19 21:53:26 +02:00
bhussain91 d2bed33270
feat: Add support for tracing (#1021)
Signed-off-by: Bilal Hussain <bilal.hussain@10xbanking.com>
Signed-off-by: bhussain91 <161487948+bhussain91@users.noreply.github.com>
2025-05-19 10:51:45 +02:00
Jan Wozniak d891e6e5bd
e2e tests: use ghcr registry for otel images (#1284)
Signed-off-by: Jan Wozniak <wozniak.jan@gmail.com>
2025-04-26 18:02:59 +02:00
dependabot[bot] 30e1694baf
chore(deps): bump the all-updates group with 3 updates (#1275)
Bumps the all-updates group with 3 updates: [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo), [github.com/onsi/gomega](https://github.com/onsi/gomega) and google.golang.org/protobuf.


Updates `github.com/onsi/ginkgo/v2` from 2.23.0 to 2.23.3
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.23.0...v2.23.3)

Updates `github.com/onsi/gomega` from 1.36.2 to 1.36.3
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.36.2...v1.36.3)

Updates `google.golang.org/protobuf` from 1.36.5 to 1.36.6

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
- dependency-name: github.com/onsi/gomega
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-26 23:01:25 +00:00
Gilbert Becker 8f84195862
Change path in walkthrough call to match httpscaledobject from example (#1266)
Signed-off-by: Gilbert <gilbertbckrgithub@gmail.com>
2025-03-26 13:39:11 +01:00
dependabot[bot] 46884e237d
chore(deps): bump github.com/prometheus/common in the all-updates group (#1270)
Bumps the all-updates group with 1 update: [github.com/prometheus/common](https://github.com/prometheus/common).


Updates `github.com/prometheus/common` from 0.62.0 to 0.63.0
- [Release notes](https://github.com/prometheus/common/releases)
- [Changelog](https://github.com/prometheus/common/blob/main/RELEASE.md)
- [Commits](https://github.com/prometheus/common/compare/v0.62.0...v0.63.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-23 01:39:19 +00:00
dependabot[bot] c34fc522fa
chore(deps): bump the all-updates group with 2 updates (#1271)
Bumps the all-updates group with 2 updates: [docker/login-action](https://github.com/docker/login-action) and [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action).


Updates `docker/login-action` from 3.3.0 to 3.4.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...74a5d14239)

Updates `golangci/golangci-lint-action` from 6.5.0 to 6.5.1
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](2226d7cb06...4696ba8bab)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 23:29:30 +00:00
dependabot[bot] ee764f97ea
chore(deps): bump github.com/expr-lang/expr from 1.16.9 to 1.17.0 (#1272)
Bumps [github.com/expr-lang/expr](https://github.com/expr-lang/expr) from 1.16.9 to 1.17.0.
- [Release notes](https://github.com/expr-lang/expr/releases)
- [Commits](https://github.com/expr-lang/expr/compare/v1.16.9...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/expr-lang/expr
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 22:59:51 +01:00
dependabot[bot] a99deeeb8b
chore(deps): bump golang.org/x/net from 0.35.0 to 0.36.0 (#1269)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.35.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-14 23:12:53 +01:00
Jorge Turrado Ferrero 30e3ecc2ea
chore(deps): bump the all-updates group across 1 directory with 11 up… (#1268)
* chore(deps): bump the all-updates group across 1 directory with 11 updates

Bumps the all-updates group with 7 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [github.com/google/go-cmp](https://github.com/google/go-cmp) | `0.6.0` | `0.7.0` |
| [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) | `2.22.2` | `2.23.0` |
| [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) | `1.34.0` | `1.35.0` |
| [go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp](https://github.com/open-telemetry/opentelemetry-go) | `1.34.0` | `1.35.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.11.0` | `0.12.0` |
| [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) | `1.20.5` | `1.21.1` |
| [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go) | `0.56.0` | `0.57.0` |



Updates `github.com/google/go-cmp` from 0.6.0 to 0.7.0
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.6.0...v0.7.0)

Updates `github.com/onsi/ginkgo/v2` from 2.22.2 to 2.23.0
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.22.2...v2.23.0)

Updates `go.opentelemetry.io/otel` from 1.34.0 to 1.35.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

Updates `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` from 1.34.0 to 1.35.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

Updates `go.opentelemetry.io/otel/sdk` from 1.34.0 to 1.35.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

Updates `golang.org/x/sync` from 0.11.0 to 0.12.0
- [Commits](https://github.com/golang/sync/compare/v0.11.0...v0.12.0)

Updates `google.golang.org/grpc` from 1.70.0 to 1.71.0
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.70.0...v1.71.0)

Updates `github.com/prometheus/client_golang` from 1.20.5 to 1.21.1
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.20.5...v1.21.1)

Updates `go.opentelemetry.io/otel/exporters/prometheus` from 0.56.0 to 0.57.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/exporters/prometheus/v0.56.0...exporters/prometheus/v0.57.0)

Updates `go.opentelemetry.io/otel/metric` from 1.34.0 to 1.35.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

Updates `go.opentelemetry.io/otel/sdk/metric` from 1.34.0 to 1.35.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel/exporters/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: go.opentelemetry.io/otel/sdk/metric
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix the test

Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jorge Turrado <jorge_turrado@hotmail.es>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 13:09:30 +01:00
Zahari Aleksiev 850678c13a
Update ADOPTERS.md (#1265)
Signed-off-by: Zahari Aleksiev <67169256+rd-zahari-aleksiev@users.noreply.github.com>
2025-03-09 18:44:49 +00:00
dependabot[bot] 616cab02d1
chore(deps): bump the all-updates group with 4 updates (#1263)
Bumps the all-updates group with 4 updates: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action), [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer), [Azure/setup-helm](https://github.com/azure/setup-helm) and [actions/cache](https://github.com/actions/cache).


Updates `docker/setup-buildx-action` from 3.9.0 to 3.10.0
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](f7ce87c1d6...b5ca514318)

Updates `sigstore/cosign-installer` from 3.8.0 to 3.8.1
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](c56c2d3e59...d7d6bc7722)

Updates `Azure/setup-helm` from 4.2.0 to 4.3.0
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md)
- [Commits](fe7b79cd5e...b9e51907a0)

Updates `actions/cache` from 4.2.0 to 4.2.2
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](1bd1e32a3b...d4323d4df1)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
- dependency-name: Azure/setup-helm
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-updates
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 21:28:57 +01:00
53 changed files with 1990 additions and 557 deletions

View File

@ -3,7 +3,7 @@
# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.
#-------------------------------------------------------------------------------------------------------------
FROM golang:1.23.6
FROM golang:1.24.3
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
@ -55,7 +55,7 @@ RUN apt-get update \
&& go install honnef.co/go/tools/cmd/staticcheck@latest \
&& go install golang.org/x/tools/gopls@latest \
# Install golangci-lint
&& curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.57.2 \
&& curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v2.1.0 \
#
# Create a non-root user to use if preferred - see https://aka.ms/vscode-remote/containers/non-root-user.
&& groupadd --gid $USER_GID $USERNAME \

View File

@ -16,7 +16,7 @@ jobs:
packages: write
id-token: write # needed for signing the images with GitHub OIDC Token **not production ready**
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@ -24,7 +24,7 @@ jobs:
run: git config --global --add safe.directory "$GITHUB_WORKSPACE"
- name: Login to GHCR
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
# Username used to log in to a Docker registry. If not set then no login will occur
username: ${{ github.repository_owner }}
@ -34,7 +34,7 @@ jobs:
registry: ghcr.io
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f7ce87c1d6bead3e36075b2ce75da1f6cc28aaca # v3.9.0
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Publish on GitHub Container Registry
run: make publish-multiarch
@ -43,7 +43,7 @@ jobs:
# https://github.com/sigstore/cosign-installer
- name: Install Cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3.8.0
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
- name: Check Cosign install!
run: cosign version

View File

@ -12,7 +12,7 @@ jobs:
packages: write
id-token: write # needed for signing the images with GitHub OIDC Token **not production ready**
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@ -30,7 +30,7 @@ jobs:
VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Login to GHCR
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
# Username used to log in to a Docker registry. If not set then no login will occur
username: ${{ github.repository_owner }}
@ -40,7 +40,7 @@ jobs:
registry: ghcr.io
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f7ce87c1d6bead3e36075b2ce75da1f6cc28aaca # v3.9.0
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Publish on GitHub Container Registry
run: make publish-multiarch
@ -49,7 +49,7 @@ jobs:
# https://github.com/sigstore/cosign-installer
- name: Install Cosign
uses: sigstore/cosign-installer@c56c2d3e59e4281cc41dea2217323ba5694b171e # v3.8.0
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
- name: Check Cosign install!
run: cosign version

View File

@ -37,12 +37,12 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v4.1
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: "1.23"
go-version: "1.24"
- name: Helm install
uses: Azure/setup-helm@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814 # v4.2.0
uses: Azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
- name: Create k8s ${{ matrix.kubernetesVersion }} Kind Cluster
uses: helm/kind-action@a1b0e391336a6ee6713a0583f8c6240d70863de3 # v1.12.0
@ -107,12 +107,12 @@ jobs:
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v4.1
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: "1.23"
go-version: "1.24"
- name: Helm install
uses: Azure/setup-helm@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814 # v4.2.0
uses: Azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
- name: Create k8s ${{ matrix.kubernetesVersion }} Kind Cluster
uses: helm/kind-action@a1b0e391336a6ee6713a0583f8c6240d70863de3 # v1.12.0

View File

@ -13,7 +13,7 @@ permissions:
jobs:
build_scaler:
runs-on: ubuntu-latest
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Register workspace path
@ -25,7 +25,7 @@ jobs:
build_operator:
runs-on: ubuntu-latest
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Register workspace path
@ -37,7 +37,7 @@ jobs:
build_interceptor:
runs-on: ubuntu-latest
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Register workspace path

View File

@ -21,5 +21,6 @@ jobs:
with:
paths: "**/*.md"
markdown: true
concurrency: 1
retry: true
linksToSkip: "https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-interceptor, https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-operator, https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-scaler, http://opentelemetry-collector.open-telemetry-system:4318, https://www.gnu.org/software/make/"
linksToSkip: "https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-interceptor, https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-operator, https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-scaler,http://opentelemetry-collector.open-telemetry-system:4318,http://opentelemetry-collector.open-telemetry-system:4318/v1/traces, https://www.gnu.org/software/make/"

View File

@ -16,7 +16,7 @@ jobs:
validate:
name: validate - ${{ matrix.name }}
runs-on: ${{ matrix.runner }}
container: ghcr.io/kedacore/keda-tools:1.23.6
container: ghcr.io/kedacore/keda-tools:1.24.3
strategy:
matrix:
include:
@ -40,13 +40,13 @@ jobs:
echo ::set-output name=build_cache::$(go env GOCACHE)
- name: Go modules cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-paths.outputs.mod_cache }}
key: ${{ runner.os }}-go-mod-${{ hashFiles('**/go.sum') }}
- name: Go build cache
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-paths.outputs.build_cache }}
key: ${{ runner.os }}-go-build-cache-${{ hashFiles('**/go.sum') }}
@ -79,7 +79,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0
with:
go-version: "1.23"
- uses: golangci/golangci-lint-action@2226d7cb06a077cd73e56eedd38eecad18e5d837 # v6.5.0
go-version: "1.24"
- uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
version: v1.60
version: v2.1.0

View File

@ -1,76 +1,74 @@
# options for analysis running
version: "2"
run:
# default concurrency is a available CPU number
concurrency: 4
# add the build tags to include e2e tests files
build-tags:
- e2e
# timeout for analysis, e.g. 30s, 5m, default is 1m
timeout: 10m
- e2e
linters:
# please, do not use `enable-all`: it's deprecated and will be removed soon.
# inverted configuration with `enable-all` and `disable` is not scalable during updates of golangci-lint
disable-all: true
default: none
enable:
- typecheck
- dupl
- goprintffuncname
- govet
- nolintlint
#- rowserrcheck
- gofmt
- revive
- goimports
- misspell
- bodyclose
- unconvert
- ineffassign
- staticcheck
- exportloopref
- copyloopvar
#- depguard #https://github.com/kedacore/keda/issues/4980
- dogsled
- dupl
- errcheck
#- funlen
- gci
- goconst
- gocritic
- gocyclo
- gosimple
- stylecheck
- unused
- unparam
- goprintffuncname
- govet
- ineffassign
- misspell
- nolintlint
- revive
- staticcheck
- unconvert
- unparam
- unused
- whitespace
issues:
include:
- EXC0002 # disable excluding of issues about comments from golint
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
- path: _test\.go
linters:
- dupl
- unparam
- revive
# Exclude gci check for //+kubebuilder:scaffold:imports comments. Waiting to
# resolve https://github.com/kedacore/keda/issues/4379
- path: operator/controllers/http/suite_test.go
linters:
- gci
- path: operator/main.go
linters:
- gci
# Exlude httpso.Spec.ScaleTargetRef.Deployment until we remove it in v0.9.0
- linters:
- staticcheck
text: "SA1019: httpso.Spec.ScaleTargetRef.Deployment"
linters-settings:
funlen:
lines: 80
statements: 40
gci:
sections:
- standard
- default
- prefix(github.com/kedacore/http-add-on)
settings:
funlen:
lines: 80
statements: 40
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- dupl
- revive
- unparam
path: _test\.go
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
- gci
- gofmt
- goimports
settings:
gci:
sections:
- standard
- default
- prefix(github.com/kedacore/http-add-on)
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
# Exclude gci check for //+kubebuilder:scaffold:imports comments. Waiting to
# resolve https://github.com/kedacore/keda/issues/4379
- operator/controllers/http/suite_test.go
- operator/main.go

View File

@ -7,6 +7,7 @@ This page contains a list of organizations who are using KEDA's HTTP Add-on in p
| Organization | Status | More Information (Blog post, etc.) |
| ------------ | ---------| ---------------|
| PropulsionAI |![testing](https://img.shields.io/badge/-development%20&%20testing-green?style=flat)|[PropulsionAI](https://propulsionhq.com) allows you to add AI to your apps, without writing code.|
| REWE Digital |![testing](https://img.shields.io/badge/-development%20&%20testing-green?style=flat)|From delivery service to market: [REWE Digital](https://www.rewe-digital.com) strengthens REWE Group's leading technological position in the food retail sector. |
## Become an adopter!

View File

@ -25,11 +25,14 @@ This changelog keeps track of work items that have been completed and are ready
### New
- **General**: TODO ([#TODO](https://github.com/kedacore/http-add-on/issues/TODO))
- **General**: Add failover service on cold-start ([#1280](https://github.com/kedacore/http-add-on/pull/1280))
- **General**: Add configurable tracing support to the interceptor proxy ([#1021](https://github.com/kedacore/http-add-on/pull/1021))
- **General**: Allow using HSO and SO with different names ([#1293](https://github.com/kedacore/http-add-on/issues/1293))
- **General**: Support profiling for KEDA components ([#4789](https://github.com/kedacore/keda/issues/4789))
- **General**: Add possibility to skip TLS verification for upstreams in interceptor ([#1307](https://github.com/kedacore/http-add-on/pull/1307))
### Improvements
- **General**: TODO ([#TODO](https://github.com/kedacore/http-add-on/issues/TODO))
- **Interceptor**: Support HTTPScaledObject scoped timeout ([#813](https://github.com/kedacore/http-add-on/issues/813))
### Fixes

View File

@ -60,6 +60,32 @@ spec:
spec:
description: HTTPScaledObjectSpec defines the desired state of HTTPScaledObject
properties:
coldStartTimeoutFailoverRef:
description: (optional) The name of the failover service to route
HTTP requests to when the target is not available
properties:
port:
description: The port to route to
format: int32
type: integer
portName:
description: The port to route to referenced by name
type: string
service:
description: The name of the service to route to
type: string
timeoutSeconds:
default: 30
description: The timeout in seconds to wait before routing to
the failover service (Default 30)
format: int32
type: integer
required:
- service
type: object
x-kubernetes-validations:
- message: must define either the 'portName' or the 'port'
rule: has(self.portName) != has(self.port)
hosts:
description: |-
The hosts to route. All requests which the "Host" header
@ -162,6 +188,22 @@ spec:
metric value
format: int32
type: integer
timeouts:
description: (optional) Timeouts that override the global ones
properties:
conditionWait:
description: How long to wait for the backing workload to have
1 or more replicas before connecting and sending the HTTP request
(Default is set by the KEDA_CONDITION_WAIT_TIMEOUT environment
variable)
type: string
responseHeader:
description: How long to wait between when the HTTP request is
sent to the backing app and when response headers need to arrive
(Default is set by the KEDA_RESPONSE_HEADER_TIMEOUT environment
variable)
type: string
type: object
required:
- scaleTargetRef
type: object
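
Putting the two new CRD fields together, a sketch of an `HTTPScaledObject` that uses both `coldStartTimeoutFailoverRef` and the scoped `timeouts` could look like the following. This is illustrative only: the `myservice`/`myservice-failover` names are assumptions, while the field shapes follow the schema above.

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: myservice
spec:
  hosts:
    - myhost.com
  scaleTargetRef:
    name: myservice
    kind: Deployment
    apiVersion: apps/v1
    service: myservice
    port: 8080
  # Route cold-start traffic to a failover service when the target is
  # unavailable for more than 30 seconds (exactly one of 'port' or
  # 'portName' may be set, per the validation rule above).
  coldStartTimeoutFailoverRef:
    service: myservice-failover
    port: 8080
    timeoutSeconds: 30
  # Override the global interceptor timeouts for this object only.
  timeouts:
    conditionWait: 20s
    responseHeader: 10s
```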

View File

@ -19,3 +19,11 @@ spec:
value: "http://opentelemetry-collector.open-telemetry-system:4318"
- name: OTEL_METRIC_EXPORT_INTERVAL
value: "1"
- name: OTEL_EXPORTER_OTLP_TRACES_ENABLED
value: "true"
- name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
value: "http/protobuf"
- name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: "http://opentelemetry-collector.open-telemetry-system:4318/v1/traces"
- name: OTEL_EXPORTER_OTLP_TRACES_INSECURE
value: "true"

View File

@ -28,4 +28,35 @@ For setting multiple TLS certs, set `KEDA_HTTP_PROXY_TLS_CERT_STORE_PATHS` with
* `XYZ.crt` + `XYZ.key` - this is a convention when using Kubernetes Secrets of type tls
* `XYZ.pem` + `XYZ-key.pem`
To disable certificate chain verification, set `KEDA_HTTP_PROXY_TLS_SKIP_VERIFY` to `true`.
Matching between certs and requests is performed during the TLS ClientHello message: the SNI server name is compared to the SANs provided in each cert, and the first matching cert is used for the rest of the TLS handshake.
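As a rough illustration of that SNI-based selection (a sketch, not the add-on's actual implementation), Go's `crypto/tls` exposes a `GetCertificate` hook that receives the ClientHello and can return the first certificate whose SANs match the requested server name:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

// pickCert returns a GetCertificate callback that compares the SNI
// server name in the ClientHello against each candidate cert and
// returns the first one that matches, falling back to the first cert.
func pickCert(certs []tls.Certificate) func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	return func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
		for i := range certs {
			// SupportsCertificate checks, among other things, that the
			// cert's SANs cover hello.ServerName.
			if err := hello.SupportsCertificate(&certs[i]); err == nil {
				return &certs[i], nil
			}
		}
		return &certs[0], nil
	}
}

func main() {
	// XYZ.crt/XYZ.key follow the naming convention described above.
	cert, err := tls.LoadX509KeyPair("XYZ.crt", "XYZ.key")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{GetCertificate: pickCert([]tls.Certificate{cert})},
	}
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```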
# Configuring tracing for the KEDA HTTP Add-on interceptor proxy
### Supported Exporters:
* **console** - The console exporter is useful for development and debugging tasks, and is the simplest to set up.
* **http/protobuf** - To send trace data to an OTLP endpoint (like the collector or Jaeger >= v1.35.0) over HTTP, you'll want to configure an OTLP exporter that sends to your endpoint.
* **grpc** - To send trace data to an OTLP endpoint (like the collector or Jaeger >= v1.35.0) over a gRPC connection, you'll want to configure an OTLP exporter that sends to your endpoint.
### Configuring tracing with console exporter
To enable tracing with the console exporter, set the `OTEL_EXPORTER_OTLP_TRACES_ENABLED` environment variable to `true` on the interceptor deployment (`false` by default).
Then set `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` to `console` (the default; the other protocols are `http/protobuf` and `grpc`).
Finally, set `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` to `"http://localhost:4318/v1/traces"` (the default).
### Configuring tracing with OTLP exporter
When configured, the interceptor proxy can export trace data to an OTLP-compatible collector.
To enable tracing with the OTLP exporter, set the `OTEL_EXPORTER_OTLP_TRACES_ENABLED` environment variable to `true` on the interceptor deployment (`false` by default).
Then set `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` to `http/protobuf` or `grpc` (`console` by default).
Finally, set `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` to the collector that should receive the traces (e.g. http://opentelemetry-collector.open-telemetry-system:4318/v1/traces) (`"http://localhost:4318/v1/traces"` by default).
NOTE: the full path must be set, including `<scheme>://<host>:<port>/<path>`.
Optional variables:
* `OTEL_EXPORTER_OTLP_HEADERS` - extra headers to attach to exported spans, e.g. authentication details for your OTEL collector (`"key1=value1,key2=value2"`)
* `OTEL_EXPORTER_OTLP_TRACES_INSECURE` - send traces to the collector over HTTP rather than HTTPS (`false` by default)
* `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` - the batcher timeout in seconds for sending batches of spans (`5` by default)
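
Taken together, and mirroring the e2e interceptor patch earlier in this diff, a minimal env-var sketch for enabling the OTLP trace exporter on the interceptor deployment might look like this (the collector URL and header value are examples, not required values):

```yaml
env:
  - name: OTEL_EXPORTER_OTLP_TRACES_ENABLED
    value: "true"
  - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
    value: "http/protobuf"
  - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
    value: "http://opentelemetry-collector.open-telemetry-system:4318/v1/traces"
  # Optional: extra headers and plain-HTTP export, as described above.
  - name: OTEL_EXPORTER_OTLP_HEADERS
    value: "key1=value1,key2=value2"
  - name: OTEL_EXPORTER_OTLP_TRACES_INSECURE
    value: "true"
```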
### Configuring Service Failover

View File

@ -134,7 +134,7 @@ Now that you have your application running and your ingress configured, you can
Regardless, you can use the below `curl` command to make a request to your application:
```console
curl -H "Host: myhost.com" <Your IP>/path1
curl -H "Host: myhost.com" <Your IP>/test
```
>Note the `-H` flag above to specify the `Host` header. This is needed to tell the interceptor how to route the request. If you have a DNS name set up for the IP, you don't need this header.
@ -143,7 +143,7 @@ You can also use port-forward to interceptor service for making the request:
```console
kubectl port-forward svc/keda-add-ons-http-interceptor-proxy -n ${NAMESPACE} 8080:8080
curl -H "Host: myhost.com" localhost:8080/path1
curl -H "Host: myhost.com" localhost:8080/test
```
### Integrating HTTP Add-On Scaler with other KEDA scalers

go.mod
View File

@ -1,43 +1,51 @@
module github.com/kedacore/http-add-on
go 1.23.6
go 1.24.3
require (
github.com/go-logr/logr v1.4.2
github.com/google/go-cmp v0.6.0
github.com/go-logr/logr v1.4.3
github.com/google/go-cmp v0.7.0
github.com/hashicorp/go-immutable-radix/v2 v2.1.0
github.com/kedacore/keda/v2 v2.16.1
github.com/kedacore/keda/v2 v2.17.1
github.com/kelseyhightower/envconfig v1.4.0
github.com/onsi/ginkgo/v2 v2.22.2
github.com/onsi/gomega v1.36.2
github.com/onsi/ginkgo/v2 v2.23.4
github.com/onsi/gomega v1.37.0
github.com/stretchr/testify v1.10.0
go.opentelemetry.io/otel v1.34.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.34.0
go.opentelemetry.io/otel/sdk v1.34.0
go.uber.org/mock v0.5.0
golang.org/x/sync v0.11.0
google.golang.org/grpc v1.70.0
google.golang.org/protobuf v1.36.5
k8s.io/api v0.31.6
k8s.io/apimachinery v0.31.6
k8s.io/client-go v0.31.6
k8s.io/code-generator v0.31.6
k8s.io/utils v0.0.0-20240921022957-49e7df575cb6
sigs.k8s.io/controller-runtime v0.19.6
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0
go.opentelemetry.io/contrib/propagators/b3 v1.36.0
go.opentelemetry.io/otel v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.36.0
go.opentelemetry.io/otel/sdk v1.36.0
go.uber.org/mock v0.5.2
golang.org/x/sync v0.14.0
google.golang.org/grpc v1.72.2
google.golang.org/protobuf v1.36.6
k8s.io/api v0.32.2
k8s.io/apimachinery v0.32.2
k8s.io/client-go v1.5.2
k8s.io/code-generator v0.32.2
k8s.io/utils v0.0.0-20250502105355-0f33e8f1c979
sigs.k8s.io/controller-runtime v0.19.7
sigs.k8s.io/gateway-api v1.2.1
sigs.k8s.io/kustomize/kustomize/v5 v5.6.0
)
replace (
// pin k8s.io to v0.31.6
// pin k8s.io to v0.31.7 & sigs.k8s.io/controller-runtime to v0.19.7
github.com/google/cel-go => github.com/google/cel-go v0.20.1
k8s.io/api => k8s.io/api v0.31.6
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.31.6
k8s.io/apimachinery => k8s.io/apimachinery v0.31.6
k8s.io/apiserver => k8s.io/apiserver v0.31.6
k8s.io/client-go => k8s.io/client-go v0.31.6
k8s.io/code-generator => k8s.io/code-generator v0.31.6
k8s.io/component-base => k8s.io/component-base v0.31.6
github.com/prometheus/client_golang => github.com/prometheus/client_golang v1.21.1
github.com/prometheus/client_model => github.com/prometheus/client_model v0.6.1
github.com/prometheus/common => github.com/prometheus/common v0.63.0
k8s.io/api => k8s.io/api v0.31.7
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.31.7
k8s.io/apimachinery => k8s.io/apimachinery v0.31.7
k8s.io/apiserver => k8s.io/apiserver v0.31.7
k8s.io/client-go => k8s.io/client-go v0.31.7
k8s.io/code-generator => k8s.io/code-generator v0.31.7
k8s.io/component-base => k8s.io/component-base v0.31.7
k8s.io/kube-openapi => k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340
k8s.io/metrics => k8s.io/metrics v0.31.6
k8s.io/utils => k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
@ -47,89 +55,95 @@ replace (
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.0 // indirect
go.uber.org/automaxprocs v1.6.0 // indirect
go.uber.org/zap v1.27.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
)
require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.1 // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/expr-lang/expr v1.16.9 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/evanphx/json-patch/v5 v5.9.11 // indirect
github.com/expr-lang/expr v1.17.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.8.0 // indirect
github.com/go-errors/errors v1.5.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonpointer v0.21.1 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad // indirect
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.1 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/moby/spdystream v0.4.0 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.20.5
github.com/prometheus/client_model v0.6.1
github.com/prometheus/common v0.62.0
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/client_golang v1.22.0
github.com/prometheus/client_model v0.6.2
github.com/prometheus/common v0.64.0
github.com/prometheus/procfs v0.16.1 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/spf13/cobra v1.8.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.56.0
go.opentelemetry.io/otel/metric v1.34.0
go.opentelemetry.io/otel/sdk/metric v1.34.0
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.57.0
go.opentelemetry.io/otel/metric v1.36.0
go.opentelemetry.io/otel/sdk/metric v1.36.0
go.opentelemetry.io/otel/trace v1.36.0
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f
golang.org/x/mod v0.22.0 // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/oauth2 v0.24.0 // indirect
golang.org/x/sys v0.29.0 // indirect
golang.org/x/term v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.8.0 // indirect
golang.org/x/tools v0.28.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
golang.org/x/exp v0.0.0-20250531010427-b6e5de432a8b
golang.org/x/mod v0.24.0 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/term v0.32.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.33.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.31.3 // indirect
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/apiextensions-apiserver v0.32.1 // indirect
k8s.io/gengo/v2 v2.0.0-20240911193312-2b36238f13e9 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 // indirect
knative.dev/pkg v0.0.0-20241218051509-40afb7c5436e // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
knative.dev/pkg v0.0.0-20250602175424-3c3a920206ea // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/kustomize/api v0.19.0 // indirect
sigs.k8s.io/kustomize/cmd/config v0.19.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.7.0 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)

go.sum
View File

@ -4,8 +4,8 @@ github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
@ -13,33 +13,35 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU=
github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls=
github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/expr-lang/expr v1.16.9 h1:WUAzmR0JNI9JCiF0/ewwHB1gmcGw5wW7nWt8gc6PpCI=
github.com/expr-lang/expr v1.16.9/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/evanphx/json-patch/v5 v5.9.11 h1:/8HVnzMq13/3x9TPvjG08wUGqBTmZBsCWzjTM0wiaDU=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/expr-lang/expr v1.17.4 h1:qhTVftZ2Z3WpOEXRHWErEl2xf1Kq011MnQmWgLq06CY=
github.com/expr-lang/expr v1.17.4/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.8.0 h1:fFtUGXUzXPHTIUdne5+zzMPTfffl3RD5qYnkY40vtxU=
github.com/fxamacker/cbor/v2 v2.8.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/go-errors/errors v1.5.1 h1:ZwEMSLRCapFLflTpT7NKaAc7ukJ8ZPEjzlxt8rPN8bk=
github.com/go-errors/errors v1.5.1/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonpointer v0.21.1 h1:whnzv/pNXtK2FbX/W9yJfRmE2gsmkfahjMKB0fZvcic=
github.com/go-openapi/jsonpointer v0.21.1/go.mod h1:50I1STOfbY1ycR8jGz8DaMeLCdXiI6aDteEdRNNzpdk=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-openapi/swag v0.23.1 h1:lpsStH0n2ittzTnbaSloVZLuB5+fvSY/+hnagBjSNZU=
github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDqQnDHMef0=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
@ -51,21 +53,21 @@ github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad h1:a6HEuzUHeKH6hwfN/ZoQgRgVIWFJljSWa/zetS2WTvg=
github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.1 h1:gmztn0JnHVt9JZquRuzLw3g4wouNVzKL15iLr/zn/QY=
github.com/gorilla/websocket v1.5.1/go.mod h1:x3kM2JMyaluk02fnUJpQuwD2dCS5NDG2ZHL0uE0tcaY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 h1:VNqngBF40hVlDloBruUehVYC3ArSgIyScOAyMRqBxRg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1/go.mod h1:RBRO7fro65R6tjKzYgLAFo0t1QEXY1Dp+i/bvpRiqiQ=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hashicorp/go-immutable-radix/v2 v2.1.0 h1:CUW5RYIcysz+D3B+l1mDeXrQ7fUvGGCwJfdASSzbrfo=
github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1TJ9aOJVNRqKZj+xDGF6m7PBw=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
@ -80,8 +82,8 @@ github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8Hm
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kedacore/keda/v2 v2.16.1 h1:LfYsxfSX8DjetLW8q9qnriImH936POrQJvE+caRoScI=
github.com/kedacore/keda/v2 v2.16.1/go.mod h1:pO2ksUCwSOQ2u3OWqj+jh9Hgf0+26MZug6dF7WWgcAk=
github.com/kedacore/keda/v2 v2.17.1 h1:UomWibe5aO7COMUyF+jVM9fuENf4/wcSpiui65tF+d0=
github.com/kedacore/keda/v2 v2.17.1/go.mod h1:yKJMF8zuLI2xXvZtgfcbW+V8k3VO4a4R/fucy3z5lC8=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@ -97,10 +99,10 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/moby/spdystream v0.4.0 h1:Vy79D6mHeJJjiPdFEL2yku1kl0chZpJfZcPpb16BRl8=
github.com/moby/spdystream v0.4.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@ -112,23 +114,25 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.22.2 h1:/3X8Panh8/WwhU/3Ssa6rCKqPLuAkVY2I0RoyDLySlU=
github.com/onsi/ginkgo/v2 v2.22.2/go.mod h1:oeMosUL+8LtarXBHu/c0bx2D/K9zyQ6uX3cTyztHwsk=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/onsi/ginkgo/v2 v2.23.4 h1:ktYTpKJAVZnDT4VjxSbiBenUjmlL/5QkBEocaWXiQus=
github.com/onsi/ginkgo/v2 v2.23.4/go.mod h1:Bt66ApGPBFzHyR+JO10Zbt0Gsp4uWxu5mIOTusL46e8=
github.com/onsi/gomega v1.37.0 h1:CdEG8g0S133B4OswTDC/5XPSzE1OeP29QOioj2PID2Y=
github.com/onsi/gomega v1.37.0/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU4KU0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/common v0.63.0 h1:YR/EIY1o3mEFP/kZCD7iDMnLPlGyuU2Gb3HIcXnA98k=
github.com/prometheus/common v0.63.0/go.mod h1:VVFF/fBIoToEnWRVkYoXEkq3R3paCoxG9PXP74SnV18=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@ -136,8 +140,9 @@ github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
@ -154,26 +159,40 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.34.0 h1:opwv08VbCZ8iecIWs+McMdHRcAXzjAeda3uG2kI/hcA=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.34.0/go.mod h1:oOP3ABpW7vFHulLpE8aYtNBodrHhMTrvfxUXGvqm7Ac=
go.opentelemetry.io/otel/exporters/prometheus v0.56.0 h1:GnCIi0QyG0yy2MrJLzVrIM7laaJstj//flf1zEJCG+E=
go.opentelemetry.io/otel/exporters/prometheus v0.56.0/go.mod h1:JQcVZtbIIPM+7SWBB+T6FK+xunlyidwLp++fN0sUaOk=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/contrib/propagators/b3 v1.36.0 h1:xrAb/G80z/l5JL6XlmUMSD1i6W8vXkWrLfmkD3w/zZo=
go.opentelemetry.io/contrib/propagators/b3 v1.36.0/go.mod h1:UREJtqioFu5awNaCR8aEx7MfJROFlAWb6lPaJFbHaG0=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.36.0 h1:gAU726w9J8fwr4qRDqu1GYMNNs4gXrU+Pv20/N1UpB4=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.36.0/go.mod h1:RboSDkp7N292rgu+T0MgVt2qgFGu6qa1RpZDOtpL76w=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0 h1:JgtbA0xkWHnTmYk7YusopJFX6uleBmAuZ8n05NEh8nQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0/go.mod h1:179AK5aar5R3eS9FucPy6rggvU0g52cvKId8pv4+v0c=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 h1:nRVXXvf78e00EwY6Wp0YII8ww2JVWshZ20HfTlE11AM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0/go.mod h1:r49hO7CgrxY9Voaj3Xe8pANWtr0Oq916d0XAmOoCZAQ=
go.opentelemetry.io/otel/exporters/prometheus v0.57.0 h1:AHh/lAP1BHrY5gBwk8ncc25FXWm/gmmY3BX258z5nuk=
go.opentelemetry.io/otel/exporters/prometheus v0.57.0/go.mod h1:QpFWz1QxqevfjwzYdbMb4Y1NnlJvqSGwyuU0B4iuc9c=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.36.0 h1:G8Xec/SgZQricwWBJF/mHZc7A02YHedfFDENwJEdRA0=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.36.0/go.mod h1:PD57idA/AiFD5aqoxGxCvT/ILJPeHy3MjqU/NS7KogY=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os=
go.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU=
go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
@ -181,58 +200,58 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f h1:XdNn9LlyWAhLVp6P/i8QYBW+hlyhrhei9uErw2B5GJo=
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f/go.mod h1:D5SMRVC3C2/4+F/DB1wZsLRnSNimn2Sp/NPsCrsv8ak=
golang.org/x/exp v0.0.0-20250531010427-b6e5de432a8b h1:QoALfVG9rhQ/M7vYDScfPdWjGL9dlsVVM5VGh7aKoAA=
golang.org/x/exp v0.0.0-20250531010427-b6e5de432a8b/go.mod h1:U6Lno4MTRCDY+Ba7aCcauB9T60gsv5s4ralQzP72ZoQ=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.28.0 h1:WuB6qZ4RPCQo5aP3WdKZS7i595EdWqWR8vqJTlwTVK8=
golang.org/x/tools v0.28.0/go.mod h1:dcIOrVd3mfQKTgrDVQHqCPMWy6lnhfhtX3hLXYVLfRw=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f h1:gap6+3Gk41EItBuyi4XX/bp4oqJ3UwuIMl25yGinuAA=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:Ic02D47M+zbarjYYUlK57y316f2MoN0gjAwI3f2S95o=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/grpc v1.70.0 h1:pWFv03aZoHzlRKHWicjsZytKAiYCtNS0dHbXnIdq7jQ=
google.golang.org/grpc v1.70.0/go.mod h1:ofIJqVKDXx/JiXrwr2IG4/zwdH9txy3IlF40RmcJSQw=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gomodules.xyz/jsonpatch/v2 v2.5.0 h1:JELs8RLM12qJGXU4u/TO3V25KW8GreMKl9pdkk14RM0=
gomodules.xyz/jsonpatch/v2 v2.5.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a h1:SGktgSolFCo75dnHJF2yMvnns6jCmHFJ0vE4Vn2JKvQ=
google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a/go.mod h1:a77HrdMjoeKbnd2jmgcWdaS++ZLZAEq3orIOAEIKiVw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a h1:v2PbRU4K3llS09c7zodFpNePeamkAwG3mPrAery9VeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.72.2 h1:TdbGzwb82ty4OusHWepvFWGLgIbNo1/SUynEN0ssqv8=
google.golang.org/grpc v1.72.2/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
@ -248,26 +267,26 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.31.6 h1:ocWG/UhC9Mqp5oEfYWy9wCddbZiZyBAFTlBt0LVlhDg=
k8s.io/api v0.31.6/go.mod h1:i16xSiKMgVIVhsJMxfWq0mJbXA+Z7KhjPgYmwT41hl4=
k8s.io/apiextensions-apiserver v0.31.6 h1:v9sqyWlrgFZpAPdEb/bEiXfM98TfSppwRF0X/uWKXh0=
k8s.io/apiextensions-apiserver v0.31.6/go.mod h1:QVH3CFwqzGZtwsxPYzJlA/Qiwgb5FXmRMGls3CjzvbI=
k8s.io/apimachinery v0.31.6 h1:Pn96A0wHD0X8+l7QTdAzdLQPrpav1s8rU6A+v2/9UEY=
k8s.io/apimachinery v0.31.6/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/client-go v0.31.6 h1:51HT40qVIZ13BrHKeWxFuU52uoPnFhxTYJnv4+LTgp4=
k8s.io/client-go v0.31.6/go.mod h1:MEq7JQJelUQ0/4fMoPEUrc/OOFyGo/9LmGA38H6O6xY=
k8s.io/code-generator v0.31.6 h1:CX4/NGV5UIdt7+nYG/G4+eGHOvcXAlKWswUhPPOtPtc=
k8s.io/code-generator v0.31.6/go.mod h1:vbqDrvP5hJJ5S/jzBtyMJoH5kJBWZMo/DZwMYiOQniE=
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 h1:NGrVE502P0s0/1hudf8zjgwki1X/TByhmAoILTarmzo=
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70/go.mod h1:VH3AT8AaQOqiGjMF9p0/IM1Dj+82ZwjfxUP1IxaHE+8=
k8s.io/api v0.31.7 h1:wSo59nXpVXmaB6hgNVJCrdnKtyYoutIgpNNBbROBd2U=
k8s.io/api v0.31.7/go.mod h1:vLUha4nXRUGtQdayzsmjur0lQApK/sJSxyR/fwuujcU=
k8s.io/apiextensions-apiserver v0.31.7 h1:FujQQl6iKuCF5nX4GIQy3ClvftU8MqadAyi9oQ6ZeAw=
k8s.io/apiextensions-apiserver v0.31.7/go.mod h1:YmNzYECWFYy8n9R0oxtVAD9JYILZnZCNziYrpUQhKeI=
k8s.io/apimachinery v0.31.7 h1:fpV8yLerIZFAkj0of66+i1ArPv/Btf9KO6Aulng7RRw=
k8s.io/apimachinery v0.31.7/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/client-go v0.31.7 h1:2+LFJc6Xw6rhmpDbN1NSmhoFLWBh62cPG/P+IfaTSGY=
k8s.io/client-go v0.31.7/go.mod h1:hrrMorBQ17LqzoKIxKg5cSWvmWl94EwA/MUF0Mkf+Zw=
k8s.io/code-generator v0.31.7 h1:8BU7n+pK8td2600IiqH6EgxuiWbwVA1+uTOwIJ/nTUA=
k8s.io/code-generator v0.31.7/go.mod h1:1oSRo6cJxwSCghcOFGsh53TKkUQ5ZgYoK7LBCFbhHDg=
k8s.io/gengo/v2 v2.0.0-20240911193312-2b36238f13e9 h1:si3PfKm8dDYxgfbeA6orqrtLkvvIeH8UqffFJDl0bz4=
k8s.io/gengo/v2 v2.0.0-20240911193312-2b36238f13e9/go.mod h1:EJykeLsmFC60UQbYJezXkEsG2FLrt0GPNkU5iK5GWxU=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
knative.dev/pkg v0.0.0-20241218051509-40afb7c5436e h1:pgdDEZT3R50XHwbHBYUYTb71PQ1oDR/2m3mRyQ57W8w=
knative.dev/pkg v0.0.0-20241218051509-40afb7c5436e/go.mod h1:C2dxK66GlycMOS0SKqv0SMAnWkxsYbG4hkH32Xg1qD0=
knative.dev/pkg v0.0.0-20250602175424-3c3a920206ea h1:ukJPq9MzFTEH/Sei5MSVnSE8+7NSCKixCDZPd6p4ohw=
knative.dev/pkg v0.0.0-20250602175424-3c3a920206ea/go.mod h1:tFayQbi6t4+5HXuEGLOGvILW228Q7uaJp/FYEgbjJ3A=
sigs.k8s.io/controller-runtime v0.19.6 h1:fuq53qTLQ7aJTA7aNsklNnu7eQtSFqJUomOyM+phPLk=
sigs.k8s.io/controller-runtime v0.19.6/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4=
sigs.k8s.io/gateway-api v1.2.1 h1:fZZ/+RyRb+Y5tGkwxFKuYuSRQHu9dZtbjenblleOLHM=
@ -282,7 +301,9 @@ sigs.k8s.io/kustomize/kustomize/v5 v5.6.0 h1:MWtRRDWCwQEeW2rnJTqJMuV6Agy56P53Skb
sigs.k8s.io/kustomize/kustomize/v5 v5.6.0/go.mod h1:XuuZiQF7WdcvZzEYyNww9A0p3LazCKeJmCjeycN8e1I=
sigs.k8s.io/kustomize/kyaml v0.19.0 h1:RFge5qsO1uHhwJsu3ipV7RNolC7Uozc0jUBC/61XSlA=
sigs.k8s.io/kustomize/kyaml v0.19.0/go.mod h1:FeKD5jEOH+FbZPpqUghBP8mrLjJ3+zD3/rf9NNu1cwY=
sigs.k8s.io/structured-merge-diff/v4 v4.4.3 h1:sCP7Vv3xx/CWIuTPVN38lUPx0uw0lcLfzaiDa8Ja01A=
sigs.k8s.io/structured-merge-diff/v4 v4.4.3/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016 h1:kXv6kKdoEtedwuqMmkqhbkgvYKeycVbC8+iPCP9j5kQ=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.7.0 h1:qPeWmscJcXP0snki5IYF79Z8xrl8ETFxgMd7wez1XkI=
sigs.k8s.io/structured-merge-diff/v4 v4.7.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

View File

@ -1,4 +1,4 @@
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.23.6 as builder
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.24.3 as builder
WORKDIR /workspace
COPY go.* .
RUN go mod download

View File

@ -43,8 +43,12 @@ type Serving struct {
TLSKeyPath string `envconfig:"KEDA_HTTP_PROXY_TLS_KEY_PATH" default:"/certs/tls.key"`
// TLSCertStorePaths is a comma separated list of paths to read the certificate/key pairs for the TLS server
TLSCertStorePaths string `envconfig:"KEDA_HTTP_PROXY_TLS_CERT_STORE_PATHS" default:""`
// TLSSkipVerify is a boolean flag to specify whether the interceptor should skip TLS verification for upstreams
TLSSkipVerify bool `envconfig:"KEDA_HTTP_PROXY_TLS_SKIP_VERIFY" default:"false"`
// TLSPort is the port that the server should serve on if TLS is enabled
TLSPort int `envconfig:"KEDA_HTTP_PROXY_TLS_PORT" default:"8443"`
// ProfilingAddr, if not empty, is the host:port address on which pprof will be served
ProfilingAddr string `envconfig:"PROFILING_BIND_ADDRESS" default:""`
}
// Parse parses standard configs using envconfig and returns a pointer to the
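A short usage sketch (not part of this change set) showing how the two new Serving fields are driven by their environment variables; it assumes config.MustParseServing panics on invalid input like the other MustParse helpers in this package:

package main

import (
	"fmt"
	"os"

	"github.com/kedacore/http-add-on/interceptor/config"
)

func main() {
	// Both variables are read by envconfig when the config is parsed.
	os.Setenv("KEDA_HTTP_PROXY_TLS_SKIP_VERIFY", "true")
	os.Setenv("PROFILING_BIND_ADDRESS", "localhost:6060")

	servingCfg := config.MustParseServing()
	fmt.Println(servingCfg.TLSSkipVerify, servingCfg.ProfilingAddr) // true localhost:6060
}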

View File

@ -0,0 +1,21 @@
package config
import (
"github.com/kelseyhightower/envconfig"
)
// Tracing is the configuration for tracing through the interceptor.
type Tracing struct {
// States whether tracing should be enabled, false by default
Enabled bool `envconfig:"OTEL_EXPORTER_OTLP_TRACES_ENABLED" default:"false"`
// Sets which tracing exporter to use; must be one of: console, http/protobuf, grpc
Exporter string `envconfig:"OTEL_EXPORTER_OTLP_TRACES_PROTOCOL" default:"console"`
}
// MustParseTracing parses the tracing config using envconfig and returns a
// pointer to the newly created config, panicking if parsing fails
func MustParseTracing() *Tracing {
ret := new(Tracing)
envconfig.MustProcess("", ret)
return ret
}
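The exporter values accepted here are validated later by newExporter in the new tracing package (console, http/protobuf, grpc); a brief sketch of wiring the config through the environment:

// e.g. in the interceptor's deployment environment:
//   OTEL_EXPORTER_OTLP_TRACES_ENABLED=true
//   OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
tracingCfg := config.MustParseTracing()
// tracingCfg.Enabled == true, tracingCfg.Exporter == "http/protobuf"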

View File

@ -5,6 +5,12 @@ import (
"net/http"
"net/http/httputil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
"github.com/kedacore/http-add-on/interceptor/config"
"github.com/kedacore/http-add-on/pkg/util"
)
@ -13,12 +19,16 @@ var (
)
type Upstream struct {
roundTripper http.RoundTripper
roundTripper http.RoundTripper
tracingCfg *config.Tracing
shouldFailover bool
}
func NewUpstream(roundTripper http.RoundTripper) *Upstream {
func NewUpstream(roundTripper http.RoundTripper, tracingCfg *config.Tracing, shouldFailover bool) *Upstream {
return &Upstream{
roundTripper: roundTripper,
roundTripper: roundTripper,
tracingCfg: tracingCfg,
shouldFailover: shouldFailover,
}
}
@ -28,7 +38,26 @@ func (uh *Upstream) ServeHTTP(w http.ResponseWriter, r *http.Request) {
r = util.RequestWithLoggerWithName(r, "UpstreamHandler")
ctx := r.Context()
if uh.tracingCfg.Enabled {
p := otel.GetTextMapPropagator()
ctx = p.Extract(ctx, propagation.HeaderCarrier(r.Header))
p.Inject(ctx, propagation.HeaderCarrier(w.Header()))
span := trace.SpanFromContext(ctx)
defer span.End()
serviceValAttr := attribute.String("service", "keda-http-interceptor-proxy-upstream")
coldStartValAttr := attribute.String("cold-start", w.Header().Get("X-KEDA-HTTP-Cold-Start"))
span.SetAttributes(serviceValAttr, coldStartValAttr)
}
stream := util.StreamFromContext(ctx)
if uh.shouldFailover {
stream = util.FailoverStreamFromContext(ctx)
}
if stream == nil {
sh := NewStatic(http.StatusInternalServerError, errNilStream)
sh.ServeHTTP(w, r)

View File

@ -1,6 +1,8 @@
package handler
import (
"context"
"fmt"
"log"
"net/http"
"net/http/httptest"
@ -9,13 +11,227 @@ import (
"time"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/sdk/trace/tracetest"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/kedacore/http-add-on/interceptor/config"
"github.com/kedacore/http-add-on/interceptor/tracing"
kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/kedacore/http-add-on/pkg/util"
)
const (
traceID = "a8419b25ec2051e5"
fullW3CLengthTraceID = "29b3290dc5a93f2618b17502ccb2a728"
spanID = "97337bce1bc3e368"
parentSpanID = "2890e7e08fc6592b"
sampled = "1"
w3cPadding = "0000000000000000"
)
func TestB3MultiPropagation(t *testing.T) {
// Given
r := require.New(t)
microservice, microserviceURL, closeServer := startMicroservice(t)
defer closeServer()
exporter, tracerProvider := setupOTelSDKForTesting()
instrumentedServeHTTP := withAutoInstrumentation(serveHTTP)
request, responseWriter := createRequestAndResponse("GET", microserviceURL)
request.Header.Set("X-B3-Traceid", traceID)
request.Header.Set("X-B3-Spanid", spanID)
request.Header.Set("X-B3-Parentspanid", parentSpanID)
request.Header.Set("X-B3-Sampled", sampled)
defer func(traceProvider *trace.TracerProvider, ctx context.Context) {
_ = traceProvider.Shutdown(ctx)
}(tracerProvider, request.Context())
// When
instrumentedServeHTTP.ServeHTTP(responseWriter, request)
// Then
receivedRequest := microservice.IncomingRequests()[0]
receivedHeaders := receivedRequest.Header
r.Equal(receivedHeaders.Get("X-B3-Parentspanid"), parentSpanID)
r.Equal(receivedHeaders.Get("X-B3-Traceid"), traceID)
r.Equal(receivedHeaders.Get("X-B3-Spanid"), spanID)
r.Equal(receivedHeaders.Get("X-B3-Sampled"), sampled)
r.NotContains(receivedHeaders, "Traceparent")
r.NotContains(receivedHeaders, "B3")
r.NotContains(receivedHeaders, "b3")
_ = tracerProvider.ForceFlush(request.Context())
exportedSpans := exporter.GetSpans()
if len(exportedSpans) != 1 {
t.Fatalf("Expected 1 Span, got %d", len(exportedSpans))
}
sc := exportedSpans[0].SpanContext
r.Equal(w3cPadding+traceID, sc.TraceID().String())
r.NotEqual(sc.SpanID().String(), spanID)
}
func TestW3CAndB3MultiPropagation(t *testing.T) {
// Given
r := require.New(t)
microservice, microserviceURL, closeServer := startMicroservice(t)
defer closeServer()
exporter, tracerProvider := setupOTelSDKForTesting()
instrumentedServeHTTP := withAutoInstrumentation(serveHTTP)
request, responseWriter := createRequestAndResponse("GET", microserviceURL)
request.Header.Set("X-B3-Traceid", traceID)
request.Header.Set("X-B3-Spanid", spanID)
request.Header.Set("X-B3-Parentspanid", parentSpanID)
request.Header.Set("X-B3-Sampled", sampled)
request.Header.Set("Traceparent", w3cPadding+traceID)
defer func(traceProvider *trace.TracerProvider, ctx context.Context) {
_ = traceProvider.Shutdown(ctx)
}(tracerProvider, request.Context())
// When
instrumentedServeHTTP.ServeHTTP(responseWriter, request)
// Then
receivedRequest := microservice.IncomingRequests()[0]
receivedHeaders := receivedRequest.Header
r.Equal(receivedHeaders.Get("X-B3-Parentspanid"), parentSpanID)
r.Equal(receivedHeaders.Get("X-B3-Traceid"), traceID)
r.Equal(receivedHeaders.Get("X-B3-Spanid"), spanID)
r.Equal(receivedHeaders.Get("X-B3-Sampled"), sampled)
r.Equal(receivedHeaders.Get("Traceparent"), w3cPadding+traceID)
r.NotContains(receivedHeaders, "B3")
r.NotContains(receivedHeaders, "b3")
_ = tracerProvider.ForceFlush(request.Context())
exportedSpans := exporter.GetSpans()
if len(exportedSpans) != 1 {
t.Fatalf("Expected 1 Span, got %d", len(exportedSpans))
}
sc := exportedSpans[0].SpanContext
r.Equal(w3cPadding+traceID, sc.TraceID().String())
r.NotEqual(sc.SpanID().String(), spanID)
}
func TestW3CPropagation(t *testing.T) {
// Given
r := require.New(t)
microservice, microserviceURL, closeServer := startMicroservice(t)
defer closeServer()
exporter, tracerProvider := setupOTelSDKForTesting()
instrumentedServeHTTP := withAutoInstrumentation(serveHTTP)
request, responseWriter := createRequestAndResponse("GET", microserviceURL)
traceParent := fmt.Sprintf("00-%s-%s-01", fullW3CLengthTraceID, spanID)
request.Header.Set("Traceparent", traceParent)
defer func(traceProvider *trace.TracerProvider, ctx context.Context) {
_ = traceProvider.Shutdown(ctx)
}(tracerProvider, request.Context())
// When
instrumentedServeHTTP.ServeHTTP(responseWriter, request)
// Then
receivedRequest := microservice.IncomingRequests()[0]
receivedHeaders := receivedRequest.Header
r.Equal(receivedHeaders.Get("Traceparent"), traceParent)
r.NotContains(receivedHeaders, "B3")
r.NotContains(receivedHeaders, "b3")
r.NotContains(receivedHeaders, "X-B3-Parentspanid")
r.NotContains(receivedHeaders, "X-B3-Traceid")
r.NotContains(receivedHeaders, "X-B3-Spanid")
r.NotContains(receivedHeaders, "X-B3-Sampled")
_ = tracerProvider.ForceFlush(request.Context())
exportedSpans := exporter.GetSpans()
if len(exportedSpans) != 1 {
t.Fatalf("Expected 1 Span, got %d", len(exportedSpans))
}
sc := exportedSpans[0].SpanContext
r.Equal(fullW3CLengthTraceID, sc.TraceID().String())
r.Equal(true, sc.IsSampled())
r.NotEqual(sc.SpanID().String(), spanID)
}
func TestPropagationWhenNoHeaders(t *testing.T) {
// Given
r := require.New(t)
microservice, microserviceURL, closeServer := startMicroservice(t)
defer closeServer()
exporter, tracerProvider := setupOTelSDKForTesting()
instrumentedServeHTTP := withAutoInstrumentation(serveHTTP)
request, responseWriter := createRequestAndResponse("GET", microserviceURL)
defer func(traceProvider *trace.TracerProvider, ctx context.Context) {
_ = traceProvider.Shutdown(ctx)
}(tracerProvider, request.Context())
// When
instrumentedServeHTTP.ServeHTTP(responseWriter, request)
// Then
receivedRequest := microservice.IncomingRequests()[0]
receivedHeaders := receivedRequest.Header
r.NotContains(receivedHeaders, "Traceparent")
r.NotContains(receivedHeaders, "B3")
r.NotContains(receivedHeaders, "b3")
r.NotContains(receivedHeaders, "X-B3-Parentspanid")
r.NotContains(receivedHeaders, "X-B3-Traceid")
r.NotContains(receivedHeaders, "X-B3-Spanid")
r.NotContains(receivedHeaders, "X-B3-Sampled")
_ = tracerProvider.ForceFlush(request.Context())
exportedSpans := exporter.GetSpans()
if len(exportedSpans) != 1 {
t.Fatalf("Expected 1 Span, got %d", len(exportedSpans))
}
sc := exportedSpans[0].SpanContext
r.NotEmpty(sc.SpanID())
r.NotEmpty(sc.TraceID())
hasServiceAttribute := false
hasColdStartAttribute := false
for _, attribute := range exportedSpans[0].Attributes {
if attribute.Key == "service" && attribute.Value.AsString() == "keda-http-interceptor-proxy-upstream" {
hasServiceAttribute = true
}
if attribute.Key == "cold-start" {
hasColdStartAttribute = true
}
}
r.True(hasServiceAttribute)
r.True(hasColdStartAttribute)
}
func TestForwarderSuccess(t *testing.T) {
r := require.New(t)
// this channel will be closed after the request was received, but
@ -43,7 +259,7 @@ func TestForwarderSuccess(t *testing.T) {
timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
uh := NewUpstream(rt)
uh := NewUpstream(rt, &config.Tracing{}, false)
uh.ServeHTTP(res, req)
r.True(
@ -88,7 +304,7 @@ func TestForwarderHeaderTimeout(t *testing.T) {
r.NoError(err)
req = util.RequestWithStream(req, originURL)
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
uh := NewUpstream(rt)
uh := NewUpstream(rt, &config.Tracing{}, false)
uh.ServeHTTP(res, req)
forwardedRequests := hdl.IncomingRequests()
@ -138,7 +354,7 @@ func TestForwarderWaitsForSlowOrigin(t *testing.T) {
r.NoError(err)
req = util.RequestWithStream(req, originURL)
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
uh := NewUpstream(rt)
uh := NewUpstream(rt, &config.Tracing{}, false)
uh.ServeHTTP(res, req)
// wait for the goroutine above to finish, with a little cushion
ensureSignalBeforeTimeout(originWaitCh, originDelay*2)
@ -161,7 +377,7 @@ func TestForwarderConnectionRetryAndTimeout(t *testing.T) {
r.NoError(err)
req = util.RequestWithStream(req, noSuchURL)
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
uh := NewUpstream(rt)
uh := NewUpstream(rt, &config.Tracing{}, false)
start := time.Now()
uh.ServeHTTP(res, req)
@ -217,7 +433,7 @@ func TestForwardRequestRedirectAndHeaders(t *testing.T) {
r.NoError(err)
req = util.RequestWithStream(req, srvURL)
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
uh := NewUpstream(rt)
uh := NewUpstream(rt, &config.Tracing{}, false)
uh.ServeHTTP(res, req)
r.Equal(301, res.Code)
r.Equal("abc123.com", res.Header().Get("Location"))
@ -281,3 +497,56 @@ func ensureSignalBeforeTimeout(signalCh <-chan struct{}, timeout time.Duration)
return true
}
}
func serveHTTP(w http.ResponseWriter, r *http.Request) {
timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
rt := newRoundTripper(dialCtxFunc, timeouts.ResponseHeader)
upstream := NewUpstream(rt, &config.Tracing{Enabled: true}, false)
upstream.ServeHTTP(w, r)
}
func setupOTelSDKForTesting() (*tracetest.InMemoryExporter, *trace.TracerProvider) {
exporter := tracetest.NewInMemoryExporter()
traceProvider := trace.NewTracerProvider(trace.WithBatcher(exporter, trace.WithBatchTimeout(time.Second)))
otel.SetTracerProvider(traceProvider)
prop := tracing.NewPropagator()
otel.SetTextMapPropagator(prop)
return exporter, traceProvider
}
func startMicroservice(t *testing.T) (*kedanet.TestHTTPHandlerWrapper, *url.URL, func()) {
assert := require.New(t)
requestReceiveChannel := make(chan struct{})
const respCode = 200
const respBody = "Success Response"
microservice := kedanet.NewTestHTTPHandlerWrapper(
http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
close(requestReceiveChannel)
w.WriteHeader(respCode)
_, err := w.Write([]byte(respBody))
assert.NoError(err)
}),
)
server := httptest.NewServer(microservice)
url, err := url.Parse(server.URL)
assert.NoError(err)
return microservice, url, func() {
server.Close()
}
}
func createRequestAndResponse(method string, url *url.URL) (*http.Request, http.ResponseWriter) {
ctx := util.ContextWithStream(context.Background(), url)
request, _ := http.NewRequestWithContext(ctx, method, url.String(), nil)
recorder := httptest.NewRecorder()
return request, recorder
}
func withAutoInstrumentation(sut func(w http.ResponseWriter, r *http.Request)) http.Handler {
return otelhttp.NewHandler(http.HandlerFunc(sut), "SystemUnderTest")
}
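These tests follow the usual OpenTelemetry SDK testing pattern: tracetest.NewInMemoryExporter captures spans in process, ForceFlush drains the batch processor, and the assertions run directly against exporter.GetSpans(), so no external collector is required.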

View File

@ -8,13 +8,16 @@ import (
"flag"
"fmt"
"net/http"
_ "net/http/pprof"
"os"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/go-logr/logr"
"github.com/prometheus/client_golang/prometheus/promhttp"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"golang.org/x/exp/maps"
"golang.org/x/sync/errgroup"
k8sinformers "k8s.io/client-go/informers"
@ -26,6 +29,7 @@ import (
"github.com/kedacore/http-add-on/interceptor/handler"
"github.com/kedacore/http-add-on/interceptor/metrics"
"github.com/kedacore/http-add-on/interceptor/middleware"
"github.com/kedacore/http-add-on/interceptor/tracing"
clientset "github.com/kedacore/http-add-on/operator/generated/clientset/versioned"
informers "github.com/kedacore/http-add-on/operator/generated/informers/externalversions"
"github.com/kedacore/http-add-on/pkg/build"
@ -46,9 +50,11 @@ var (
// +kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch
func main() {
defer os.Exit(1)
timeoutCfg := config.MustParseTimeouts()
servingCfg := config.MustParseServing()
metricsCfg := config.MustParseMetrics()
tracingCfg := config.MustParseTracing()
opts := zap.Options{
Development: true,
@ -60,7 +66,7 @@ func main() {
if err := config.Validate(servingCfg, *timeoutCfg, ctrl.Log); err != nil {
setupLog.Error(err, "invalid configuration")
os.Exit(1)
runtime.Goexit()
}
setupLog.Info(
@ -76,6 +82,7 @@ func main() {
proxyPort := servingCfg.ProxyPort
adminPort := servingCfg.AdminPort
proxyTLSEnabled := servingCfg.ProxyTLSEnabled
profilingAddr := servingCfg.ProfilingAddr
// setup the configured metrics collectors
metrics.NewMetricsCollectors(metricsCfg)
@ -85,7 +92,7 @@ func main() {
cl, err := kubernetes.NewForConfig(cfg)
if err != nil {
setupLog.Error(err, "creating new Kubernetes ClientSet")
os.Exit(1)
runtime.Goexit()
}
k8sSharedInformerFactory := k8sinformers.NewSharedInformerFactory(cl, time.Millisecond*time.Duration(servingCfg.EndpointsCachePollIntervalMS))
@ -93,14 +100,14 @@ func main() {
endpointsCache := k8s.NewInformerBackedEndpointsCache(ctrl.Log, cl, time.Millisecond*time.Duration(servingCfg.EndpointsCachePollIntervalMS))
if err != nil {
setupLog.Error(err, "creating new endpoints cache")
os.Exit(1)
runtime.Goexit()
}
waitFunc := newWorkloadReplicasForwardWaitFunc(ctrl.Log, endpointsCache)
httpCl, err := clientset.NewForConfig(cfg)
if err != nil {
setupLog.Error(err, "creating new HTTP ClientSet")
os.Exit(1)
runtime.Goexit()
}
queues := queue.NewMemory()
@ -109,7 +116,7 @@ func main() {
routingTable, err := routing.NewTable(sharedInformerFactory, servingCfg.WatchNamespace, queues)
if err != nil {
setupLog.Error(err, "fetching routing table")
os.Exit(1)
runtime.Goexit()
}
setupLog.Info("Interceptor starting")
@ -119,6 +126,18 @@ func main() {
eg, ctx := errgroup.WithContext(ctx)
if tracingCfg.Enabled {
shutdown, err := tracing.SetupOTelSDK(ctx, tracingCfg)
if err != nil {
setupLog.Error(err, "Error setting up tracer")
}
defer func() {
err = errors.Join(err, shutdown(context.Background()))
}()
}
// start the endpoints cache updater
eg.Go(func() error {
setupLog.Info("starting the endpoints cache")
@ -173,13 +192,13 @@ func main() {
// start a proxy server with TLS
if proxyTLSEnabled {
eg.Go(func() error {
proxyTLSConfig := map[string]string{"certificatePath": servingCfg.TLSCertPath, "keyPath": servingCfg.TLSKeyPath, "certstorePaths": servingCfg.TLSCertStorePaths}
proxyTLSConfig := map[string]interface{}{"certificatePath": servingCfg.TLSCertPath, "keyPath": servingCfg.TLSKeyPath, "certstorePaths": servingCfg.TLSCertStorePaths, "skipVerify": servingCfg.TLSSkipVerify}
proxyTLSPort := servingCfg.TLSPort
k8sSharedInformerFactory.WaitForCacheSync(ctx.Done())
setupLog.Info("starting the proxy server with TLS enabled", "port", proxyTLSPort)
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyTLSPort, proxyTLSEnabled, proxyTLSConfig); !util.IsIgnoredErr(err) {
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyTLSPort, proxyTLSEnabled, proxyTLSConfig, tracingCfg); !util.IsIgnoredErr(err) {
setupLog.Error(err, "tls proxy server failed")
return err
}
@ -193,7 +212,7 @@ func main() {
setupLog.Info("starting the proxy server with TLS disabled", "port", proxyPort)
k8sSharedInformerFactory.WaitForCacheSync(ctx.Done())
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyPort, false, nil); !util.IsIgnoredErr(err) {
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyPort, false, nil, tracingCfg); !util.IsIgnoredErr(err) {
setupLog.Error(err, "proxy server failed")
return err
}
@ -201,11 +220,18 @@ func main() {
return nil
})
if len(profilingAddr) > 0 {
eg.Go(func() error {
setupLog.Info("enabling pprof for profiling", "address", profilingAddr)
return http.ListenAndServe(profilingAddr, nil)
})
}
build.PrintComponentInfo(ctrl.Log, "Interceptor")
if err := eg.Wait(); err != nil && !errors.Is(err, context.Canceled) {
setupLog.Error(err, "fatal error")
os.Exit(1)
runtime.Goexit()
}
setupLog.Info("Bye!")
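With PROFILING_BIND_ADDRESS set (for example to localhost:6060), the blank net/http/pprof import above registers its handlers on the default mux that ListenAndServe serves, so profiles can be pulled with the standard tooling, e.g. go tool pprof http://localhost:6060/debug/pprof/heap.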
@ -282,12 +308,15 @@ func defaultCertPool(logger logr.Logger) *x509.CertPool {
// getTLSConfig creates a TLS config from KEDA_HTTP_PROXY_TLS_CERT_PATH, KEDA_HTTP_PROXY_TLS_KEY_PATH and KEDA_HTTP_PROXY_TLS_CERT_STORE_PATHS
// The matching between request and certificate is performed by comparing TLS/SNI server name with x509 SANs
func getTLSConfig(tlsConfig map[string]string, logger logr.Logger) (*tls.Config, error) {
certPath := tlsConfig["certificatePath"]
keyPath := tlsConfig["keyPath"]
certStorePaths := tlsConfig["certstorePaths"]
func getTLSConfig(tlsConfig map[string]interface{}, logger logr.Logger) (*tls.Config, error) {
certPath, _ := tlsConfig["certificatePath"].(string)
keyPath, _ := tlsConfig["keyPath"].(string)
certStorePaths, _ := tlsConfig["certstorePaths"].(string)
insecureSkipVerify, _ := tlsConfig["skipVerify"].(bool)
servingTLS := &tls.Config{
RootCAs: defaultCertPool(logger),
RootCAs: defaultCertPool(logger),
InsecureSkipVerify: insecureSkipVerify,
}
var defaultCert *tls.Certificate
@ -378,7 +407,8 @@ func runProxyServer(
timeouts *config.Timeouts,
port int,
tlsEnabled bool,
tlsConfig map[string]string,
tlsConfig map[string]interface{},
tracingConfig *config.Tracing,
) error {
dialer := kedanet.NewNetDialer(timeouts.Connect, timeouts.KeepAlive)
dialContextFunc := kedanet.DialContextWithRetry(dialer, timeouts.DefaultBackoff())
@ -403,6 +433,7 @@ func runProxyServer(
if tlsCfg != nil {
forwardingTLSCfg.RootCAs = tlsCfg.RootCAs
forwardingTLSCfg.Certificates = tlsCfg.Certificates
forwardingTLSCfg.InsecureSkipVerify = tlsCfg.InsecureSkipVerify
}
upstreamHandler = newForwardingHandler(
@ -411,6 +442,7 @@ func runProxyServer(
waitFunc,
newForwardingConfigFromTimeouts(timeouts),
forwardingTLSCfg,
tracingConfig,
)
upstreamHandler = middleware.NewCountingMiddleware(
q,
@ -425,6 +457,11 @@ func runProxyServer(
svcCache,
tlsEnabled,
)
if tracingConfig.Enabled {
rootHandler = otelhttp.NewHandler(rootHandler, "keda-http-interceptor")
}
rootHandler = middleware.NewLogging(
logger,
rootHandler,
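Note the wrapping order: otelhttp wraps the routing handler, so the server span covers route resolution and the upstream round trip, while the logging middleware is layered outside the instrumented handler.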

View File

@ -16,6 +16,7 @@ import (
"golang.org/x/sync/errgroup"
"github.com/kedacore/http-add-on/interceptor/config"
"github.com/kedacore/http-add-on/interceptor/tracing"
"github.com/kedacore/http-add-on/pkg/k8s"
kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/kedacore/http-add-on/pkg/queue"
@ -71,6 +72,15 @@ func TestRunProxyServerCountMiddleware(t *testing.T) {
<-waiterCh
return false, nil
}
tracingCfg := config.Tracing{Enabled: true, Exporter: "otlphttp"}
_, err = tracing.SetupOTelSDK(ctx, &tracingCfg)
if err != nil {
fmt.Println(err, "Error setting up tracer")
}
g.Go(func() error {
return runProxyServer(
ctx,
@ -82,7 +92,8 @@ func TestRunProxyServerCountMiddleware(t *testing.T) {
timeouts,
port,
false,
map[string]string{},
map[string]interface{}{},
&tracingCfg,
)
})
// wait for server to start
@ -111,7 +122,11 @@ func TestRunProxyServerCountMiddleware(t *testing.T) {
resp.StatusCode,
)
}
if resp.Header.Get("X-KEDA-HTTP-Cold-Start") != falseStr {
if _, ok := resp.Header["Traceparent"]; !ok {
return fmt.Errorf("expected Traceparent header to exist, but the header wasn't found")
}
if resp.Header.Get("X-KEDA-HTTP-Cold-Start") != "false" {
return fmt.Errorf("expected X-KEDA-HTTP-Cold-Start false, but got %s", resp.Header.Get("X-KEDA-HTTP-Cold-Start"))
}
return nil
@ -204,6 +219,7 @@ func TestRunProxyServerWithTLSCountMiddleware(t *testing.T) {
<-waiterCh
return false, nil
}
tracingCfg := config.Tracing{Enabled: true, Exporter: "otlphttp"}
g.Go(func() error {
return runProxyServer(
@ -216,7 +232,8 @@ func TestRunProxyServerWithTLSCountMiddleware(t *testing.T) {
timeouts,
port,
true,
map[string]string{"certificatePath": "../certs/tls.crt", "keyPath": "../certs/tls.key"},
map[string]interface{}{"certificatePath": "../certs/tls.crt", "keyPath": "../certs/tls.key", "skipVerify": true},
&tracingCfg,
)
})
@ -352,6 +369,8 @@ func TestRunProxyServerWithMultipleCertsTLSCountMiddleware(t *testing.T) {
return false, nil
}
tracingCfg := config.Tracing{Enabled: true, Exporter: "otlphttp"}
g.Go(func() error {
return runProxyServer(
ctx,
@ -363,7 +382,8 @@ func TestRunProxyServerWithMultipleCertsTLSCountMiddleware(t *testing.T) {
timeouts,
port,
true,
map[string]string{"certstorePaths": "../certs"},
map[string]interface{}{"certstorePaths": "../certs"},
&tracingCfg,
)
})

View File

@ -24,8 +24,7 @@ func TestPromRequestCountMetric(t *testing.T) {
otel_scope_info{otel_scope_name="keda-interceptor-proxy",otel_scope_version=""} 1
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="interceptor-proxy",service_version="main"} 1
target_info{"service.name"="interceptor-proxy","service.version"="main"} 1
`
expectedOutputReader := strings.NewReader(expectedOutput)
testPrometheus.RecordRequestCount("post", "/test", 500, "test-host")
@ -47,8 +46,7 @@ func TestPromPendingRequestCountMetric(t *testing.T) {
otel_scope_info{otel_scope_name="keda-interceptor-proxy",otel_scope_version=""} 1
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="interceptor-proxy",service_version="main"} 1
target_info{"service.name"="interceptor-proxy","service.version"="main"} 1
`
expectedOutputReader := strings.NewReader(expectedOutput)
testPrometheus.RecordPendingRequestCount("test-host", 10)
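The expected target_info line now carries quoted, dotted label names (service.name, service.version); this presumably tracks the bumped OTel Prometheus exporter in this change set, which emits resource attributes using Prometheus' UTF-8 name support.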

View File

@ -57,7 +57,7 @@ func (rm *Routing) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
r = r.WithContext(util.ContextWithHTTPSO(r.Context(), httpso))
stream, err := rm.streamFromHTTPSO(r.Context(), httpso)
stream, err := rm.streamFromHTTPSO(r.Context(), httpso, httpso.Spec.ScaleTargetRef)
if err != nil {
sh := handler.NewStatic(http.StatusInternalServerError, err)
sh.ServeHTTP(w, r)
@ -66,37 +66,53 @@ func (rm *Routing) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
r = r.WithContext(util.ContextWithStream(r.Context(), stream))
if httpso.Spec.ColdStartTimeoutFailoverRef != nil {
failoverStream, err := rm.streamFromHTTPSO(r.Context(), httpso, httpso.Spec.ColdStartTimeoutFailoverRef)
if err != nil {
sh := handler.NewStatic(http.StatusInternalServerError, err)
sh.ServeHTTP(w, r)
return
}
r = r.WithContext(util.ContextWithFailoverStream(r.Context(), failoverStream))
}
rm.upstreamHandler.ServeHTTP(w, r)
}
func (rm *Routing) getPort(ctx context.Context, httpso *httpv1alpha1.HTTPScaledObject) (int32, error) {
if httpso.Spec.ScaleTargetRef.Port != 0 {
return httpso.Spec.ScaleTargetRef.Port, nil
func (rm *Routing) getPort(ctx context.Context, httpso *httpv1alpha1.HTTPScaledObject, reference httpv1alpha1.Ref) (int32, error) {
var (
port = reference.GetPort()
portName = reference.GetPortName()
serviceName = reference.GetServiceName()
)
if port != 0 {
return port, nil
}
if httpso.Spec.ScaleTargetRef.PortName == "" {
if portName == "" {
return 0, fmt.Errorf(`must specify either "port" or "portName"`)
}
svc, err := rm.svcCache.Get(ctx, httpso.GetNamespace(), httpso.Spec.ScaleTargetRef.Service)
svc, err := rm.svcCache.Get(ctx, httpso.GetNamespace(), serviceName)
if err != nil {
return 0, fmt.Errorf("failed to get Service: %w", err)
}
for _, port := range svc.Spec.Ports {
if port.Name == httpso.Spec.ScaleTargetRef.PortName {
if port.Name == portName {
return port.Port, nil
}
}
return 0, fmt.Errorf("portName %q not found in Service", httpso.Spec.ScaleTargetRef.PortName)
return 0, fmt.Errorf("portName %q not found in Service", portName)
}
func (rm *Routing) streamFromHTTPSO(ctx context.Context, httpso *httpv1alpha1.HTTPScaledObject) (*url.URL, error) {
port, err := rm.getPort(ctx, httpso)
func (rm *Routing) streamFromHTTPSO(ctx context.Context, httpso *httpv1alpha1.HTTPScaledObject, reference httpv1alpha1.Ref) (*url.URL, error) {
port, err := rm.getPort(ctx, httpso, reference)
if err != nil {
return nil, fmt.Errorf("failed to get port: %w", err)
}
if rm.tlsEnabled {
return url.Parse(fmt.Sprintf(
"https://%s.%s:%d",
httpso.Spec.ScaleTargetRef.Service,
reference.GetServiceName(),
httpso.GetNamespace(),
port,
))
@ -104,7 +120,7 @@ func (rm *Routing) streamFromHTTPSO(ctx context.Context, httpso *httpv1alpha1.HT
//goland:noinspection HttpUrlsUsage
return url.Parse(fmt.Sprintf(
"http://%s.%s:%d",
httpso.Spec.ScaleTargetRef.Service,
reference.GetServiceName(),
httpso.GetNamespace(),
port,
))

View File

@ -9,6 +9,7 @@ import (
"time"
"github.com/go-logr/logr"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"github.com/kedacore/http-add-on/interceptor/config"
"github.com/kedacore/http-add-on/interceptor/handler"
@ -50,40 +51,66 @@ func newForwardingHandler(
waitFunc forwardWaitFunc,
fwdCfg forwardingConfig,
tlsCfg *tls.Config,
tracingCfg *config.Tracing,
) http.Handler {
roundTripper := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: dialCtxFunc,
ForceAttemptHTTP2: fwdCfg.forceAttemptHTTP2,
MaxIdleConns: fwdCfg.maxIdleConns,
IdleConnTimeout: fwdCfg.idleConnTimeout,
TLSHandshakeTimeout: fwdCfg.tlsHandshakeTimeout,
ExpectContinueTimeout: fwdCfg.expectContinueTimeout,
ResponseHeaderTimeout: fwdCfg.respHeaderTimeout,
TLSClientConfig: tlsCfg,
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var uh *handler.Upstream
ctx := r.Context()
httpso := util.HTTPSOFromContext(ctx)
hasFailover := httpso.Spec.ColdStartTimeoutFailoverRef != nil
waitFuncCtx, done := context.WithTimeout(r.Context(), fwdCfg.waitTimeout)
conditionWaitTimeout := fwdCfg.waitTimeout
roundTripper := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: dialCtxFunc,
ForceAttemptHTTP2: fwdCfg.forceAttemptHTTP2,
MaxIdleConns: fwdCfg.maxIdleConns,
IdleConnTimeout: fwdCfg.idleConnTimeout,
TLSHandshakeTimeout: fwdCfg.tlsHandshakeTimeout,
ExpectContinueTimeout: fwdCfg.expectContinueTimeout,
ResponseHeaderTimeout: fwdCfg.respHeaderTimeout,
TLSClientConfig: tlsCfg,
}
if httpso.Spec.Timeouts != nil {
if httpso.Spec.Timeouts.ConditionWait.Duration > 0 {
conditionWaitTimeout = httpso.Spec.Timeouts.ConditionWait.Duration
}
if httpso.Spec.Timeouts.ResponseHeader.Duration > 0 {
roundTripper.ResponseHeaderTimeout = httpso.Spec.Timeouts.ResponseHeader.Duration
}
}
if hasFailover && httpso.Spec.ColdStartTimeoutFailoverRef.TimeoutSeconds > 0 {
conditionWaitTimeout = time.Duration(httpso.Spec.ColdStartTimeoutFailoverRef.TimeoutSeconds) * time.Second
}
waitFuncCtx, done := context.WithTimeout(ctx, conditionWaitTimeout)
defer done()
isColdStart, err := waitFunc(
waitFuncCtx,
httpso.GetNamespace(),
httpso.Spec.ScaleTargetRef.Service,
)
if err != nil {
if err != nil && !hasFailover {
lggr.Error(err, "wait function failed, not forwarding request")
w.WriteHeader(http.StatusBadGateway)
if _, err := w.Write([]byte(fmt.Sprintf("error on backend (%s)", err))); err != nil {
if _, err := fmt.Fprintf(w, "error on backend (%s)", err); err != nil {
lggr.Error(err, "could not write error response to client")
}
return
}
w.Header().Add("X-KEDA-HTTP-Cold-Start", strconv.FormatBool(isColdStart))
r.Header.Add("X-KEDA-HTTP-Cold-Start-Ref-Name", httpso.Spec.ScaleTargetRef.Name)
r.Header.Add("X-KEDA-HTTP-Cold-Start-Ref-Namespace", httpso.Namespace)
uh := handler.NewUpstream(roundTripper)
shouldFailover := hasFailover && err != nil
if tracingCfg.Enabled {
uh = handler.NewUpstream(otelhttp.NewTransport(roundTripper), tracingCfg, shouldFailover)
} else {
uh = handler.NewUpstream(roundTripper, &config.Tracing{}, shouldFailover)
}
uh.ServeHTTP(w, r)
})
}
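A hedged sketch of the new per-object override; the struct name HTTPScaledObjectTimeouts and the metav1.Duration field types are assumptions, since the diff only shows the field accesses. When set, these values replace the interceptor-wide waitTimeout and ResponseHeaderTimeout above:

httpso := &httpv1alpha1.HTTPScaledObject{
	Spec: httpv1alpha1.HTTPScaledObjectSpec{
		ScaleTargetRef: httpv1alpha1.ScaleTargetRef{Service: "testsvc", Port: 8080},
		// Hypothetical type name; only ConditionWait and ResponseHeader
		// appear in this diff.
		Timeouts: &httpv1alpha1.HTTPScaledObjectTimeouts{
			ConditionWait:  metav1.Duration{Duration: 30 * time.Second},
			ResponseHeader: metav1.Duration{Duration: 10 * time.Second},
		},
	},
}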

View File

@ -21,6 +21,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/kedacore/http-add-on/interceptor/config"
"github.com/kedacore/http-add-on/interceptor/middleware"
"github.com/kedacore/http-add-on/pkg/k8s"
kedanet "github.com/kedacore/http-add-on/pkg/net"
@ -308,7 +309,8 @@ func newHarness(
waitTimeout: activeEndpointsTimeout,
respHeaderTimeout: time.Second,
},
&tls.Config{}),
&tls.Config{},
&config.Tracing{}),
svcCache,
false,
)

View File

@ -4,6 +4,7 @@ import (
"context"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"log"
"net/http"
@ -78,6 +79,7 @@ func TestImmediatelySuccessfulProxy(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
@ -129,6 +131,7 @@ func TestImmediatelySuccessfulProxyTLS(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&TestTLSConfig,
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
@ -149,6 +152,76 @@ func TestImmediatelySuccessfulProxyTLS(t *testing.T) {
r.Equal("test response", res.Body.String())
}
// the proxy should successfully forward a request to the failover target when the primary service is not reachable
func TestImmediatelySuccessfulFailoverProxy(t *testing.T) {
host := fmt.Sprintf("%s.testing", t.Name())
r := require.New(t)
initialStream, err := url.Parse("http://0.0.0.0:0")
r.NoError(err)
failoverHdl := kedanet.NewTestHTTPHandlerWrapper(
http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(200)
_, err := w.Write([]byte("test response"))
r.NoError(err)
}),
)
srv, failoverURL, err := kedanet.StartTestServer(failoverHdl)
r.NoError(err)
defer srv.Close()
failoverPort, err := strconv.Atoi(failoverURL.Port())
r.NoError(err)
timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
waitFunc := func(ctx context.Context, _ string, _ string) (bool, error) {
return false, errors.New("nothing")
}
hdl := newForwardingHandler(
logr.Discard(),
dialCtxFunc,
waitFunc,
forwardingConfig{
waitTimeout: 0,
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
r.NoError(err)
req = util.RequestWithHTTPSO(req,
&httpv1alpha1.HTTPScaledObject{
ObjectMeta: metav1.ObjectMeta{
Namespace: "@" + host,
},
Spec: httpv1alpha1.HTTPScaledObjectSpec{
ScaleTargetRef: httpv1alpha1.ScaleTargetRef{
Name: "testdepl",
Service: "testsvc",
Port: int32(456),
},
ColdStartTimeoutFailoverRef: &httpv1alpha1.ColdStartTimeoutFailoverRef{
Service: "testsvc",
Port: int32(failoverPort),
TimeoutSeconds: 30,
},
TargetPendingRequests: ptr.To[int32](123),
},
},
)
req = util.RequestWithStream(req, initialStream)
req = util.RequestWithFailoverStream(req, failoverURL)
req.Host = host
hdl.ServeHTTP(res, req)
r.Equal(200, res.Code, "expected response code 200")
r.Equal("test response", res.Body.String())
}
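This exercises the fallback path end to end: the wait function returns an error, newForwardingHandler sets shouldFailover, and Upstream.ServeHTTP swaps the primary stream for the failover stream that util.RequestWithFailoverStream stored on the request context.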
// the proxy should wait for a timeout and fail if there is no
// origin to which to connect
func TestWaitFailedConnection(t *testing.T) {
@ -174,6 +247,7 @@ func TestWaitFailedConnection(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
stream, err := url.Parse("http://0.0.0.0:0")
r.NoError(err)
@ -224,6 +298,7 @@ func TestWaitFailedConnectionTLS(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&TestTLSConfig,
&config.Tracing{},
)
stream, err := url.Parse("http://0.0.0.0:0")
r.NoError(err)
@ -275,6 +350,7 @@ func TestTimesOutOnWaitFunc(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
stream, err := url.Parse("http://1.1.1.1")
r.NoError(err)
@ -347,6 +423,7 @@ func TestTimesOutOnWaitFuncTLS(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&TestTLSConfig,
&config.Tracing{},
)
stream, err := url.Parse("http://1.1.1.1")
r.NoError(err)
@ -430,6 +507,7 @@ func TestWaitsForWaitFunc(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
@ -496,6 +574,7 @@ func TestWaitsForWaitFuncTLS(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&TestTLSConfig,
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
@ -566,6 +645,7 @@ func TestWaitHeaderTimeout(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&tls.Config{},
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)
@ -624,6 +704,7 @@ func TestWaitHeaderTimeoutTLS(t *testing.T) {
respHeaderTimeout: timeouts.ResponseHeader,
},
&TestTLSConfig,
&config.Tracing{},
)
const path = "/testfwd"
res, req, err := reqAndRes(path)

View File

@ -0,0 +1,102 @@
package tracing
import (
"context"
"errors"
"strings"
"go.opentelemetry.io/contrib/propagators/b3"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"github.com/kedacore/http-add-on/interceptor/config"
)
var serviceName = "keda-http-interceptor"
func SetupOTelSDK(ctx context.Context, tCfg *config.Tracing) (shutdown func(context.Context) error, err error) {
var shutdownFuncs []func(context.Context) error
// shutdown calls cleanup functions registered via shutdownFuncs.
// The errors from the calls are joined.
// Each registered cleanup will be invoked once.
shutdown = func(ctx context.Context) error {
var err error
for _, fn := range shutdownFuncs {
err = errors.Join(err, fn(ctx))
}
shutdownFuncs = nil
return err
}
handleErr := func(inErr error) {
err = errors.Join(inErr, shutdown(ctx))
}
res, err := newResource(serviceName)
if err != nil {
handleErr(err)
return
}
prop := NewPropagator()
otel.SetTextMapPropagator(prop)
tracerProvider, err := newTraceProvider(ctx, res, tCfg)
if err != nil {
handleErr(err)
return
}
shutdownFuncs = append(shutdownFuncs, tracerProvider.Shutdown)
otel.SetTracerProvider(tracerProvider)
return
}
func newResource(serviceName string) (*resource.Resource, error) {
return resource.Merge(resource.Default(),
resource.NewWithAttributes(semconv.SchemaURL,
semconv.ServiceName(serviceName),
))
}
func NewPropagator() propagation.TextMapPropagator {
return propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
b3.New(),
)
}
func newTraceProvider(ctx context.Context, res *resource.Resource, tCfg *config.Tracing) (*trace.TracerProvider, error) {
traceExporter, err := newExporter(ctx, tCfg)
if err != nil {
return nil, err
}
traceProvider := trace.NewTracerProvider(
trace.WithSampler(trace.AlwaysSample()),
trace.WithBatcher(traceExporter),
trace.WithResource(res),
)
return traceProvider, nil
}
func newExporter(ctx context.Context, tCfg *config.Tracing) (trace.SpanExporter, error) {
switch strings.ToLower(tCfg.Exporter) {
case "console":
return stdouttrace.New()
case "http/protobuf":
return otlptracehttp.New(ctx)
case "grpc":
return otlptracegrpc.New(ctx)
default:
return nil, errors.New("no valid tracing exporter defined")
}
}
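A minimal caller sketch for the tracing setup above, assuming the interceptor's config.Tracing struct with the Enabled and Exporter fields exercised in the config test below, and import paths inferred from the package names shown here (both assumptions, not taken from the interceptor's actual main):

package main

import (
	"context"
	"log"

	"github.com/kedacore/http-add-on/interceptor/config"
	"github.com/kedacore/http-add-on/interceptor/tracing"
)

func main() {
	ctx := context.Background()
	// field names come from the config test; the values are illustrative
	tCfg := &config.Tracing{Enabled: true, Exporter: "console"}
	if tCfg.Enabled {
		shutdown, err := tracing.SetupOTelSDK(ctx, tCfg)
		if err != nil {
			log.Fatalf("setting up OTel SDK: %v", err)
		}
		// flush and stop the exporters on exit; errors from all registered
		// cleanups are joined by the returned shutdown function
		defer func() {
			if err := shutdown(context.Background()); err != nil {
				log.Printf("tracing shutdown: %v", err)
			}
		}()
	}
	// ... start the interceptor's proxy and admin servers ...
}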

View File

@ -0,0 +1,18 @@
package tracing
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/kedacore/http-add-on/interceptor/config"
)
func TestTracingConfig(t *testing.T) {
tracingCfg := config.MustParseTracing()
tracingCfg.Enabled = true
// check the exporter default and the override are set correctly
assert.Equal(t, "console", tracingCfg.Exporter)
assert.True(t, tracingCfg.Enabled)
}

View File

@ -1,4 +1,4 @@
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.23.6 as builder
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.24.3 as builder
WORKDIR /workspace
COPY go.* .
RUN go mod download

View File

@ -20,6 +20,13 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +kubebuilder:object:generate=false
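// Ref abstracts the service reference shared by ScaleTargetRef and
// ColdStartTimeoutFailoverRef so routing code can resolve either one uniformly.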
type Ref interface {
GetServiceName() string
GetPort() int32
GetPortName() string
}
// ScaleTargetRef contains all the details about an HTTP application to scale and route to
type ScaleTargetRef struct {
// +optional
@ -36,6 +43,44 @@ type ScaleTargetRef struct {
PortName string `json:"portName,omitempty"`
}
func (s ScaleTargetRef) GetServiceName() string {
return s.Service
}
func (s ScaleTargetRef) GetPort() int32 {
return s.Port
}
func (s ScaleTargetRef) GetPortName() string {
return s.PortName
}
// ColdStartTimeoutFailoverRef contains the details of the failover service to route to when the cold start timeout elapses
type ColdStartTimeoutFailoverRef struct {
// The name of the service to route to
Service string `json:"service"`
// The port to route to
Port int32 `json:"port,omitempty"`
// The port to route to referenced by name
PortName string `json:"portName,omitempty"`
// The timeout in seconds to wait before routing to the failover service (Default 30)
// +kubebuilder:default=30
// +optional
TimeoutSeconds int32 `json:"timeoutSeconds,omitempty"`
}
func (s *ColdStartTimeoutFailoverRef) GetServiceName() string {
return s.Service
}
func (s *ColdStartTimeoutFailoverRef) GetPort() int32 {
return s.Port
}
func (s *ColdStartTimeoutFailoverRef) GetPortName() string {
return s.PortName
}
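// Both reference types satisfy Ref; an illustrative compile-time assertion
// (not part of the committed API, shown only to make the receiver kinds
// explicit) would be:
//
//	var (
//		_ Ref = ScaleTargetRef{}                    // value receivers
//		_ Ref = (*ColdStartTimeoutFailoverRef)(nil) // pointer receivers
//	)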
// ReplicaStruct contains the minimum and maximum amount of replicas to have in the deployment
type ReplicaStruct struct {
// Minimum amount of replicas to have in the deployment (Default 0)
@ -76,6 +121,17 @@ type RateMetricSpec struct {
Granularity metav1.Duration `json:"granularity" description:"Time granularity for rate calculation"`
}
// HTTPScaledObjectTimeoutsSpec defines timeouts that override the global ones
type HTTPScaledObjectTimeoutsSpec struct {
// How long to wait for the backing workload to have 1 or more replicas before connecting and sending the HTTP request (Default is set by the KEDA_CONDITION_WAIT_TIMEOUT environment variable)
// +optional
ConditionWait metav1.Duration `json:"conditionWait" description:"How long to wait for the backing workload to have 1 or more replicas before connecting and sending the HTTP request"`
// How long to wait between when the HTTP request is sent to the backing app and when response headers need to arrive (Default is set by the KEDA_RESPONSE_HEADER_TIMEOUT environment variable)
// +optional
ResponseHeader metav1.Duration `json:"responseHeader" description:"How long to wait between when the HTTP request is sent to the backing app and when response headers need to arrive"`
}
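// A hedged sketch of how a consumer could resolve the effective timeout,
// preferring the per-object override and falling back to the global default
// when the field is unset (the helper name and zero-value semantics are
// assumptions, not taken from the interceptor code):
//
//	func effectiveConditionWait(hso *HTTPScaledObject, global time.Duration) time.Duration {
//		if hso.Spec.Timeouts != nil && hso.Spec.Timeouts.ConditionWait.Duration > 0 {
//			return hso.Spec.Timeouts.ConditionWait.Duration
//		}
//		return global
//	}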
// HTTPScaledObjectSpec defines the desired state of HTTPScaledObject
type HTTPScaledObjectSpec struct {
// The hosts to route. All requests which the "Host" header
@ -93,6 +149,10 @@ type HTTPScaledObjectSpec struct {
// Including validation as a requirement to define either the PortName or the Port
// +kubebuilder:validation:XValidation:rule="has(self.portName) != has(self.port)",message="must define either the 'portName' or the 'port'"
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef"`
// (optional) The failover service to route HTTP requests to when the target workload does not become available within the cold start timeout
// +optional
// +kubebuilder:validation:XValidation:rule="has(self.portName) != has(self.port)",message="must define either the 'portName' or the 'port'"
ColdStartTimeoutFailoverRef *ColdStartTimeoutFailoverRef `json:"coldStartTimeoutFailoverRef,omitempty"`
// (optional) Replica information
// +optional
Replicas *ReplicaStruct `json:"replicas,omitempty"`
@ -108,6 +168,9 @@ type HTTPScaledObjectSpec struct {
// (optional) Configuration for the metric used for scaling
// +optional
ScalingMetric *ScalingMetricSpec `json:"scalingMetric,omitempty" description:"Configuration for the metric used for scaling. If empty 'concurrency' will be used"`
// (optional) Timeouts that override the global ones
// +optional
Timeouts *HTTPScaledObjectTimeoutsSpec `json:"timeouts,omitempty" description:"Timeouts that override the global ones"`
}
// HTTPScaledObjectStatus defines the observed state of HTTPScaledObject

View File

@ -24,6 +24,21 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ColdStartTimeoutFailoverRef) DeepCopyInto(out *ColdStartTimeoutFailoverRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ColdStartTimeoutFailoverRef.
func (in *ColdStartTimeoutFailoverRef) DeepCopy() *ColdStartTimeoutFailoverRef {
if in == nil {
return nil
}
out := new(ColdStartTimeoutFailoverRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ConcurrencyMetricSpec) DeepCopyInto(out *ConcurrencyMetricSpec) {
*out = *in
@ -146,6 +161,11 @@ func (in *HTTPScaledObjectSpec) DeepCopyInto(out *HTTPScaledObjectSpec) {
copy(*out, *in)
}
out.ScaleTargetRef = in.ScaleTargetRef
if in.ColdStartTimeoutFailoverRef != nil {
in, out := &in.ColdStartTimeoutFailoverRef, &out.ColdStartTimeoutFailoverRef
*out = new(ColdStartTimeoutFailoverRef)
**out = **in
}
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(ReplicaStruct)
@ -171,6 +191,11 @@ func (in *HTTPScaledObjectSpec) DeepCopyInto(out *HTTPScaledObjectSpec) {
*out = new(ScalingMetricSpec)
(*in).DeepCopyInto(*out)
}
if in.Timeouts != nil {
in, out := &in.Timeouts, &out.Timeouts
*out = new(HTTPScaledObjectTimeoutsSpec)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPScaledObjectSpec.
@ -203,6 +228,23 @@ func (in *HTTPScaledObjectStatus) DeepCopy() *HTTPScaledObjectStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPScaledObjectTimeoutsSpec) DeepCopyInto(out *HTTPScaledObjectTimeoutsSpec) {
*out = *in
out.ConditionWait = in.ConditionWait
out.ResponseHeader = in.ResponseHeader
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPScaledObjectTimeoutsSpec.
func (in *HTTPScaledObjectTimeoutsSpec) DeepCopy() *HTTPScaledObjectTimeoutsSpec {
if in == nil {
return nil
}
out := new(HTTPScaledObjectTimeoutsSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RateMetricSpec) DeepCopyInto(out *RateMetricSpec) {
*out = *in

View File

@ -60,7 +60,7 @@ func (r *HTTPScaledObjectReconciler) Reconcile(ctx context.Context, req ctrl.Req
logger.Info("Reconciliation start")
httpso := &httpv1alpha1.HTTPScaledObject{}
if err := r.Client.Get(ctx, req.NamespacedName, httpso); err != nil {
if err := r.Get(ctx, req.NamespacedName, httpso); err != nil {
if k8serrors.IsNotFound(err) {
// If the HTTPScaledObject wasn't found, it might have
// been deleted between the reconcile and the get.

View File

@ -67,12 +67,12 @@ func TestCreateOrUpdateScaledObject(t *testing.T) {
)
r.EqualValues(
testInfra.httpso.ObjectMeta.Labels,
testInfra.httpso.Labels,
metadata.Labels,
)
r.EqualValues(
testInfra.httpso.ObjectMeta.Annotations,
testInfra.httpso.Annotations,
metadata.Annotations,
)
@ -108,8 +108,8 @@ func TestCreateOrUpdateScaledObject(t *testing.T) {
}
*testInfra.httpso.Spec.Replicas.Min++
*testInfra.httpso.Spec.Replicas.Max++
testInfra.httpso.ObjectMeta.Labels["test"] = "test-label"
testInfra.httpso.ObjectMeta.Annotations["test"] = "test-annotation"
testInfra.httpso.Labels["test"] = "test-label"
testInfra.httpso.Annotations["test"] = "test-annotation"
r.NoError(reconciller.createOrUpdateScaledObject(
testInfra.ctx,
testInfra.cl,
@ -127,15 +127,14 @@ func TestCreateOrUpdateScaledObject(t *testing.T) {
)
r.NoError(err)
metadata = retSO.ObjectMeta
r.EqualValues(
testInfra.httpso.ObjectMeta.Labels,
metadata.Labels,
testInfra.httpso.Labels,
retSO.Labels,
)
r.EqualValues(
testInfra.httpso.ObjectMeta.Annotations,
metadata.Annotations,
testInfra.httpso.Annotations,
retSO.Annotations,
)
spec = retSO.Spec

View File

@ -57,11 +57,13 @@ func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
var profilingAddr string
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.StringVar(&profilingAddr, "profiling-bind-address", "", "The address the profiling endpoint binds to. Profiling is disabled if empty.")
opts := zap.Options{
Development: true,
}
@ -96,6 +98,7 @@ func main() {
Metrics: server.Options{
BindAddress: metricsAddr,
},
PprofBindAddress: profilingAddr,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "http-add-on.keda.sh",

View File

@ -57,7 +57,6 @@ func TestGetEndpoints(t *testing.T) {
addrLookup := map[string]*v1.EndpointAddress{}
for _, subset := range endpoints.Subsets {
for _, addr := range subset.Addresses {
addr := addr
key := fmt.Sprintf("http://%s:%s", addr.IP, svcPort)
addrLookup[key] = &addr
}

View File

@ -29,3 +29,14 @@ func NamespacedNameFromScaledObjectRef(sor *externalscaler.ScaledObjectRef) *typ
Name: sor.GetName(),
}
}
func NamespacedNameFromNameAndNamespace(name, namespace string) *types.NamespacedName {
if name == "" || namespace == "" {
return nil
}
return &types.NamespacedName{
Name: name,
Namespace: namespace,
}
}
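// Illustrative use, mirroring how the scaler derives the metric key from the
// linked HTTPScaledObject (nil is returned when either argument is empty):
//
//	nn := NamespacedNameFromNameAndNamespace("my-httpso", "default")
//	key := nn.String() // "default/my-httpso"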

View File

@ -29,7 +29,7 @@ func NewScaledObject(
cooldownPeriod *int32,
initialCooldownPeriod *int32,
) *kedav1alpha1.ScaledObject {
so := &kedav1alpha1.ScaledObject{
return &kedav1alpha1.ScaledObject{
TypeMeta: metav1.TypeMeta{
APIVersion: kedav1alpha1.SchemeGroupVersion.Identifier(),
Kind: ObjectKind(&kedav1alpha1.ScaledObject{}),
@ -62,10 +62,7 @@ func NewScaledObject(
},
},
},
InitialCooldownPeriod: initialCooldownPeriod,
},
}
if initialCooldownPeriod != nil {
so.Spec.InitialCooldownPeriod = *initialCooldownPeriod
}
return so
}

View File

@ -190,8 +190,6 @@ var _ = Describe("Table", func() {
defer cancel()
for _, httpso := range httpsoList.Items {
httpso := httpso
key := *k8s.NamespacedNameFromObject(&httpso)
t.httpScaledObjects[key] = &httpso
}
@ -216,8 +214,6 @@ var _ = Describe("Table", func() {
defer cancel()
for _, httpso := range httpsoList.Items {
httpso := httpso
key := *k8s.NamespacedNameFromObject(&httpso)
t.httpScaledObjects[key] = &httpso
}
@ -285,8 +281,6 @@ var _ = Describe("Table", func() {
It("returns new memory based on HTTPSOs", func() {
for _, httpso := range httpsoList.Items {
httpso := httpso
key := *k8s.NamespacedNameFromObject(&httpso)
t.httpScaledObjects[key] = &httpso
}

View File

@ -44,7 +44,7 @@ func (tm tableMemory) Remember(httpso *httpv1alpha1.HTTPScaledObject) TableMemor
newStore, oldHTTPSO, _ := store.Insert(key, httpso)
// oldest HTTPScaledObject has precedence
if oldHTTPSO != nil && httpso.GetCreationTimestamp().Time.After(oldHTTPSO.GetCreationTimestamp().Time) {
if oldHTTPSO != nil && httpso.GetCreationTimestamp().After(oldHTTPSO.GetCreationTimestamp().Time) {
continue
}

View File

@ -257,21 +257,21 @@ var _ = Describe("TableMemory", func() {
t0 := time.Now()
httpso00 := *httpso0.DeepCopy()
httpso00.ObjectMeta.CreationTimestamp = metav1.NewTime(t0)
httpso00.CreationTimestamp = metav1.NewTime(t0)
tm = tm.Remember(&httpso00).(tableMemory)
httpso01 := *httpso0.DeepCopy()
httpso01.ObjectMeta.Name += nameSuffix
httpso01.ObjectMeta.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
httpso01.Name += nameSuffix
httpso01.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
tm = tm.Remember(&httpso01).(tableMemory)
httpso10 := *httpso1.DeepCopy()
httpso10.ObjectMeta.CreationTimestamp = metav1.NewTime(t0)
httpso10.CreationTimestamp = metav1.NewTime(t0)
tm = tm.Remember(&httpso10).(tableMemory)
httpso11 := *httpso1.DeepCopy()
httpso11.ObjectMeta.Name += nameSuffix
httpso11.ObjectMeta.CreationTimestamp = metav1.NewTime(t0.Add(+time.Minute))
httpso11.Name += nameSuffix
httpso11.CreationTimestamp = metav1.NewTime(t0.Add(+time.Minute))
tm = tm.Remember(&httpso11).(tableMemory)
assertIndex(tm, &httpso00, &httpso00)
@ -341,21 +341,21 @@ var _ = Describe("TableMemory", func() {
t0 := time.Now()
httpso00 := *httpso0.DeepCopy()
httpso00.ObjectMeta.CreationTimestamp = metav1.NewTime(t0)
httpso00.CreationTimestamp = metav1.NewTime(t0)
tm = insertTrees(tm, &httpso00)
httpso01 := *httpso0.DeepCopy()
httpso01.ObjectMeta.Name += nameSuffix
httpso01.ObjectMeta.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
httpso01.Name += nameSuffix
httpso01.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
tm = insertTrees(tm, &httpso01)
httpso10 := *httpso1.DeepCopy()
httpso10.ObjectMeta.Name += nameSuffix
httpso10.ObjectMeta.CreationTimestamp = metav1.NewTime(t0)
httpso10.Name += nameSuffix
httpso10.CreationTimestamp = metav1.NewTime(t0)
tm = insertTrees(tm, &httpso10)
httpso11 := *httpso1.DeepCopy()
httpso11.ObjectMeta.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
httpso11.CreationTimestamp = metav1.NewTime(t0.Add(-time.Minute))
tm = insertTrees(tm, &httpso11)
tm = tm.Forget(&httpso0NamespacedName).(tableMemory)
@ -484,8 +484,6 @@ var _ = Describe("TableMemory", func() {
store: iradix.New[*httpv1alpha1.HTTPScaledObject](),
}
for _, httpso := range httpsoList.Items {
httpso := httpso
tm = insertTrees(tm, &httpso)
}

View File

@ -15,6 +15,7 @@ const (
ckLogger contextKey = iota
ckHTTPSO
ckStream
ckFailoverStream
)
func ContextWithLogger(ctx context.Context, logger logr.Logger) context.Context {
@ -43,3 +44,12 @@ func StreamFromContext(ctx context.Context) *url.URL {
cv, _ := ctx.Value(ckStream).(*url.URL)
return cv
}
func ContextWithFailoverStream(ctx context.Context, stream *url.URL) context.Context {
return context.WithValue(ctx, ckFailoverStream, stream)
}
func FailoverStreamFromContext(ctx context.Context) *url.URL {
cv, _ := ctx.Value(ckFailoverStream).(*url.URL)
return cv
}

View File

@ -36,3 +36,10 @@ func RequestWithStream(r *http.Request, stream *url.URL) *http.Request {
return r.WithContext(ctx)
}
func RequestWithFailoverStream(r *http.Request, stream *url.URL) *http.Request {
ctx := r.Context()
ctx = ContextWithFailoverStream(ctx, stream)
return r.WithContext(ctx)
}
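// Illustrative round trip (the failover URL is a placeholder):
//
//	u, _ := url.Parse("http://failover-svc:8080")
//	r = RequestWithFailoverStream(r, u)
//	_ = FailoverStreamFromContext(r.Context()) // returns u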

View File

@ -1,4 +1,4 @@
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.23.6 as builder
FROM --platform=${BUILDPLATFORM} ghcr.io/kedacore/keda-tools:1.24.3 as builder
WORKDIR /workspace
COPY go.* .
RUN go mod download

View File

@ -34,6 +34,8 @@ type config struct {
DeploymentCacheRsyncPeriod time.Duration `envconfig:"KEDA_HTTP_SCALER_DEPLOYMENT_INFORMER_RSYNC_PERIOD" default:"60m"`
// QueueTickDuration is the duration between queue requests
QueueTickDuration time.Duration `envconfig:"KEDA_HTTP_QUEUE_TICK_DURATION" default:"500ms"`
// ProfilingAddr is the host:port address on which pprof is exposed; profiling is disabled when empty
ProfilingAddr string `envconfig:"PROFILING_BIND_ADDRESS" default:""`
}
func mustParseConfig() *config {

View File

@ -45,12 +45,7 @@ type impl struct {
externalscaler.UnimplementedExternalScalerServer
}
func newImpl(
lggr logr.Logger,
pinger *queuePinger,
httpsoInformer informershttpv1alpha1.HTTPScaledObjectInformer,
defaultTargetMetric int64,
) *impl {
func newImpl(lggr logr.Logger, pinger *queuePinger, httpsoInformer informershttpv1alpha1.HTTPScaledObjectInformer, defaultTargetMetric int64) *impl {
return &impl{
lggr: lggr,
pinger: pinger,
@ -63,10 +58,7 @@ func (e *impl) Ping(context.Context, *emptypb.Empty) (*emptypb.Empty, error) {
return &emptypb.Empty{}, nil
}
func (e *impl) IsActive(
ctx context.Context,
sor *externalscaler.ScaledObjectRef,
) (*externalscaler.IsActiveResponse, error) {
func (e *impl) IsActive(ctx context.Context, sor *externalscaler.ScaledObjectRef) (*externalscaler.IsActiveResponse, error) {
lggr := e.lggr.WithName("IsActive")
gmr, err := e.GetMetrics(ctx, &externalscaler.GetMetricsRequest{
@ -91,10 +83,7 @@ func (e *impl) IsActive(
return res, nil
}
func (e *impl) StreamIsActive(
scaledObject *externalscaler.ScaledObjectRef,
server externalscaler.ExternalScaler_StreamIsActiveServer,
) error {
func (e *impl) StreamIsActive(scaledObject *externalscaler.ScaledObjectRef, server externalscaler.ExternalScaler_StreamIsActiveServer) error {
// this function communicates with KEDA via the 'server' parameter.
// we call server.Send (below) every streamInterval, which tells it to immediately
// ping our IsActive RPC
@ -107,46 +96,47 @@ func (e *impl) StreamIsActive(
case <-ticker.C:
active, err := e.IsActive(server.Context(), scaledObject)
if err != nil {
e.lggr.Error(
err,
"error getting active status in stream",
)
e.lggr.Error(err, "error getting active status in stream")
return err
}
err = server.Send(&externalscaler.IsActiveResponse{
Result: active.Result,
})
if err != nil {
e.lggr.Error(
err,
"error sending the active result in stream",
)
e.lggr.Error(err, "error sending the active result in stream")
return err
}
}
}
}
func (e *impl) GetMetricSpec(
_ context.Context,
sor *externalscaler.ScaledObjectRef,
) (*externalscaler.GetMetricSpecResponse, error) {
func (e *impl) GetMetricSpec(_ context.Context, sor *externalscaler.ScaledObjectRef) (*externalscaler.GetMetricSpecResponse, error) {
lggr := e.lggr.WithName("GetMetricSpec")
namespacedName := k8s.NamespacedNameFromScaledObjectRef(sor)
metricName := MetricName(namespacedName)
httpso, err := e.httpsoInformer.Lister().HTTPScaledObjects(sor.Namespace).Get(sor.Name)
if err != nil {
if scalerMetadata := sor.GetScalerMetadata(); scalerMetadata != nil {
scalerMetadata := sor.GetScalerMetadata()
httpScaledObjectName, ok := scalerMetadata[k8s.HTTPScaledObjectKey]
if !ok {
if scalerMetadata != nil {
if interceptorTargetPendingRequests, ok := scalerMetadata[keyInterceptorTargetPendingRequests]; ok {
// generate the metric name for the ScaledObject targeting the interceptor
metricName := MetricName(k8s.NamespacedNameFromScaledObjectRef(sor))
return e.interceptorMetricSpec(metricName, interceptorTargetPendingRequests)
}
}
lggr.Error(err, "unable to get HTTPScaledObject", "name", sor.Name, "namespace", sor.Namespace)
err := fmt.Errorf("unable to get HTTPScaledObject reference")
lggr.Error(err, "unable to get the linked HTTPScaledObject for ScaledObject", "name", sor.Name, "namespace", sor.Namespace, "httpScaledObjectName", httpScaledObjectName)
return nil, err
}
httpso, err := e.httpsoInformer.Lister().HTTPScaledObjects(sor.Namespace).Get(httpScaledObjectName)
if err != nil {
lggr.Error(err, "unable to get HTTPScaledObject", "name", sor.Name, "namespace", sor.Namespace, "httpScaledObjectName", httpScaledObjectName)
return nil, err
}
// generate the metric name for the HTTPScaledObject
metricName := MetricName(k8s.NamespacedNameFromNameAndNamespace(httpScaledObjectName, sor.Namespace))
targetValue := int64(ptr.Deref(httpso.Spec.TargetPendingRequests, 100))
if httpso.Spec.ScalingMetric != nil {
@ -189,41 +179,40 @@ func (e *impl) interceptorMetricSpec(metricName string, interceptorTargetPending
return res, nil
}
func (e *impl) GetMetrics(
_ context.Context,
metricRequest *externalscaler.GetMetricsRequest,
) (*externalscaler.GetMetricsResponse, error) {
func (e *impl) GetMetrics(_ context.Context, metricRequest *externalscaler.GetMetricsRequest) (*externalscaler.GetMetricsResponse, error) {
lggr := e.lggr.WithName("GetMetrics")
sor := metricRequest.ScaledObjectRef
namespacedName := k8s.NamespacedNameFromScaledObjectRef(sor)
metricName := MetricName(namespacedName)
scalerMetadata := sor.GetScalerMetadata()
httpScaledObjectName, ok := scalerMetadata[k8s.HTTPScaledObjectKey]
if !ok {
if scalerMetadata := sor.GetScalerMetadata(); scalerMetadata != nil {
if scalerMetadata != nil {
if _, ok := scalerMetadata[keyInterceptorTargetPendingRequests]; ok {
// generate the metric name for the ScaledObject targeting the interceptor
metricName := MetricName(k8s.NamespacedNameFromScaledObjectRef(sor))
return e.interceptorMetrics(metricName)
}
}
err := fmt.Errorf("unable to get HTTPScaledObject reference")
lggr.Error(err, "unable to get the linked HTTPScaledObject for ScaledObject", "name", sor.Name, "namespace", sor.Namespace)
lggr.Error(err, "unable to get the linked HTTPScaledObject for ScaledObject", "name", sor.Name, "namespace", sor.Namespace, "httpScaledObjectName", httpScaledObjectName)
return nil, err
}
httpso, err := e.httpsoInformer.Lister().HTTPScaledObjects(sor.Namespace).Get(httpScaledObjectName)
if err != nil {
lggr.Error(err, "unable to get HTTPScaledObject", "name", httpScaledObjectName, "namespace", sor.Namespace)
lggr.Error(err, "unable to get HTTPScaledObject", "name", httpScaledObjectName, "namespace", sor.Namespace, "httpScaledObjectName", httpScaledObjectName)
return nil, err
}
// generate the metric name for the HTTPScaledObject
namespacedName := k8s.NamespacedNameFromNameAndNamespace(httpScaledObjectName, sor.Namespace)
metricName := MetricName(namespacedName)
key := namespacedName.String()
count := e.pinger.counts()[key]
var metricValue int
if httpso.Spec.ScalingMetric != nil &&
httpso.Spec.ScalingMetric.Rate != nil {
if httpso.Spec.ScalingMetric != nil && httpso.Spec.ScalingMetric.Rate != nil {
metricValue = int(math.Ceil(count.RPS))
lggr.V(1).Info(fmt.Sprintf("%d rps for %s", metricValue, httpso.GetName()))
} else {

View File

@ -45,7 +45,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -65,7 +65,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -88,7 +88,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -113,7 +113,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -138,7 +138,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -163,7 +163,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -188,7 +188,7 @@ func TestStreamIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -347,7 +347,7 @@ func TestIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -369,7 +369,7 @@ func TestIsActive(t *testing.T) {
setup: func(t *testing.T, qp *queuePinger) {
namespacedName := &types.NamespacedName{
Namespace: "default",
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -470,25 +470,31 @@ func TestGetMetricSpecTable(t *testing.T) {
name: "valid host as single host value in scaler metadata",
defaultTargetMetric: 0,
newInformer: func(t *testing.T, ctrl *gomock.Controller) *informersexternalversionshttpv1alpha1mock.MockHTTPScaledObjectInformer {
informer, _, namespaceLister := newMocks(ctrl, nil)
namespaceLister := listershttpv1alpha1mock.NewMockHTTPScaledObjectNamespaceLister(ctrl)
lister := listershttpv1alpha1mock.NewMockHTTPScaledObjectLister(ctrl)
informer := informersexternalversionshttpv1alpha1mock.NewMockHTTPScaledObjectInformer(ctrl)
informer.EXPECT().
Lister().
Return(lister).
AnyTimes()
lister.EXPECT().
HTTPScaledObjects(ns).
Return(namespaceLister).
AnyTimes()
httpso := &httpv1alpha1.HTTPScaledObject{
ObjectMeta: metav1.ObjectMeta{
Namespace: ns,
Name: t.Name(),
},
ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: validHTTPScaledObjectName},
Spec: httpv1alpha1.HTTPScaledObjectSpec{
ScaleTargetRef: httpv1alpha1.ScaleTargetRef{
Name: "testdepl",
Service: "testsrv",
Port: 8080,
},
ScaleTargetRef: httpv1alpha1.ScaleTargetRef{Name: "testdepl", Service: "testsrv", Port: 8080},
TargetPendingRequests: ptr.To[int32](123),
},
}
namespaceLister.EXPECT().
Get(httpso.GetName()).
Return(httpso, nil)
namespaceLister.EXPECT().
Get(validHTTPScaledObjectName).
Return(httpso, nil).
Times(1)
return informer
},
@ -497,28 +503,36 @@ func TestGetMetricSpecTable(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricSpecs))
r.Len(res.MetricSpecs, 1)
spec := res.MetricSpecs[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), spec.MetricName)
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: validHTTPScaledObjectName}), spec.MetricName)
r.Equal(int64(123), spec.TargetSize)
},
scalerMetadata: map[string]string{
k8s.HTTPScaledObjectKey: validHTTPScaledObjectName,
},
},
{
name: "valid hosts as multiple hosts value in scaler metadata",
defaultTargetMetric: 0,
newInformer: func(t *testing.T, ctrl *gomock.Controller) *informersexternalversionshttpv1alpha1mock.MockHTTPScaledObjectInformer {
informer, _, namespaceLister := newMocks(ctrl, nil)
namespaceLister := listershttpv1alpha1mock.NewMockHTTPScaledObjectNamespaceLister(ctrl)
lister := listershttpv1alpha1mock.NewMockHTTPScaledObjectLister(ctrl)
informer := informersexternalversionshttpv1alpha1mock.NewMockHTTPScaledObjectInformer(ctrl)
informer.EXPECT().
Lister().
Return(lister).
AnyTimes()
lister.EXPECT().
HTTPScaledObjects(ns).
Return(namespaceLister).
AnyTimes()
httpso := &httpv1alpha1.HTTPScaledObject{
ObjectMeta: metav1.ObjectMeta{
Namespace: ns,
Name: t.Name(),
},
ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: validHTTPScaledObjectName},
Spec: httpv1alpha1.HTTPScaledObjectSpec{
Hosts: []string{
"validHost1",
"validHost2",
},
Hosts: []string{"validHost1", "validHost2"},
ScaleTargetRef: httpv1alpha1.ScaleTargetRef{
Name: "testdepl",
Service: "testsrv",
@ -527,9 +541,11 @@ func TestGetMetricSpecTable(t *testing.T) {
TargetPendingRequests: ptr.To[int32](123),
},
}
namespaceLister.EXPECT().
Get(httpso.GetName()).
Return(httpso, nil)
namespaceLister.EXPECT().
Get(validHTTPScaledObjectName).
Return(httpso, nil).
Times(1)
return informer
},
@ -538,23 +554,31 @@ func TestGetMetricSpecTable(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricSpecs))
r.Len(res.MetricSpecs, 1)
spec := res.MetricSpecs[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), spec.MetricName)
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: validHTTPScaledObjectName}), spec.MetricName)
r.Equal(int64(123), spec.TargetSize)
},
scalerMetadata: map[string]string{
k8s.HTTPScaledObjectKey: validHTTPScaledObjectName,
},
},
{
name: "interceptor",
defaultTargetMetric: 0,
newInformer: func(t *testing.T, ctrl *gomock.Controller) *informersexternalversionshttpv1alpha1mock.MockHTTPScaledObjectInformer {
informer, _, namespaceLister := newMocks(ctrl, nil)
namespaceLister := listershttpv1alpha1mock.NewMockHTTPScaledObjectNamespaceLister(ctrl)
lister := listershttpv1alpha1mock.NewMockHTTPScaledObjectLister(ctrl)
informer := informersexternalversionshttpv1alpha1mock.NewMockHTTPScaledObjectInformer(ctrl)
namespaceLister.EXPECT().
Get(gomock.Any()).
DoAndReturn(func(name string) (*httpv1alpha1.HTTPScaledObject, error) {
return nil, errors.NewNotFound(httpv1alpha1.Resource("httpscaledobject"), name)
})
informer.EXPECT().
Lister().
Return(lister).
AnyTimes()
lister.EXPECT().
HTTPScaledObjects(ns).
Return(namespaceLister).
AnyTimes()
return informer
},
@ -563,7 +587,7 @@ func TestGetMetricSpecTable(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricSpecs))
r.Len(res.MetricSpecs, 1)
spec := res.MetricSpecs[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), spec.MetricName)
r.Equal(int64(1000), spec.TargetSize)
@ -574,11 +598,8 @@ func TestGetMetricSpecTable(t *testing.T) {
},
}
for i, c := range cases {
testName := fmt.Sprintf("test case #%d: %s", i, c.name)
// capture tc in scope so that we can run the below test
// in parallel
testCase := c
for i, tc := range cases {
testName := fmt.Sprintf("test case #%d: %s", i, tc.name)
t.Run(testName, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
@ -586,28 +607,22 @@ func TestGetMetricSpecTable(t *testing.T) {
ctx := context.Background()
t.Parallel()
lggr := logr.Discard()
informer := testCase.newInformer(t, ctrl)
informer := tc.newInformer(t, ctrl)
ticker, pinger, err := newFakeQueuePinger(lggr)
if err != nil {
t.Fatalf(
"error creating new fake queue pinger and related components: %s",
err,
)
t.Fatalf("error creating new fake queue pinger: %s", err)
}
defer ticker.Stop()
hdl := newImpl(
lggr,
pinger,
informer,
testCase.defaultTargetMetric,
)
hdl := newImpl(lggr, pinger, informer, tc.defaultTargetMetric)
scaledObjectRef := externalscaler.ScaledObjectRef{
Namespace: ns,
Name: t.Name(),
ScalerMetadata: testCase.scalerMetadata,
ScalerMetadata: tc.scalerMetadata,
}
ret, err := hdl.GetMetricSpec(ctx, &scaledObjectRef)
testCase.checker(t, ret, err)
res, err := hdl.GetMetricSpec(ctx, &scaledObjectRef)
tc.checker(t, res, err)
})
}
}
@ -704,9 +719,9 @@ func TestGetMetrics(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
r.Len(res.MetricValues, 1)
metricVal := res.MetricValues[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), metricVal.MetricName)
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: validHTTPScaledObjectName}), metricVal.MetricName)
r.Equal(int64(0), metricVal.MetricValue)
},
defaultTargetMetric: int64(200),
@ -726,7 +741,7 @@ func TestGetMetrics(t *testing.T) {
namespacedName := &types.NamespacedName{
Namespace: ns,
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -744,9 +759,9 @@ func TestGetMetrics(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
r.Len(res.MetricValues, 1)
metricVal := res.MetricValues[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), metricVal.MetricName)
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: validHTTPScaledObjectName}), metricVal.MetricName)
r.Equal(int64(201), metricVal.MetricValue)
},
defaultTargetMetric: int64(200),
@ -766,7 +781,7 @@ func TestGetMetrics(t *testing.T) {
namespacedName := &types.NamespacedName{
Namespace: ns,
Name: t.Name(),
Name: validHTTPScaledObjectName,
}
key := namespacedName.String()
@ -784,9 +799,9 @@ func TestGetMetrics(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
r.Len(res.MetricValues, 1)
metricVal := res.MetricValues[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), metricVal.MetricName)
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: validHTTPScaledObjectName}), metricVal.MetricName)
// the value here needs to be the same thing as
// the sum of the values in the fake queue created
// in the setup function
@ -824,7 +839,7 @@ func TestGetMetrics(t *testing.T) {
r := require.New(t)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
r.Len(res.MetricValues, 1)
metricVal := res.MetricValues[0]
r.Equal(MetricName(&types.NamespacedName{Namespace: ns, Name: t.Name()}), metricVal.MetricName)
// the value here needs to be the same thing as

View File

@ -10,6 +10,8 @@ import (
"flag"
"fmt"
"net"
"net/http"
_ "net/http/pprof"
"os"
"time"
@ -47,6 +49,7 @@ func main() {
deplName := cfg.TargetDeployment
targetPortStr := fmt.Sprintf("%d", cfg.TargetPort)
targetPendingRequests := cfg.TargetPendingRequests
profilingAddr := cfg.ProfilingAddr
opts := zap.Options{}
opts.BindFlags(flag.CommandLine)
@ -64,14 +67,7 @@ func main() {
setupLog.Error(err, "creating new Kubernetes ClientSet")
os.Exit(1)
}
pinger := newQueuePinger(
ctrl.Log,
k8s.EndpointsFuncForK8sClientset(k8sCl),
namespace,
svcName,
deplName,
targetPortStr,
)
pinger := newQueuePinger(ctrl.Log, k8s.EndpointsFuncForK8sClientset(k8sCl), namespace, svcName, deplName, targetPortStr)
// create the endpoints informer
endpInformer := k8s.NewInformerBackedEndpointsCache(
@ -120,6 +116,13 @@ func main() {
return nil
})
if len(profilingAddr) > 0 {
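// the blank net/http/pprof import above registers its handlers on
// http.DefaultServeMux, which ListenAndServe(addr, nil) serves; e.g.
// `go tool pprof http://<addr>/debug/pprof/heap` (illustrative usage)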
eg.Go(func() error {
setupLog.Info("enabling pprof for profiling", "address", profilingAddr)
return http.ListenAndServe(profilingAddr, nil)
})
}
eg.Go(func() error {
setupLog.Info("starting the grpc server")
@ -141,15 +144,7 @@ func main() {
setupLog.Info("Bye!")
}
func startGrpcServer(
ctx context.Context,
cfg *config,
lggr logr.Logger,
port int,
pinger *queuePinger,
httpsoInformer informershttpv1alpha1.HTTPScaledObjectInformer,
targetPendingRequests int64,
) error {
func startGrpcServer(ctx context.Context, cfg *config, lggr logr.Logger, port int, pinger *queuePinger, httpsoInformer informershttpv1alpha1.HTTPScaledObjectInformer, targetPendingRequests int64) error {
addr := fmt.Sprintf("0.0.0.0:%d", port)
lggr.Info("starting grpc server", "address", addr)
@ -188,20 +183,9 @@ func startGrpcServer(
}
}()
grpc_health_v1.RegisterHealthServer(
grpcServer,
hs,
)
grpc_health_v1.RegisterHealthServer(grpcServer, hs)
externalscaler.RegisterExternalScalerServer(
grpcServer,
newImpl(
lggr,
pinger,
httpsoInformer,
targetPendingRequests,
),
)
externalscaler.RegisterExternalScalerServer(grpcServer, newImpl(lggr, pinger, httpsoInformer, targetPendingRequests))
go func() {
<-ctx.Done()

View File

@ -54,14 +54,7 @@ type queuePinger struct {
status PingerStatus
}
func newQueuePinger(
lggr logr.Logger,
getEndpointsFn k8s.GetEndpointsFunc,
ns,
svcName,
deplName,
adminPort string,
) *queuePinger {
func newQueuePinger(lggr logr.Logger, getEndpointsFn k8s.GetEndpointsFunc, ns, svcName, deplName, adminPort string) *queuePinger {
pingMut := new(sync.RWMutex)
pinger := &queuePinger{
getEndpointsFn: getEndpointsFn,
@ -77,11 +70,7 @@ func newQueuePinger(
}
// start starts the queuePinger
func (q *queuePinger) start(
ctx context.Context,
ticker *time.Ticker,
endpCache k8s.EndpointsCache,
) error {
func (q *queuePinger) start(ctx context.Context, ticker *time.Ticker, endpCache k8s.EndpointsCache) error {
endpoWatchIface, err := endpCache.Watch(q.interceptorNS, q.interceptorServiceName)
if err != nil {
return err
@ -132,14 +121,7 @@ func (q *queuePinger) counts() map[string]queue.Count {
func (q *queuePinger) fetchAndSaveCounts(ctx context.Context) error {
q.pingMut.Lock()
defer q.pingMut.Unlock()
counts, err := fetchCounts(
ctx,
q.lggr,
q.getEndpointsFn,
q.interceptorNS,
q.interceptorSvcName,
q.adminPort,
)
counts, err := fetchCounts(ctx, q.lggr, q.getEndpointsFn, q.interceptorNS, q.interceptorSvcName, q.adminPort)
if err != nil {
q.lggr.Error(err, "getting request counts")
q.status = PingerERROR
@ -161,23 +143,10 @@ func (q *queuePinger) fetchAndSaveCounts(ctx context.Context) error {
//
// Upon any failure, a non-nil error is returned and the
// other two return values are nil and 0, respectively.
func fetchCounts(
ctx context.Context,
lggr logr.Logger,
endpointsFn k8s.GetEndpointsFunc,
ns,
svcName,
adminPort string,
) (map[string]queue.Count, error) {
func fetchCounts(ctx context.Context, lggr logr.Logger, endpointsFn k8s.GetEndpointsFunc, ns, svcName, adminPort string) (map[string]queue.Count, error) {
lggr = lggr.WithName("queuePinger.requestCounts")
endpointURLs, err := k8s.EndpointsForService(
ctx,
ns,
svcName,
adminPort,
endpointsFn,
)
endpointURLs, err := k8s.EndpointsForService(ctx, ns, svcName, adminPort, endpointsFn)
if err != nil {
return nil, err
}

View File

@ -0,0 +1,330 @@
//go:build e2e
// +build e2e
package interceptor_otel_tracing_test
import (
"encoding/json"
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
"k8s.io/client-go/kubernetes"
. "github.com/kedacore/http-add-on/tests/helper"
)
const (
testName = "interceptor-otel-tracing-test"
)
var (
testNamespace = fmt.Sprintf("%s-ns", testName)
deploymentName = fmt.Sprintf("%s-deployment", testName)
serviceName = fmt.Sprintf("%s-service", testName)
clientName = fmt.Sprintf("%s-client", testName)
httpScaledObjectName = fmt.Sprintf("%s-http-so", testName)
host = testName
minReplicaCount = 0
maxReplicaCount = 1
otelCollectorZipKinURL = "http://zipkin.zipkin:9411/api/v2/traces?serviceName=keda-http-interceptor\\&server.address=interceptor-otel-tracing-test\\&limit=1000"
traces = Trace{}
)
type templateData struct {
TestNamespace string
DeploymentName string
ServiceName string
ClientName string
HTTPScaledObjectName string
Host string
MinReplicas int
MaxReplicas int
}
type Trace [][]struct {
TraceID string `json:"traceId"`
ParentID string `json:"parentId"`
ID string `json:"id"`
Kind string `json:"kind"`
Name string `json:"name"`
Timestamp int `json:"timestamp"`
Duration int `json:"duration"`
LocalEndpoint struct {
ServiceName string `json:"serviceName"`
} `json:"localEndpoint"`
Tags struct {
HTTPFlavor string `json:"http.flavor"`
HTTPMethod string `json:"http.method"`
HTTPResponseContentLength string `json:"http.response_content_length"`
HTTPStatusCode string `json:"http.response.status_code"`
HTTPURL string `json:"http.url"`
HTTPUserAgent string `json:"http.user_agent"`
NetPeerName string `json:"net.peer.name"`
OtelLibraryName string `json:"otel.library.name"`
OtelLibraryVersion string `json:"otel.library.version"`
TelemetrySdkLanguage string `json:"telemetry.sdk.language"`
TelemetrySdkName string `json:"telemetry.sdk.name"`
TelemetrySdkVersion string `json:"telemetry.sdk.version"`
} `json:"tags"`
}
const (
serviceTemplate = `
apiVersion: v1
kind: Service
metadata:
name: {{.ServiceName}}
namespace: {{.TestNamespace}}
labels:
app: {{.DeploymentName}}
spec:
ports:
- port: 8080
targetPort: http
protocol: TCP
name: http
selector:
app: {{.DeploymentName}}
`
deploymentTemplate = `
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{.DeploymentName}}
namespace: {{.TestNamespace}}
labels:
app: {{.DeploymentName}}
spec:
replicas: 0
selector:
matchLabels:
app: {{.DeploymentName}}
template:
metadata:
labels:
app: {{.DeploymentName}}
spec:
containers:
- name: {{.DeploymentName}}
image: registry.k8s.io/e2e-test-images/agnhost:2.45
args:
- netexec
ports:
- name: http
containerPort: 8080
protocol: TCP
readinessProbe:
httpGet:
path: /
port: http
`
loadJobTemplate = `
apiVersion: batch/v1
kind: Job
metadata:
name: generate-request
namespace: {{.TestNamespace}}
spec:
ttlSecondsAfterFinished: 0
template:
spec:
containers:
- name: curl-client
image: curlimages/curl
imagePullPolicy: Always
command: ["curl", "-H", "Host: {{.Host}}", "keda-add-ons-http-interceptor-proxy.keda:8080"]
restartPolicy: Never
activeDeadlineSeconds: 600
backoffLimit: 5
`
httpScaledObjectTemplate = `
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
name: {{.HTTPScaledObjectName}}
namespace: {{.TestNamespace}}
spec:
hosts:
- {{.Host}}
targetPendingRequests: 100
scaledownPeriod: 10
scaleTargetRef:
name: {{.DeploymentName}}
service: {{.ServiceName}}
port: 8080
replicas:
min: {{ .MinReplicas }}
max: {{ .MaxReplicas }}
`
clientTemplate = `
apiVersion: v1
kind: Pod
metadata:
name: {{.ClientName}}
namespace: {{.TestNamespace}}
spec:
containers:
- name: {{.ClientName}}
image: curlimages/curl
command:
- sh
- -c
- "exec tail -f /dev/null"`
zipkinTemplate = `
apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: null
name: zipkin
spec: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: zipkin
name: zipkin
namespace: zipkin
spec:
replicas: 1
selector:
matchLabels:
app: zipkin
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: zipkin
spec:
containers:
- image: openzipkin/zipkin
name: zipkin
env:
- name: "JAVA_OPTS"
value: "-Xmx500M"
resources:
limits:
memory: "700M"
requests:
memory: "500M"
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: zipkin
name: zipkin
namespace: zipkin
spec:
ports:
- port: 9411
protocol: TCP
targetPort: 9411
selector:
app: zipkin
type: ClusterIP
status:
loadBalancer: {}
`
)
func TestTraceGeneration(t *testing.T) {
// setup
t.Log("--- setting up ---")
// Create kubernetes resources
kc := GetKubernetesClient(t)
data, templates := getTemplateData()
CreateKubernetesResources(t, kc, testNamespace, data, templates)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", minReplicaCount)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, "zipkin", "zipkin", 1, 12, 10),
"zipkin replica count should be %d after 1 minutes", 1)
time.Sleep(5 * time.Second)
// Send a test request to the interceptor
sendLoad(t, kc, data)
// sleep for 15 seconds so the traces are exported to the collector
time.Sleep(15 * time.Second)
// Fetch traces and validate them
traces = fetchAndParseZipkinTraces(t, fmt.Sprintf("curl %s", otelCollectorZipKinURL))
assert.GreaterOrEqual(t, len(traces), 1)
traceStatus := getTracesStatus(traces)
assert.EqualValues(t, "200", traceStatus)
// cleanup
DeleteKubernetesResources(t, testNamespace, data, templates)
}
func sendLoad(t *testing.T, kc *kubernetes.Clientset, data templateData) {
t.Log("--- sending load ---")
KubectlApplyWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, maxReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", maxReplicaCount)
}
func fetchAndParseZipkinTraces(t *testing.T, cmd string) Trace {
out, errOut, err := ExecCommandOnSpecificPod(t, clientName, testNamespace, cmd)
assert.NoErrorf(t, err, "cannot execute command - %s", err)
assert.Empty(t, errOut, "cannot execute command - %s", errOut)
var traces Trace
e := json.Unmarshal([]byte(out), &traces)
if e != nil {
assert.NoErrorf(t, err, "JSON decode error! - %s", e)
return nil
}
return traces
}
func getTracesStatus(traces Trace) string {
for _, t := range traces {
for _, t1 := range t {
if t1.Kind == "CLIENT" {
return t1.Tags.HTTPStatusCode
}
}
}
return ""
}
func getTemplateData() (templateData, []Template) {
return templateData{
TestNamespace: testNamespace,
DeploymentName: deploymentName,
ServiceName: serviceName,
ClientName: clientName,
HTTPScaledObjectName: httpScaledObjectName,
Host: host,
MinReplicas: minReplicaCount,
MaxReplicas: maxReplicaCount,
}, []Template{
{Name: "zipkinTemplate", Config: zipkinTemplate},
{Name: "deploymentTemplate", Config: deploymentTemplate},
{Name: "serviceNameTemplate", Config: serviceTemplate},
{Name: "clientTemplate", Config: clientTemplate},
{Name: "httpScaledObjectTemplate", Config: httpScaledObjectTemplate},
}
}

View File

@ -0,0 +1,272 @@
//go:build e2e
// +build e2e
package interceptor_timeouts_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/client-go/kubernetes"
. "github.com/kedacore/http-add-on/tests/helper"
)
const (
testName = "interceptor-timeouts-test"
)
var (
testNamespace = fmt.Sprintf("%s-ns", testName)
deploymentName = fmt.Sprintf("%s-deployment", testName)
serviceName = fmt.Sprintf("%s-service", testName)
httpScaledObjectName = fmt.Sprintf("%s-http-so", testName)
host = testName
minReplicaCount = 0
maxReplicaCount = 1
requestJobName = fmt.Sprintf("%s-request", testName)
responseDelay = "0"
)
type templateData struct {
TestNamespace string
DeploymentName string
ServiceName string
HTTPScaledObjectName string
ResponseHeaderTimeout string
Host string
MinReplicas int
MaxReplicas int
RequestJobName string
ResponseDelay string
}
const (
serviceTemplate = `
apiVersion: v1
kind: Service
metadata:
name: {{.ServiceName}}
namespace: {{.TestNamespace}}
labels:
app: {{.DeploymentName}}
spec:
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: {{.DeploymentName}}
`
deploymentTemplate = `
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{.DeploymentName}}
namespace: {{.TestNamespace}}
labels:
app: {{.DeploymentName}}
spec:
replicas: 0
selector:
matchLabels:
app: {{.DeploymentName}}
template:
metadata:
labels:
app: {{.DeploymentName}}
spec:
containers:
- name: {{.DeploymentName}}
image: stefanprodan/podinfo:latest
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: http
`
loadJobTemplate = `
apiVersion: batch/v1
kind: Job
metadata:
name: {{.RequestJobName}}
namespace: {{.TestNamespace}}
spec:
template:
spec:
containers:
- name: curl-client
image: curlimages/curl
imagePullPolicy: Always
command: ["curl", "-f", "-H", "Host: {{.Host}}", "keda-add-ons-http-interceptor-proxy.keda:8080/delay/{{.ResponseDelay}}"]
restartPolicy: Never
activeDeadlineSeconds: 600
backoffLimit: 2
`
httpScaledObjectWithoutTimeoutsTemplate = `
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
name: {{.HTTPScaledObjectName}}
namespace: {{.TestNamespace}}
spec:
hosts:
- {{.Host}}
targetPendingRequests: 100
scaledownPeriod: 10
scaleTargetRef:
name: {{.DeploymentName}}
service: {{.ServiceName}}
port: 9898
replicas:
min: {{ .MinReplicas }}
max: {{ .MaxReplicas }}
`
httpScaledObjectWithTimeoutsTemplate = `
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
name: {{.HTTPScaledObjectName}}
namespace: {{.TestNamespace}}
spec:
hosts:
- {{.Host}}
targetPendingRequests: 100
scaledownPeriod: 10
scaleTargetRef:
name: {{.DeploymentName}}
service: {{.ServiceName}}
port: 9898
replicas:
min: {{ .MinReplicas }}
max: {{ .MaxReplicas }}
timeouts:
responseHeader: "{{ .ResponseHeaderTimeout }}s"
`
)
func TestCheck(t *testing.T) {
// setup
t.Log("--- setting up ---")
// Create kubernetes resources
kc := GetKubernetesClient(t)
data, templates := getTemplateData()
CreateKubernetesResources(t, kc, testNamespace, data, templates)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", minReplicaCount)
testDefaultTimeouts(t, kc, data)
testCustomTimeouts(t, kc, data)
// cleanup
DeleteKubernetesResources(t, testNamespace, data, templates)
}
func testDefaultTimeouts(t *testing.T, kc *kubernetes.Clientset, data templateData) {
KubectlApplyWithTemplate(t, data, "httpScaledObjectTemplate", httpScaledObjectWithoutTimeoutsTemplate)
testDefaultTimeoutPasses(t, kc, data)
testDefaultTimeoutFails(t, kc, data)
KubectlDeleteWithTemplate(t, data, "httpScaledObjectTemplate", httpScaledObjectWithoutTimeoutsTemplate)
}
func testDefaultTimeoutPasses(t *testing.T, kc *kubernetes.Clientset, data templateData) {
t.Log("--- testing default timeout passes ---")
KubectlApplyWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, maxReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", maxReplicaCount)
assert.True(t, WaitForJobSuccess(t, kc, requestJobName, testNamespace, 1, 1), "request should succeed")
KubectlDeleteWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 12, 10),
"replica count should be %d after 2 minutes", minReplicaCount)
}
func testDefaultTimeoutFails(t *testing.T, kc *kubernetes.Clientset, data templateData) {
t.Log("--- testing default timeout fails ---")
data.ResponseDelay = "2"
KubectlApplyWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, maxReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", maxReplicaCount)
assert.False(t, WaitForJobSuccess(t, kc, requestJobName, testNamespace, 1, 1), "request should fail")
KubectlDeleteWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 12, 10),
"replica count should be %d after 2 minutes", minReplicaCount)
}
func testCustomTimeouts(t *testing.T, kc *kubernetes.Clientset, data templateData) {
data.ResponseHeaderTimeout = "5"
KubectlApplyWithTemplate(t, data, "httpScaledObjectTemplate", httpScaledObjectWithTimeoutsTemplate)
testCustomTimeoutPasses(t, kc, data)
testCustomTimeoutFails(t, kc, data)
KubectlDeleteWithTemplate(t, data, "httpScaledObjectTemplate", httpScaledObjectWithTimeoutsTemplate)
}
func testCustomTimeoutPasses(t *testing.T, kc *kubernetes.Clientset, data templateData) {
t.Log("--- testing custom timeout passes ---")
data.ResponseDelay = "2"
KubectlApplyWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, maxReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", maxReplicaCount)
assert.True(t, WaitForJobSuccess(t, kc, requestJobName, testNamespace, 1, 1), "request should succeed")
KubectlDeleteWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 12, 10),
"replica count should be %d after 2 minutes", minReplicaCount)
}
func testCustomTimeoutFails(t *testing.T, kc *kubernetes.Clientset, data templateData) {
t.Log("--- testing custom timeout fails ---")
data.ResponseDelay = "7"
KubectlApplyWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, maxReplicaCount, 6, 10),
"replica count should be %d after 1 minutes", maxReplicaCount)
assert.False(t, WaitForJobSuccess(t, kc, requestJobName, testNamespace, 1, 1), "request should fail")
KubectlDeleteWithTemplate(t, data, "loadJobTemplate", loadJobTemplate)
assert.True(t, WaitForDeploymentReplicaReadyCount(t, kc, deploymentName, testNamespace, minReplicaCount, 12, 10),
"replica count should be %d after 2 minutes", minReplicaCount)
}
func getTemplateData() (templateData, []Template) {
return templateData{
TestNamespace: testNamespace,
DeploymentName: deploymentName,
ServiceName: serviceName,
HTTPScaledObjectName: httpScaledObjectName,
Host: host,
MinReplicas: minReplicaCount,
MaxReplicas: maxReplicaCount,
RequestJobName: requestJobName,
ResponseDelay: responseDelay,
}, []Template{
{Name: "deploymentTemplate", Config: deploymentTemplate},
{Name: "serviceNameTemplate", Config: serviceTemplate},
}
}

View File

@ -18,20 +18,26 @@ import (
var (
OtlpConfig = `mode: deployment
image:
repository: "otel/opentelemetry-collector-contrib"
repository: "ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib"
config:
exporters:
debug:
verbosity: basic
prometheus:
endpoint: 0.0.0.0:8889
zipkin:
endpoint: http://zipkin.zipkin:9411/api/v2/spans
receivers:
jaeger: null
prometheus: null
zipkin: null
service:
pipelines:
traces: null
traces:
receivers:
- otlp
exporters:
- zipkin
metrics:
receivers:
- otlp
@ -170,7 +176,7 @@ func TestSetupIngress(t *testing.T) {
_, err = ExecuteCommand("helm repo update ingress-nginx")
require.NoErrorf(t, err, "cannot update ingress-nginx helm repo - %s", err)
_, err = ExecuteCommand(fmt.Sprintf("helm upgrade --install %s ingress-nginx/ingress-nginx --set fullnameOverride=%s --set controller.service.type=ClusterIP --namespace %s --wait",
_, err = ExecuteCommand(fmt.Sprintf("helm upgrade --install %s ingress-nginx/ingress-nginx --set fullnameOverride=%s --set controller.service.type=ClusterIP --set controller.progressDeadlineSeconds=30 --namespace %s --wait",
IngressReleaseName, IngressReleaseName, IngressNamespace))
require.NoErrorf(t, err, "cannot install ingress - %s", err)
}